- Take the worksheet “The shape of a riverbed” and have a discussion with students about the living conditions in a river:
- What form of riverbed is characteristic of each river section?
- What are the characteristics of the habitats in each section of a river?
- Use the worksheet “Life in a river” and look at the pictures. Explain to students what the most important living conditions in each section of a river are. Ask students to use the worksheet “Adaptation to the river conditions” and make notes.
- In the upper section of a river, life is mainly connected to the bottom of the riverbed, where the water current is slowest. Animals have specific adaptations of their body parts that let them hold on to the bottom (some nymphs develop hooks at the tips of their legs), or they adopt certain types of behavior (some larvae cover their bodies with small stones; some fish stay behind large stones and dart into the current only for food).
- In the middle section, where the water current is slower, the pebbles are covered by algae that create a good living environment for various animals, and plankton develops.
- In the lower section, where the speed of water current is very slow, plants and animals inhabit the whole water space (floating and hovering creatures).
- Go with students to the nearest stream and observe what lives in it.
|
Genetic drift occurs when the genetic state of a population (usually represented in a theoretical model by the frequency of one or more alleles) changes randomly from one generation to the next because of sampling error. (A more detailed discussion is presented in the node genetic drift.) There is some probability that the frequency of a deleterious mutation will increase above the equilibrium level imposed by selection and mutation (the mutation-selection balance), a level that is only consistently realised in a population of infinite size, i.e. one that does not experience genetic drift (see mutation load). When the drift of deleterious mutations causes a reduction in the mean fitness of a population on average, there is a drift load.
The drift load can be partitioned into being segregational or fixed. If the population remains polymorphic, such that fitter alleles coexist with the deleterious mutant allele, then the load is segregational. At a particular locus, the reduction of mean fitness is a transitory state; put simply, the frequency of the mutant allele may later drift low enough to restore and even increase the contribution of that locus to mean fitness. On the other hand, if the mutant allele has fixed in the population, then the genetic state of that locus is rendered constant until another mutation arises.
Because of its non-transitory nature, it is the fixed drift load that can cause a cumulative decline in mean fitness. This is often referred to as a mutational meltdown process.
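To make the sampling-error mechanism concrete, here is a minimal Wright-Fisher-style sketch in Python. It is not from the source: the population size, selection coefficient, and mutation rate are invented for illustration, and genic selection is assumed, under which the deterministic mutation-selection balance is approximately u/s.

```python
import random

# Minimal Wright-Fisher sketch: a deleterious allele with selection
# coefficient s is resampled each generation in a finite population,
# so its frequency drifts above or below the deterministic
# mutation-selection balance (about u/s under genic selection) and
# may, by chance, fix -- contributing to the fixed drift load.

def wright_fisher(n_individuals=100, s=0.01, u=1e-4, generations=2000, seed=1):
    random.seed(seed)
    two_n = 2 * n_individuals           # gene copies in a diploid population
    q = u / s                           # start at the mutation-selection balance
    for _ in range(generations):
        # Selection: mutant copies contribute with relative fitness 1 - s
        q_sel = q * (1 - s) / (1 - s * q)
        # Mutation: wild-type copies mutate to the deleterious allele at rate u
        q_mut = q_sel + u * (1 - q_sel)
        # Drift: binomial sampling of 2N gene copies for the next generation
        copies = sum(1 for _ in range(two_n) if random.random() < q_mut)
        q = copies / two_n
        if q in (0.0, 1.0):             # lost, or fixed (fixed drift load)
            break
    return q

print(wright_fisher())
```

Running this with different seeds shows the transitory (segregational) fluctuations described above, and occasionally the fixation that makes the load permanent.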
|
When an organism grows, stem cells specialize and take on specific functions. For instance, mature tissues like skin, muscle, blood, bone, liver, and nerves all contain different types of cells. Because stem cells are not yet differentiated, they can change to become many kinds of specialized cells. Organisms also use stem cells to replace damaged cells.
The two broad types of mammalian stem cells are embryonic stem cells and adult stem cells, which are found in adult tissues. In a developing embryo, stem cells can differentiate into all of the specialised embryonic tissues. In adult organisms, stem cells act as a repair system for the body, replenishing specialized cells, but they also maintain the normal turnover of blood, skin, and intestinal tissues.
Stem cells can be grown in tissue culture. In culture, they can be transformed into specialised cells, such as those of muscles or nerves. Highly plastic adult stem cells can be taken from a variety of sources, including umbilical cord blood and bone marrow. They are now used in medical therapies, and researchers expect that stem cells will be used in many future therapies.
Embryonic stem cells
Embryonic stem cells (ES cells) are stem cells taken from the inner cell mass of the early stage embryo called a blastocyst. Human embryos reach the blastocyst stage 4-5 days after fertilization. At that time, they are made up of between 50 and 150 cells.
The stem cells' state, and what the daughter cells turn into, is influenced by signals from other cells in the embryo.
Adult stem cells
Adult stem cells exist throughout the body after embryonic development has completed. They are found inside different types of tissue and remain in a non-dividing state until activated by disease or tissue injury. They are more limited than embryonic stem cells: they generally divide and specialise only into the cell types of the tissue in which they reside. An advantage of adult stem cells is that, when taken from the patient's own body, they carry a lower probability of rejection. A disadvantage is their limited availability.
Plant stem cells
Plant stem cells are unspecialised and can develop into any type of plant cell. They become specialised into the cells of roots, leaves or flowers. The stem cells are found in meristems, at the root and stem apices, and they make continued growth possible.
New plants can be grown from cuttings (and are therefore clones of the same plant). The cuttings are dipped in hormone rooting powder to develop bigger root systems. Their stem cells turn into tissues (e.g. xylem and phloem) and organs (e.g. roots, leaves and flowers), producing a completely new plant.
Cloning plants is a cheap way of producing a new plant.
Stem cells in medicine
Some cancers may be treated by stem cells. Leukemia, a cancer of white blood cells (WBC) is an example.
There are two stages to this process:
- The patient's diseased bone marrow, including the cancerous white blood cells, is destroyed using radiotherapy and/or chemotherapy.
- Healthy stem cells are then transplanted into the patient, where they re-establish normal blood cell production.
|
Breast cancer is one of the most common cancers in women in the United States. It’s the second most common cause of cancer death in women, and the main cause in women 45 to 55 years old. The death rate has decreased in the past 20 years, partly because better screening catches the disease earlier, so chances of recovery and cure are higher.
Early breast cancer usually causes no pain and usually no other symptoms.
As a breast tumor grows, certain changes may occur. These include a lump or thickening (a mass, swelling, skin irritation, or distortion) in or near the breast or under the arm, and changes in breast size or shape. The color or feel of the skin of the breast, areola, or nipple can change (becoming dimpled, puckered, or scaly). Women can also have discharge, erosion, inversion, or tenderness of the nipples.
Many women (or their doctors during a physical exam) feel a lump or find breast changes. An abnormal mammogram can suggest breast cancer. Some women at high risk of breast cancer may get magnetic resonance imaging (MRI) for screening.
A lump shouldn’t be ignored. Mammograms may not show breast cancers in nearly 20% of cases.
When cancer is suspected, the doctor will remove (biopsy) a small piece of the abnormal area for study.
The doctor must stage the cancer. Staging determines how far it has spread, to decide on treatment and prognosis. The stage of the cancer is based on tumor size, whether skin, chest wall, and local lymph nodes (glands) are involved, and whether cancer has spread to other organs (metastasis). The biology of the cancer is based on the look of the cancer under the microscope and the tumor’s protein and gene markers.
These things help doctors choose the treatment: surgery, radiotherapy, and medical therapies, such as antiestrogens, chemotherapy, or other biological or targeted therapies.
Most women have surgery to remove the cancer, such as lumpectomy (removing only the lump). Operations to remove the breast, part or whole, include partial (segmental), modified radical, radical, or total (simple) mastectomies. Lymph nodes and muscles may be removed. Radiation therapy, chemotherapy, or hormone therapy can be used before or after surgery. Breasts can be rebuilt after surgery.
Radiotherapy uses high-energy x-rays to kill cancer cells. Radioactive substances in needles, devices known as "seeds," wires, or catheters can be put into or near the cancer.
Chemotherapy uses drugs to kill cancer cells or stop them from dividing. Drugs can be taken by mouth, injected into veins or muscles, or placed near cancer cells.
Hormone therapy (tamoxifen and aromatase inhibitors) stops hormones, especially estrogen, from helping cancer cells grow. Biological therapy works by using the body’s immune (infection-fighting) system to kill cancer cells.
|
Scientists crack egg mystery, finding dark coloured bird eggs have an advantage in colder areas as they absorb more heat.
Why This Matters: Sometimes the reason is straightforward.
An egg’s ability to maintain temperature within strict limits is critical to the survival of a developing bird embryo, but the role that eggshell colour plays in maintaining thermal balance has been a long-standing question.
Now, a study published in Nature Ecology & Evolution has found that birds living in cold climates and with open nests tend to have eggs with darker shells. The darker pigmentation allows the egg to maintain its internal temperature for longer when exposed to the sun, the research suggests.
Bird eggs come in a breathtaking range of colours and patterns, but the major drivers of this variation are unclear.
Darker bird eggs absorb more heat in cold regions
Darker pigments absorb more heat than lighter pigments, so it follows that darker shells may be better in cold regions. But darker pigments also filter harmful ultraviolet radiation, which is stronger in warmer regions.
Similarly, darker pigments have stronger anti-microbial properties, which would point towards suitability in warmer, humid areas. However, lighter pigments are more obvious to predators, which tend to be more abundant in hot regions.
The research was led by Phillip Wisocki from Long Island University Post in the US, and involved Phillip Cassey from the University of Adelaide. The team sourced the eggs of 634 different bird species from natural history museum collections around the world.
They charted global patterns in eggshell colouration by measuring the brightness and colour of the eggs, then mapped those patterns onto each species’ geographic breeding range.
They found that eggs are significantly darker when both temperature and solar radiation are low, and where nesting takes place on open ground, rather than in cavities or cup-like nests.
The need for thermal control is the main determining factor of colour
The researchers then exposed chicken, duck and quail eggs of varying colours and brightness to solar radiation. They found that darker eggs were able to maintain their incubation temperatures for longer than lighter-coloured eggs.
Taken together, these findings strongly suggest that thermal regulation may be the main factor determining an eggshell’s colour.
And that shouldn’t be the end of the story, the researchers suggest.
“Our study provides insight into the environmental pressures that have shaped the global distribution of colours and should serve as a call to reinvigorate this line of research and investigate patterns of colour expression in other organisms,” they write.
|
Explore the American Revolution using this rich collection of sources and documents that tell the history of a new nation. Sources in U.S. History Online: The American Revolution is a digital archive documenting the revolution and war that created the United States of America, from the Paris peace treaty of 1763 through the early protests of 1765 to the Paris peace treaty of 1783. The collection examines the political, social, and intellectual upheaval of the age, as well as the actual war for American independence through its eight long years of conflict. A wealth of material from the European point of view is included.
The archive tells the whole story of the American Revolution -- the experiences of commanders and common soldiers, women and slaves, American Indians and Loyalists are all recorded. A variety of primary source documents -- personal narratives and memoirs, political pamphlets and speeches, sermons and poems, legislative journals and popular magazines, maps and more -- cover the diversity of:
- Battles -- from the Battle of Bunker Hill to the siege of Yorktown
- Individuals -- from John Adams to Edmund Burke
- Organizations -- from the American Philosophical Society to the Whig Party
- Perspectives -- from the American loyalists to patriot preachers
- Places -- from Falmouth, England, to Fort Ticonderoga, New York
- Topics -- from agriculture to valor
- And more
Sources in U.S. History Online: The American Revolution allows researchers to examine economics, international relations, religion, and science as well as the strategies and battlefield realities of combatants on both sides of the conflict. The archive provides a rich sense of the causes and consequences of one of the great turning points in history.
Documents in Sources in U.S. History Online: The American Revolution were drawn from other Gale sources -- including the Lost Cause Press, Eighteenth Century Collections Online, The Making of the Modern World, The Making of Modern Law: Legal Treatises, and Sabin Americana, 1500-1926 -- under the editorial supervision of legal scholar Katherine A. Hermes, professor of history at Connecticut State University.
|
Antimicrobial use and antimicrobial resistance are complex issues, and there's a lot of confusion and misinformation in the media and on the Internet. These FAQs clear up some of the confusion and provide you with science-based information to help you make educated decisions about antibiotics and other antimicrobial drugs.
Q: What are microorganisms?
A: Microorganisms are living organisms that are too small to be seen individually by the naked eye. They include bacteria, viruses, protozoa, and some fungi and algae. You might see with your naked eye a group of these organisms – such as a mold growth on bread – but you need a microscope to see individual microorganisms.
Q: What are antimicrobials?
A: Antimicrobials [anti (against) + mikros (little) + bios (life)] are products that kill microorganisms or keep them from multiplying (reproducing) or growing. They can be either naturally occurring or synthetic (manmade) and are most commonly used to prevent, control, or treat diseases and infections caused by microorganisms. Various groups of antimicrobials kill different types of microorganisms: bacteria (antibacterial), fungi (antifungal), viruses (antiviral), or protozoa (antiprotozoal).
Q: What's the difference between an antibiotic and an antimicrobial?
A: Antibiotics [anti (against) + bios (life)] are antimicrobials that can kill bacteria or inhibit their growth or reproduction. All antibiotics are antimicrobials, but not all antimicrobials are antibiotics.
Penicillin is a classic example of an antibiotic: it is produced by Penicillium fungi and has the ability to kill a variety of bacteria. Therefore, it is an effective antibiotic when used appropriately to treat infections susceptible to it.
Q: What does "susceptible" mean when it comes to antimicrobials?
A: The term "susceptible" simply means that a microorganism is capable of being affected by the antimicrobial. For example, if a type of streptococcus bacteria is said to be susceptible to penicillin, it means that penicillin can either kill the bacteria, or slow or stop their growth.
Q: Are the antimicrobials used in animals the same ones used in people?
A: Antimicrobials and antibiotics are grouped in "classes" based on how they affect the bacteria, viruses and fungi they're used to combat. The vast majority of antibiotic classes are used in both people and animals. Only a few classes are specific to either human medicine or veterinary medicine. However, there are classes of antibiotics that are considered "medically important to human medicine."
Many antimicrobials used in human medicine are not approved for use in animals or are, quite simply, too expensive to use in animals.
Regarding the antimicrobials used in food production, some of those also are used in people, and some are not. Strict federal regulations govern the use of antimicrobials in food-producing animals, including the specific antimicrobials that can be used. The U.S. Food and Drug Administration (FDA) is responsible for approving antimicrobials and other medications for use in animals, including antimicrobials that may be added to the feed of food-producing animals. In addition, several states have enacted laws which limit how certain antimicrobials may be used and/or have a reporting requirement of veterinarians.
Q: What is antimicrobial resistance?
A: Antimicrobial resistance (including antibiotic resistance) occurs when a microorganism develops the ability to resist the action of an antimicrobial that previously affected it. Basically, the microorganism develops the ability to survive and reproduce despite the presence (and dose) of the antimicrobial.
"Resistance" can occur only in an organism that used to be susceptible to an antimicrobial's effects but now is not. The term doesn't apply to an organism that was never susceptible to that antimicrobial.
How resistance develops is a very complex process, and we don't really know all of the factors or events that can make it happen. We do know that an organism can undergo a change in its DNA that makes it resistant to one or more antimicrobials, and this change might be passed on to its offspring or transferred to another organism. The DNA change might just be a natural mutation, or it might be in response to something else, such as the use of antimicrobials.
Q: What causes antimicrobial resistance?
A: Current science can't really prove what causes all of the different types of antimicrobial resistance that create public health risks.
Antimicrobial resistance can be caused by "selection pressure." Regardless how effective an antimicrobial might be, rarely—if ever—will 100% of the organisms be killed during a course of treatment. This means that at least one organism out of thousands may have developed resistance to the antimicrobial. The few surviving and potentially resistant organisms could then transfer their genetic material to offspring or even to other unrelated organisms.
There are also some who say that antimicrobial resistance can be caused by widespread use of antimicrobials in animals. Their argument is that the more antimicrobials are used in animals, the more we expose the organisms to the antimicrobials and give them the opportunity to develop resistance. Although that may be true in a very simplified, general sense, the scientific evidence showing how, if or to what extent such exposure affects human health remains unclear.
The assumption that simply giving antimicrobials to a larger number of animals creates a public health hazard due to resistance isn't accurate, because it doesn't account for the benefits of preventing disease and the need for higher doses and potentially stronger types of antimicrobials if an animal is sick. A part of veterinary medical education is understanding how antimicrobials affect microorganisms and how they can be used responsibly to protect human and animal health.
Q: Is all antimicrobial resistance a threat to public health?
A: Antimicrobial resistance is only a threat to public health when humans are infected with a resistant organism that is difficult or impossible to treat. This is an issue seen more frequently with human pathogens transmitted between humans – such as extremely drug-resistant tuberculosis (XDRTB) and MRSA. While outbreaks of resistant foodborne pathogens have been reported, very few have been epidemiologically traced back to the farm. Even fewer have been traced to a specific antimicrobial use.
Q: How are antimicrobials used in animals?
A: Antimicrobials are generally used to prevent, control, or treat infection in animals much like they are used in human medicine. For example, a physician or veterinarian might administer or prescribe an antimicrobial to treat skin, bone, or systemic infections. They also might be used before surgery to prevent postoperative infection.
In research settings, antimicrobials are used to control or treat disease, just as they are in other animal populations. Antimicrobials are sometimes used in other ways unique to research, such as to create a model of disease for research purposes or to study how diseases develop. Research is also conducted on antimicrobials themselves to establish their pharmacologic activity or efficacy.
In food production systems, healthy animals make healthy food, and veterinarians are on the frontlines in keeping our nation's food supply safe. Advances in animal health care and management have greatly improved food safety over the years and have reduced the need for antimicrobials in food production. However, antimicrobials are an important part of the veterinarian's toolkit, and veterinarians agree that they should be used judiciously and in the best interest of animal health and public health.
The U.S. Food and Drug Administration (FDA) approves the use of antimicrobials for four purposes:
Preventing disease: There is a known disease risk present, and antibiotics are administered to prevent infection of animals.
Controlling disease: Disease is present in a percentage of a herd or flock, and antibiotics are administered to decrease the spread of disease in the flock/herd while clinically ill animals are treated.
Treating disease: Antibiotics are administered to treat sick animals.
Promoting growth / feed efficiency: Only antimicrobials that are not considered important to human health can be used in food animals for this purpose. Since the Veterinary Feed Directive took effect on Jan. 1, 2017, medically important antimicrobials cannot be used for growth promotion or feed efficiency.
Q: How do veterinarians decide what antimicrobials to use?
A: When antimicrobials are needed to treat an animal, veterinarians base their choices on many factors, including:
- Type of infection
- The organism causing the infection and its susceptibility to the antimicrobial
- How the antimicrobial is administered (for example, whether it's given orally or by injection) and how that will be tolerated by the animal
- Whether or not it is approved for use in that animal species
- Risk of side effects
Q: Where do food producers get antimicrobials for animals?
A: Producers obtain antimicrobials from their veterinarian with a prescription. In addition, animal feed can be formulated with an antimicrobial when there is a Veterinary Feed Directive (VFD) directing the feed mill to add a specific antimicrobial to the feed at a specific dose. VFDs require the existence of a veterinarian-client-patient relationship (VCPR). Simply put, a VCPR is established when a veterinarian examines an animal patient (or flock or herd), and there is an agreement between the client and the veterinarian that the veterinarian will provide medical care for the animal(s).
There also are a few antimicrobials that are available over the counter for food animals. Like other over-the-counter drugs, they can be used only according to the instructions on the label. Some states are going to bar the sale of these over-the-counter antimicrobials; California banned these starting in 2018. The FDA has announced these over-the-counter drugs will soon require a prescription, as detailed in the FDA's five-year plan, Supporting Antimicrobial Stewardship in Veterinary Settings Goals for Fiscal Years 2019 – 2023.
Q: How does antimicrobial use in animals differ from that in humans?
A: In human medicine, antimicrobials are approved for disease treatment and prevention, and physicians can prescribe and use antimicrobials without restrictions as to dose and duration of treatment. In veterinary medicine, antimicrobials used in food-producing animals are approved for disease treatment, control, and prevention. (See "How are antimicrobials used in animals?" for descriptions of these uses).
Antimicrobials, like all other drugs given to food animals, must be used according to approved label directions or according to federal regulations. In fact, many of the drugs shared by both human and veterinary medicine are restricted to a very specific veterinary use, dose, and duration, and can be administered to animals only by a veterinarian.
Q: How can an antimicrobial be a "growth promoter?"
A: Antimicrobials can change the balance of bacteria in an animal's intestine in such a way that it makes it easier for the animal to absorb nutrients. Medically important antimicrobials can no longer be used for growth promotion and feed efficiency as of January 1, 2017 when the Veterinary Feed Directive went into effect.
Q: I've heard the phrase "nontherapeutic use of antimicrobials"—what does that mean?
A: "Nontherapeutic" is a term that is used inappropriately by some groups to describe the use of antimicrobials in animals for disease prevention and other purposes. These groups feel that antimicrobials should only be used when animals show clinical signs of a disease. Neither the FDA nor the AVMA uses this term.
Q: How frequently are antimicrobials used in food production?
A: Lots of numbers have been thrown out there by various groups and the media, but the reality is that no one really knows. There is no mechanism to track the frequency of antimicrobial use in food production. What's more important are:
- judicious use of antimicrobials
- determining if the use of a specific drug is causing an impact on the development of resistance that is significant to animal and/or human health
Q: What's the bigger risk for causing antimicrobial resistance—antimicrobial use in humans or use in livestock?
A: This is a matter of debate. The simple truth is that no one really knows. It's common sense to think that both uses might contribute to the formation of resistance in some way, but risk assessments have shown that the use of antimicrobials in food production systems plays an extremely small role.1
No matter how small the risk, the AVMA wants to assure the judicious use of antimicrobials, which is why we support limiting their use to activities that prevent, control, or treat disease.
Q: Should I be concerned about antimicrobial resistance?
A: Of course you should—we should all be concerned about antimicrobial resistance.
The connection between specific antimicrobial uses in food animals, and foodborne or other human disease, remains unclear. Based on studies to date, the risk to people of becoming infected with resistant organisms by consuming animal products (meat, milk, eggs) is extremely low.
Veterinarians are concerned about the development of antimicrobial resistance in organisms that infect animals because it may compromise the effectiveness of antimicrobial therapy for animal diseases and make them harder to treat. Antimicrobials are needed for the relief of pain and suffering caused by bacterial diseases in animals as well as in people.
Q: Why can't we just stop using antimicrobials in food-producing animals?
A: Eliminating antimicrobial use in food-producing animals, or even placing more stringent restrictions on their use, would remove a valuable tool in the veterinarian's kit for preventing and reducing animal disease and suffering. Healthy animals mean healthy food products, and antimicrobials help veterinarians keep animals healthy.
Veterinarians and the AVMA support the judicious use of antimicrobials. What does this mean? It means that anyone using antimicrobials—whether in people, animals or the environment—should use good judgment and base this decision on maximizing good outcomes and minimizing the risk of resistance. If scientific research and risk-based assessments demonstrate that the use of an antimicrobial poses significant public health risks, we support the restriction or removal of its use. The FDA can remove a product or place additional restrictions on its use in animals if the product poses a public health risk. In 2005, the FDA did just that—announcing that the antimicrobial enrofloxacin could no longer be used in poultry because of an increased risk to public health. To date, there has not been any proof that currently approved antimicrobials pose a special public health risk.
The continued availability of safe, effective antimicrobials for veterinary medicine, including the retention of currently approved drugs and future approvals of new drugs, is critical to maintaining a safe food supply as well as preserving animal health and welfare.
Q: You say that banning antimicrobials could have negative effects on animal welfare. Why?
A: Animal welfare means the physical and mental state of an animal in relation to the conditions in which it lives and dies. One part of meeting an animal's primary welfare needs is to provide freedom from pain, injury and disease. Banning or severely restricting the use of antimicrobials in animals may reduce the veterinarian's ability to protect animal health and prevent suffering from disease, which can lead to poor welfare.
Q: You say that banning antimicrobials could have negative effects on the safety of our food. Why?
A: Healthy animals provide healthy food. Banning or severely restricting antimicrobial use limits veterinarians' ability to treat, prevent or control animal diseases. Treating, preventing, and controlling disease in food animals ensures that we have healthy animals entering the food supply, so the veterinarians who care for food animals need access to antimicrobials as part of their toolkit to combat disease. This is necessary to protect public health and is a judicious use of antimicrobials. Allowing the judicious use of antimicrobials to treat, prevent, and control disease in food animals lowers the risk of unhealthy animals entering our food supply.
Q: What are veterinarians doing to prevent antimicrobial resistance?
A: Veterinarians use pharmaceuticals, including antimicrobial agents, judiciously. It is important to recognize that veterinarians are trained professionals who know when antimicrobials are needed in animals and when they are not. We also work with food producers to keep the animals healthy with vaccination, parasite treatment, good nutrition and good management practices.
For more information on the AVMA's philosophy regarding antimicrobial use, read our policy on the Judicious Therapeutic Use of Antimicrobials.
Q: What are food producers doing to prevent antimicrobial resistance?
A: Keeping animals healthy is the main goal. After all, sick animals aren't allowed to enter our food chain. Strategies needed to keep animals healthy include vaccination, parasite treatment, good nutrition, and good management and husbandry to reduce stress and minimize the risk of disease.
It's also reasonable to expect producers to use antimicrobials judiciously. When producers use antimicrobials and other medications, they are required to follow the label directions, which include the amount of time the producer must wait after the last dose before either the animal or its milk can be used for food. That's called the withdrawal time, and during that period the animal's milk must be discarded and the animal cannot be slaughtered. The withdrawal times are based on how the body processes the medications. Observing them ensures that there are no drug residues in the milk or meat.
Q: What is the federal government doing to prevent antimicrobial resistance?
A: Antimicrobial use is regulated by the FDA. In addition to approving the use of antimicrobials in animals, the FDA also collects data on antimicrobial sales from companies and makes that information publicly available.
The National Antimicrobial Resistance Monitoring System (NARMS), established by the U.S. Department of Health and Human Services (HHS) and the U.S. Department of Agriculture (USDA), performs research and provides information about antimicrobial resistance in people, animals and retail meats. The USDA also funds research on antimicrobial resistance.
FoodNet is a foodborne illness surveillance network and is a cooperative effort of the U.S. Centers for Disease Control and Prevention (CDC), FDA, USDA and members of the Emerging Infections Program. The system collects information about foodborne diseases and related illnesses.
In September 2014, President Barack Obama signed Executive Order 13676 establishing the Presidential Advisory Council on Combating Antibiotic-Resistant Bacteria (PACCARB). The council provides advice, information, and recommendations to the secretary of Health and Human Services regarding programs and policies related to antibiotic resistance. These include both a national strategy and action plan for combating antibiotic-resistant bacteria.
Q: What can I do to prevent antimicrobial resistance?
A: One very simple thing you can do is to avoid requesting antimicrobials if you have the flu or a cold. Because colds and the flu are caused by infection with viruses, antibiotics won't help. Antimicrobials should only be prescribed if your physician feels they are absolutely necessary. If your physician determines that you need to be given antimicrobials, make sure you follow the directions and take the right doses at the right times for the right number of days as prescribed – don't skip doses, don't take antimicrobials prescribed for someone else, and don't save any antimicrobials for later use.
Similarly, trust your veterinarian to determine when and if your animals need treatment with antimicrobials.
Q: Doesn't Europe ban the use of antimicrobials?
A: The European Union does not have a ban on the use of antimicrobials – they have bans on the use of antimicrobials for the purpose of growth promotion. Sweden banned all growth promotants in 1986. Denmark instituted antimicrobial-specific bans in 1995 and 1998, and added a ban on all growth promotants in 2001. The Netherlands banned growth promotants in 2006, and the European Union banned one growth-promoting antimicrobial in 1997 and others in 1999. The European Union uses the same definitions of antibiotic use as the AVMA and FDA (see "How are antimicrobials used in animals?") and does not allow growth promotion. The other uses of antibiotics – prevention, control, and treatment – remain, with greater flexibility for the veterinarian to determine doses than the FDA allows for veterinarians in the United States.
Q: What is PAMTA?
A: PAMTA is the Preservation of Antibiotics for Medical Treatment Act, a proposed federal law (H.R. 1587). Its stated purpose is to preserve the effectiveness of medically important antimicrobials used to treat human and animal diseases by eliminating so-called "nontherapeutic" use of antibiotic drugs considered important for human health. The bill defines "nontherapeutic use" as the use of a drug administered in an animal's feed or water in the absence of any clinical signs of disease for the purpose of growth promotion, improved feed efficiency, increased weight gain, routine disease prevention, or other routine purposes. It seeks to restrict the use of many classes of antibiotics—including penicillins, tetracycline, macrolides, lincosamides, streptogramins, aminoglycosides, sulfonamides or any other drug or derivative of a drug that is used to prevent, control, or treat disease or infection in people.
Q: Does the AVMA support PAMTA? Why or why not?
A: No, the AVMA does not support PAMTA. Although PAMTA may seem simple at first glance, we don't support broad bans that aren't based on science – and this one isn't. Banning the use of these antibiotics before science-based studies and risk-based evaluations show if there is an actual risk to human health would harm animal health and could put food safety at risk.
This ban would be much more restrictive than Denmark's ban, as it would eliminate two or three of the four currently approved uses of antibiotics. It would allow antimicrobial use only for treatment purposes, which would mean antimicrobials could only be administered after an animal has become physically ill and its health and welfare have been compromised. Another critical, and often overlooked, difference between Denmark and the United States is the flexible drug labeling system used by EU countries. In the United States, drugs are approved by the FDA at specific doses for specific uses—a drug might be labeled with one dose for growth promotion, a second for prevention, a third for disease control and a fourth for disease treatment. In the EU, drugs are labeled with one wider range of accepted dosages, allowing more flexibility in dosage selection.
Neither the Netherlands' nor Denmark's antimicrobial ban has resulted in decreased antimicrobial resistance in humans. In addition, a study performed in the Netherlands concluded that the therapeutic use of antimicrobials in food animals nearly doubled in the decade after the ban took effect. One of the likely factors in that increase is the ban on the use of antimicrobials for growth promotion.2,4
Q: What is the solution?
A: Antimicrobial resistance doesn't happen overnight, and neither does the solution. First and foremost, we need more discussion, more research, and more risk-based analyses. We need more data to determine the risks and the best measures to reduce or eliminate those risks while also weighing the benefits of antimicrobial use. This includes science- and risk-based evaluation of antimicrobials to determine their appropriate use and/or continued approval.
Following the best available methods for managing food-producing animals, with continual evaluation and improvement when possible, keeps animals healthier and decreases the need for antimicrobials.
Collaboration and coordination among government, food producers and other stakeholders is vital. Everyone should take responsibility for the part they may play in the development of antimicrobial resistance and take steps to address it.
We also need more veterinarians working in food supply veterinary medicine, making sure our food is safe from farm to fork.
- Claycamp HG, Hooberman BH. Risk assessment of streptogramin resistance in Enterococcus faecium attributable to the use of streptogramins in animals. Virginiamycin risk assessment. Rockville, Md: FDA Center for Veterinary Medicine, 2004.
- DANMAP 2008. Use of antimicrobial agents and occurrence of antimicrobial resistance in bacteria from food animals, foods and humans in Denmark. Copenhagen: Danish Veterinary Institute, 2008. Available at: www.danmap.org.
- EuroStat, European Commission. Agricultural statistics: main results—2006–2007. Luxembourg: Office for Official Publications of the European Communities, 2008. Available at: http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-ED-08-001/EN/KS-ED…. Accessed Feb 19, 2010.
- MARAN 2007. Monitoring of antimicrobial resistance and antibiotic usage in animals in the Netherlands in 2006/2007. Available at: www.cvi.wur.nl.
|
To put it simply, programming is essentially telling the computer what to do. You give it a list of instructions: do X; if you can't do X, then do Y; and so on.
It's usually done in written form using a programming language. A programming language is a bit similar to a natural language like English: it has certain rules for doing various things. For example, English has a rule to end a sentence with punctuation. In a programming language, you may have a rule to end an instruction with a semicolon.
A question that may come up in your mind is why programming is sometimes considered difficult. The thing is that, unlike humans, a computer is very literal-minded. It will do everything you ask and nothing else, which is something some people like about computers. But it also means that you have to spell out everything in detail. As an analogy, let's say you asked a friend to make you a sandwich. That wouldn't be a problem; he would just make you one. If there were no bread, he would figure something out: either inform you about it, or just go and buy some. A computer wouldn't act that way. If you told it to make a sandwich and nothing more, it wouldn't have any idea what to do if there's no bread. You would have to explicitly instruct it to go to the shop. Sounds simple, but what if there's no bread in the shop either? Then you have the same problem again, and you now have to tell it to come back and inform you. This is just a simple analogy, but I think you get my point: given a complex enough task, you can end up with a big net of conditions and actions.
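As a playful sketch of that analogy (the function and its inputs are invented for illustration, not a real API), here is how explicitly those contingencies have to be written out in code:

```python
# A toy "make a sandwich" program: every contingency the friend would
# handle on his own has to be spelled out for the computer.

def make_sandwich(kitchen, shop):
    if kitchen.get("bread", 0) == 0:        # no bread at home?
        if shop.get("bread", 0) == 0:       # the shop is out too?
            return "Come back and report: no bread anywhere."
        kitchen["bread"] = 1                # buy a loaf
        shop["bread"] -= 1
    kitchen["bread"] -= 1                   # use a loaf
    return "Here is your sandwich."

print(make_sandwich({"bread": 0}, {"bread": 3}))  # buys bread, then succeeds
print(make_sandwich({"bread": 0}, {"bread": 0}))  # reports the failure
```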
You may now be wondering why you should bother with programming in the first place. The thing is that once you write a program and it works, the computer can run it any number of times with huge efficiency, so it pays off whenever you need to perform a specific task a substantial number of times.
|
By Derek Markham
One of the big limitations of many electronic devices, from mobile gadgets to electric cars, is energy storage: the capacity of the battery and the time it takes to recharge it. Supercapacitors, which are considered to be the future of high-power energy storage, could change all of that by enabling rapid and effective charging of devices. But because of high costs and the difficulty of manufacturing some of the components, we aren't seeing wide adoption of the technology, at least for consumer devices.
However, a breakthrough process developed by scientists at Oregon State University (OSU) may enable supercapacitors to be produced much more cheaply, and in larger quantities, by using cellulose from trees to make high-quality carbon electrodes for the devices.
“We’re going to take cheap wood and turn it into a valuable high-tech product.” – Xiulei (David) Ji, assistant professor of chemistry at OSU
The team of chemists at OSU has discovered a simple process, using a basic reaction, that can turn the cellulose from trees into “nitrogen-doped, nanoporous carbon membranes” for use as electrodes in supercapacitors. By heating cellulose, said to be the most abundant organic polymer on Earth, in a furnace in the presence of ammonia, this single-step reaction yields a quick and inexpensive process for producing one of the building blocks of the energy storage devices.
Supercapacitors aren’t the only application for these nanoporous carbon membranes; the material is also used in water treatment and environmental filtering, so this new process could decrease costs and increase adoption of other technologies that require it.
“There are many applications of supercapacitors around the world, but right now the field is constrained by cost. If we use this very fast, simple process to make these devices much less expensive, there could be huge benefits.” – Ji
Another benefit of the process is that it is said to be “environmentally benign”, with the only byproduct being methane, which can be used as a fuel or for other industrial purposes.
|
For hungry bats, maybe it’s not all about how the moths actually taste. When you hunt by echolocation—sending out clicks and following the echoes—it helps to have prey who can’t hear you, like the many earless moths hugging our lightbulbs around the world.
New research, however, explains how these moths still manage to elude their predators without hearing: with minuscule, muffling fur that locks the clicks in and prevents them from echoing back to their hungry sonars. Thomas Neil, of the University of Bristol, calls the system “acoustic camouflage” in a forthcoming study, presented this week at a conference of the Acoustical Society of America.
To measure the fur’s sonic absorption capabilities, Neil and his team sent pulses of ultrasonic frequencies—sounds too high for human ears to hear—out to target moths through a loudspeaker. A microphone next to the speaker captured the resulting echoes and measured their strength. The team repeated this process from hundreds of angles for 10 moths across two species, measuring how different parts of the body absorbed sound to different degrees. The thorax wound up being the MVP, its fur absorbing up to 85 percent of the sound thrown at it. The team calculates that, without that thoracic fur, moths would have a nearly 40 percent higher risk of being found out. It has evolved to be strikingly more effective than butterfly fur, which can capture at most 20 percent of the sound that hits it (butterflies and moths don’t often cross paths).
It’s not clear, writes Neil in an email, why only some moths have evolved to be earless, nor is there a clear geographic split between them and their eared cousins. Neil and his team are currently attempting to profile the furriness levels of a variety of moth species (there are some 160,000 species altogether), and they have so far made a pair of telling discoveries. First, diurnal moths, which don’t have to worry about bats, tend to have less fur than their nocturnal counterparts. Second, among the nocturnal species, even some of those that can hear have grown thick coats of defensive fur, underscoring its crucial survival value.
We have much to learn from these moths for our own sonic purposes. Absorptive moth fur could provide a useful model for developing “sound insulating technology,” as Neil puts it. Their fur, he says, at least matches the capabilities of many existing technical sound absorbers, testifying to nature’s ability to build technological marvels on its own. Don’t expect mothscaping to take off any time soon.
|
Naming chemical compounds is hard at first but you will eventually get the hang of it. You might even become a master of naming chemical compounds!
I will first teach you how to name chemical compounds then I will give you a practice quiz afterwards.
Step 1. Identify Which Type the Compound Belongs To
- Binary Acids – hydrogen and a halogen (HF, HCl, HBr and HI only!)
- Ionic Compounds – metal and a nonmetal
- Covalent Compounds – both nonmetals
- HCl is a binary acid
- NaCl is an ionic compound
- CO2 is a covalent compound
Step 2. Name the Compound According to These Steps
Naming Binary Acids
- Write Hydro + root of second element + ic for the first word
- Write acid for the second word
- HCl is Hydrochloric acid
- HBr is Hydrobromic acid
Naming Ionic Compounds
1. Look up the name of the first element (the metal); it keeps its element name
2. Use the subscript of the second element or group of elements as the charge of the first element
For Fe2O3, the subscript of O is 3, so iron has a charge of +3, or Fe3+.
3. If the metal can carry more than one charge (like iron), write that charge in Roman numerals in parentheses after its name
4. Write this as the first word
Therefore, the first word in the name of Fe2O3 is Iron(III).
5. Use the subscript of the first element as the charge of the second element or group of elements
In Fe2O3, the subscript of Fe is 2, so oxygen has a charge of -2, or O2-.
6. Look up the name of the second element or group of elements in a Table of Anions
O2- is oxide
7. Write the name of the anion as the second word
Fe2O3 is therefore Iron(III) oxide.
Naming Covalent Compounds
- Use a Greek prefix (mono, di, tri, tetra, ...) before each element's name to show the number of atoms; the prefix mono is dropped for the first element
- Write the root of the second element + ide as the second word
- CO2 is Carbon dioxide
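The whole procedure is mechanical enough to sketch in code. Below is a hedged Python illustration of the three naming rules: the lookup tables are tiny invented samples, formulas are passed pre-parsed as symbols and subscripts, and the crisscross shortcut follows the steps above (it assumes the subscripts have not been reduced):

```python
# Illustrative lookup tables (real anion/cation tables are much larger)
ROOTS = {"F": "fluor", "Cl": "chlor", "Br": "brom", "I": "iod"}
ANIONS = {"O": "oxide", "S": "sulfide", "Cl": "chloride"}
METALS = {"Fe": "Iron", "Na": "Sodium", "Cu": "Copper"}
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra"}
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV"}

def name_binary_acid(halogen):
    # "Hydro" + root of the halogen + "ic", then the word "acid"
    return f"Hydro{ROOTS[halogen]}ic acid"

def name_ionic(metal, metal_sub, anion, anion_sub):
    # Crisscross: the anion's subscript is the metal's charge, so Fe2O3 -> Iron(III).
    # (metal_sub gives the anion's charge, step 5, but isn't needed for the name.)
    return f"{METALS[metal]}({ROMAN[anion_sub]}) {ANIONS[anion]}"

def name_covalent(first_name, first_sub, anion_root, second_sub):
    # Greek prefixes give atom counts; "mono" is dropped on the first element
    prefix = PREFIXES[first_sub] if first_sub > 1 else ""
    return f"{(prefix + first_name.lower()).capitalize()} {PREFIXES[second_sub]}{anion_root}"

print(name_binary_acid("Cl"))                  # Hydrochloric acid
print(name_ionic("Fe", 2, "O", 3))             # Iron(III) oxide
print(name_covalent("Carbon", 1, "oxide", 2))  # Carbon dioxide
```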
|
If a doctor suspects that an individual may have schizophrenia, he or she will take a complete medical and psychiatric history of the patient and conduct a series of tests to help him or her come to a proper diagnosis of the disorder. Most of the time, schizophrenia diagnosis will include a series of lab tests to determine whether there are other conditions or problems which may be causing the symptoms as well as a complete psychological evaluation to determine the emotional or mental status of the patient.
People who suffer from schizophrenia may hear voices or suffer from delusions. It is very common for people with schizophrenia to behave erratically and in a disorganized manner. Oftentimes, a family member or loved one is the first to recognize that an individual has a problem that is interrupting his or her daily life or routines. This is often what brings the individual to a doctor or healthcare professional, such as a psychiatrist, for further diagnosis and treatment.
Criteria for Schizophrenia Diagnosis
In order for a person to be diagnosed with schizophrenia, he or she must meet certain criteria as set forth in the Diagnostic and Statistical Manual of Mental Disorders (DSM). This manual, published by the American Psychiatric Association, is the handbook and foundation for evidence-based diagnosis of mental health conditions, including schizophrenia as well as many other disorders.
Before an individual can be diagnosed with schizophrenia the psychiatrist or healthcare professional must first rule out other mental health disorders and determine that the symptoms that are being exhibited are not the result of substance abuse. Medications and various medical conditions are sometimes responsible for creating symptoms which may resemble schizophrenia so the healthcare provider must also rule these potential causes out before making a final diagnosis of schizophrenia in the patient.
The patient must have at least two symptoms of schizophrenia or exhibit negative symptoms of schizophrenia for a significant number of days in a particular month. The symptoms that a doctor or psychiatrist will be on the lookout for include:
- disorganized behavior
- disorganized speech patterns
- catatonic behaviors
If any of the following negative symptoms of schizophrenia are present during a significant amount of time in a single month there is a potential for schizophrenia diagnosis:
- lack of movement when speaking
- speaking in monotone
- showing no real signs of pleasure in day to day life
- inability to begin activities that were planned
The symptoms of schizophrenia, with the exception of negative symptoms, must be present for a period of at least six months in order for a diagnosis to be made. The individual must also experience impaired ability to perform daily tasks at work, home, or school.
It could take a doctor quite some time to formally diagnose schizophrenia as he or she must first rule out all other medical conditions, the potential for substance abuse being the cause of the symptoms and various other factors before coming to a final conclusion.
|
The annual ozone hole started developing over the South Pole in late August 2009, and by September 10, it appeared that the ozone hole of 2009 would be comparable to ozone depletions over the past decade. This composite image from September 10 depicts ozone concentrations in Dobson units, with purple and blues depicting severe deficits of ozone. The image was made from data collected by the Ozone Monitoring Instrument onboard NASA’s Aura satellite.
“We have observed the ozone hole again in 2009, and it appears to be pretty average so far,” said ozone researcher Paul Newman of NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “However, we won’t know for another four weeks how this year’s ozone hole will fully develop.”
September 16 marks the International Day for the Protection of the Ozone Layer, declared by the United Nations to commemorate the date when the Montreal Protocol was signed to ban use of ozone-depleting chemicals such as chlorofluorocarbons (CFCs).
Scientists are tracking the size and depth of the ozone hole with observations from the Ozone Monitoring Instrument on NASA’s Aura spacecraft, the Global Ozone Monitoring Experiment on the European Space Agency’s ERS-2 spacecraft, and the Solar Backscatter Ultraviolet instrument on the National Oceanic and Atmospheric Administration’s NOAA-16 satellite.
The depth and area of the ozone hole are governed by the amount of chlorine and bromine in the Antarctic stratosphere. Over the southern winter, polar stratospheric clouds form in the extreme cold of the atmosphere, and chlorine gases react on the cloud particles to release chlorine into a form that can easily destroy ozone. When the sun rises in August after months of seasonal polar darkness, the sunlight heats the clouds and triggers the chemical reactions that deplete the ozone layer. The ozone hole begins to grow in August and reaches its largest area in late September to early October.
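For reference, the best-known form of this chemistry is the standard chlorine catalytic cycle (textbook chemistry, not stated in the caption above), in which the chlorine atom is regenerated and can go on to destroy many ozone molecules:

```latex
\begin{align*}
\mathrm{Cl} + \mathrm{O_3} &\longrightarrow \mathrm{ClO} + \mathrm{O_2}\\
\mathrm{ClO} + \mathrm{O} &\longrightarrow \mathrm{Cl} + \mathrm{O_2}\\
\text{net:}\qquad \mathrm{O_3} + \mathrm{O} &\longrightarrow 2\,\mathrm{O_2}
\end{align*}
```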
Recent observations and several studies have shown that the size of the annual ozone hole has stabilized and the level of ozone-depleting substances has decreased by 4 percent since 2001. But since chlorine and bromine compounds have long lifetimes in the atmosphere, a recovery of atmospheric ozone is not likely to be noticeable until 2020 or later.
Visit NASA’s Ozone Watch page for current imagery, data, and animations of the year to date.
- Antarctic Ozone Hole: 1979 to 2008
- Climate Change and Atmospheric Circulation Will Make for Uneven Ozone Recovery
- New Simulation Shows Consequences of a World Without Earth's Natural Sunscreen
- Ozone Day 2009
- What’s Holding Antarctic Sea Ice Back From Melting?
NASA image courtesy Ozone Watch. Caption by Michael Carlowicz.
- Aura - OMI
|
A section of the Roman city-wall of Empuries, Spain. 1st century BCE. The base of the wall was made using calcareous rock while the upper portion is of Roman concrete (opus caementicium). / Photo by Mark Cartwright, Creative Commons
Figure 1: Pantheon (Google images)
The purpose of this article is to inform readers about topics necessary for understanding Ancient Roman concrete. Concrete performs vital roles in nearly all aspects of public works, including infrastructure systems and buildings. Versatility, strength, and workability make concrete a universal construction material that has become foundational to daily life. The Ancient Romans first discovered this technology over 2000 years ago. Ancient Roman knowledge of and skill with concrete can be observed first hand in massive concrete structures that still stand today, such as the Pantheon (Rome, Italy, 126 AD; Figure 1). In today's environment, massive engineering feats like the Three Gorges Dam (Hubei province, China, 2012; Figure 2) or the Burj Khalifa Tower (Dubai, UAE, 2009; Figure 3) exemplify the substantial abilities of modern concrete. Concrete is now the building block of modern civilization.
Figure 2: Three Gorges Dam (Google images)
Most of our knowledge of Ancient Roman engineering comes from a person known as Vitruvius, who lived from roughly 80 BC to 15 BC. He wrote a substantially detailed work known as the Ten Books on Architecture, which survived the multiple sacks that Ancient Rome experienced. The majority of today's knowledge of Ancient Roman engineering comes from Vitruvius's work.
Figure 3: Burj Khalifa Tower (Google images)
Concrete was never a single scientific discovery. It developed slowly through a long process of trial, luck, and keen observation. Concrete technology actually advanced over time on two separate occasions. The Ancient Romans had developed consistent concrete technology around the start of the Roman Empire (27 BC). However, starting in the 3rd century AD, the fall and decline of the Roman Empire caused the knowledge of concrete to be forgotten until the late 18th century. After its rediscovery, the development process continued into what we consider modern-day concrete.
Background and Definitions
Before further discussion of ancient and modern concrete, a few clarifications must be made about the definitions and meanings of the various words associated with concrete. The following section covers cement, hydration, and pozzolan, terms that recur throughout the article.
Cement vs. Concrete
Modern concrete is essentially artificial rock that is created by mixing cement (which acts as the bonding agent), water, and aggregate. Modern cement is generally known as Portland cement, the most widely consumed cementing material worldwide. Aggregate refers to anything the builders put inside their concrete mix besides cementing materials; this filler usually consists of coarse, rough rocks such as sand and gravel of varying sizes. Bonding and strength develop during curing, when the concrete hardens, through a process called hydration. During hydration, chemical changes take place and the aggregate particles are essentially glued together to form an artificial rock. Although hydration sounds like the addition of water, the process actually occurs while the concrete cures, and the concrete does indeed “dry up,” in the sense that it transitions from a wet, rocky paste into a solid structure (PCA, 2013).
Hydraulic Cement vs. Non-Hydraulic Cement (aka Mortar)
Hydraulic cement is defined as any cement that undergoes hydration (the reaction mentioned above) to cure and harden into a desired structure. All Portland cements are hydraulic cements. Hydraulic cements are sometimes described as "water resistant" because they can cure in wet or submerged environments and do not deteriorate on contact with water. This property stems from the cement's reaction with water, which actually initiates the curing into a solid structure; in fact, prolonged contact with water allows the concrete to continue to gain strength. On the other hand, non-hydraulic cement, referred to as mortar, is a lime-based paste that hardens through a reaction with CO2 in the atmosphere. Thus, when exposed to water as a paste, mortar is unable to cure and harden into a solid mass.
Modern vs. Ancient Pozzolan
Modern pozzolans, or pozzolanic materials, include any supplementary cementing materials that contribute to hardening alongside the hydration of Portland cement. Pozzolans include the following (PCA, 2013):
- Natural pozzolanic ash: naturally produced volcanic ash buried relatively close to the surface.
- Fly ash: ash produced from the combustion of coal.
- Blast-furnace slag: nonmetallic silicate (rock like material) separated from metals during smelting or refining.
- Silica fume: ultrafine silica powder that is a by-product of elemental silicon and ferrosilicon production.
In Ancient Rome, Vitruvius alludes to two materials with pozzolanic properties. The first, Vitruvius referred to as pitsand: the natural, sand-like pozzolanic ash found by digging large open pits close to the surface. Pitsand is also known as pozzolana because it was found in great abundance in the areas surrounding the city of Pozzuoli (or Puteoli). The second pozzolanic material the Ancient Romans used was crushed brick, which is burned clay. Although today we consider both burned clay and natural pozzolanic ash to be pozzolanic materials, in Vitruvius's writing, pozzolana refers only to the volcanic ash known as pitsand. In the following sections, pozzolana will refer to this volcanic ash.
History of Cement
The following section will give a general overview of major events and steps in ancient and modern time that led to the construction material we consider today as cement. Cement’s history involves ancient development, a technological regression and forgotten knowledge, and its modern rediscovery.
Ancient Development of Cement
Figure 4: Typical early pozzolan-lime-rubble concrete wall from Pompeii 3rd century BC (Pompeii)
One of the first ancient building materials was clay. Clay's abundance at the surface and in the shallow lithosphere, combined with its workability and cohesive properties, made it a simple, primitive building material. Prehistoric clay could be used in three ways. First, walls could be constructed from the raw, earthy material by compacting piles either by hand or with wood planks. Second, clay could be mixed with rocks and compressed with temporary wood planks; walls of this nature from 200 BC still exist today in Spain (Moore, 1995). These uses of clay parallel methods used in both ancient and modern concrete placement, namely compressing mixed materials and using wood boards to retain forms. Lastly, sun-dried clay bricks could be stacked and layered; Moore notes that the first evidence of clay bricks appears in excavations as far back as 8000 BC in the Middle East and 4000 BC in Iraq. Walls, dwellings, and other structures could be constructed by shaping and sun-drying clay bricks and then stacking them on top of one another. Eventually clay was burned in large kilns rather than sun-dried. Kilns are oven-like structures for burning clay bricks and lime, eventually used by the Ancient Romans to create stronger bricks.
The introduction of lime mortar was the next major step in the development of ancient concrete. Limestone is a sedimentary rock composed mostly of small grains of settled marine skeletal structures such as coral or tiny shelled organisms. Limestone could be burned in large kilns to produce quicklime. Quicklime could then be mixed with water, a process known as slaking, to produce a pasty mortar material through a chemical reaction (discussed in detail in a later section). This lime mortar can be seen in walls as far back as 2000 BC in central India. Archaeologists have also uncovered early use of lime mortar in the construction of Minoan foundations on Crete, in prehistoric Greece, around 1700 BC. Many foundations were composed of previous structures and buildings that had collapsed in earthquakes; lime mortar would be incorporated into the remaining rubble to solidify it and form the foundation for the next building constructed on top (Moore, 1995).
Lime mortar was used by the Romans both as a construction material and as a plaster for aesthetic relief, owing to its brilliant white finish, which appeared much like marble. Vitruvius records that this plaster was known as stucco and was typically a combination of water, lime, and various types of sand. Powdered marble would be added for a more brilliant white finish. The exact date at which the Romans came to their understanding of lime remains disputed and largely unknown; however, lime-based mortar was prevalent by the end of the 3rd century BC (Adam, 1994).
The next significant advancement in Roman lime-mortar was the addition of crushed tiles and brick, composed of burned clay. With the burned clay-lime mix, Romans had discovered their first hydraulic cement. Romans used this water resistant mortar, known by the Ancient Romans as opus signinum, to line water infrastructure such as the many aqueducts, used to carry water over large distances into the city, and the many castella, large cisterns used to hold, filter, and distribute water throughout the city (Vitruvius, c. 15 BC).
Figure 5: Looking into a typical, broken Ancient Roman wall of the advanced period (c. 42 BC; Ostia Antica)
The most significant change in Ancient Roman lime mortar was the accidental addition of pozzolana, the sand-like volcanic ash known to the Ancient Romans as pitsand. Lime mortar was often mixed with sand, and Vitruvius gives an account of three types of sand, each with varying properties: river sand, marine sand, and pitsand, the sand excavated from pits in the ground surrounding Naples. These sands were frequently interchanged in the lime mortar, and eventually the Ancient Romans realized that pitsand actually strengthened the mortar and allowed underwater curing. With this, the Romans had discovered their second form of hydraulic cement. The pozzolana-lime cement was used to construct large maritime and structural works that demanded the utmost bonding quality.
The first account of this pozzolana-lime cement is from Pompeii, dating to around the 3rd century BC, where archaeologists found a wall constructed of a poor-quality cement and rubble mixture, as seen in Figure 4. Rubble refers to a mixture of rocks and stones of varying size and shape that makes up the aggregate, or filler, in Ancient Roman concrete. The Puteoli harbor works of 199 BC are an early example of good-quality hydraulic cement; they are considered good quality because the structures remain today after surviving the harsh conditions of the sea. The geographic locations of both Pompeii and Puteoli had abundant pitsand deposits, which explains why the first pozzolana-lime hydraulic cements arose in those areas. It took about a century before the Ancient Romans made this connection and started using pozzolana-lime cement in the city of Rome. By the Augustan period and the start of the Roman Empire in 42 BC, processes and construction practices had been refined and standards were well established (Figure 5). This is confirmed by the detailed similarities between buildings arising in that period; for example, the temple of Saturn and the temple of Divus Julius had standardized concrete foundation and wall construction (Moore, 1995). By this time, Ancient Roman hydraulic cement, both pozzolana-lime and crushed brick-lime based, was fully developed.
Technological Regression and Forgotten Knowledge
Following the Hadrianic period (138 AD), most of the great Roman infrastructure had been completed. Over the course of the next century, damage from earthquakes, flooding of the Tiber River, and fires slowly accumulated and was left mostly unrepaired due to economic failure and poor leadership. By the 3rd century AD, the decline and fall of the Roman Empire had begun, and it ran its course over the next few centuries. The exact causes are heavily debated, and a combination of many complex factors was likely involved: economic failure tied to a conquest-based economy and a large, unsustainable army; environmental degradation and over-harvesting of natural resources; a declining population due to a dwindling water supply and spreading disease; barbarian military advancement and growing invasion pressure; and poor leadership and political crisis. Edward Gibbon, an English historian (1737-1794), even theorized that the fall of the original Roman paganism and the growth of Christianity transformed the social outlook on worldly living due to the promise of heaven. Regardless of the causes, the declining empire extinguished engineering advancement and opened the door to a period of technological regression in hydraulic cement and concrete.
Three direct causes led to the loss of the knowledge of hydraulic cement. First, the poor economic state and lack of funding halted major construction projects; with little construction occurring for over a century, the demand for knowledgeable craftsmen and contractors vastly decreased. Second, the barbarian sack of 410 AD caused the few remaining craftsmen and contractors to flee to the countryside. Once out of the city, these families carried on subsistence living, in which knowledge of hydraulic concrete quickly became unneeded. Lastly, as the Middle Ages progressed, political and economic focus moved away from Rome and into northern European cities such as London, Paris, and Cologne, where the natural pozzolanic ash vital for hydraulic cement was geographically absent (Moore, 1995). As a result, the knowledge of hydraulic cement was forgotten for over a millennium.
Modern Rediscovery and Advancement of Cement
Although natural pozzolana and volcanic ash deposits lay buried in southern Italy, limestone was abundant throughout the world. As a result, lime-based mortar mixed with rubble and brick remained a primary construction material throughout the Middle Ages. It was not until 1756, when an English civil engineer named John Smeaton was tasked with rebuilding the Eddystone lighthouse, that hydraulic lime cement was rediscovered. Smeaton experimented heavily on lime with many different admixtures and eventually produced a hydraulic lime by combining clay with quicklime (the product of burned limestone, discussed in a later section). The clay Smeaton used contained multiple impurities that shared similar chemical compounds with the pozzolanic ash the Romans used. With this, Smeaton had developed the first hydraulic cement in over a millennium.
Smeaton's clay-lime cement opened the door to the advancement of modern cement. In 1796, James Parker, a cement manufacturer, showed that grinding the burned lime into powder greatly accelerated and improved the gel-making process. The finely powdered form of quicklime and clay has a large surface-area-to-volume ratio, increasing the total surface area at which hydration can readily take place when water is added. Since then, it has been common practice to produce cement in a finely powdered form that can be mixed with water and hydrated on site.
The effects of pozzolanic ash combined with lime mortar were rediscovered by L.J. Vicat, a French laboratory researcher focused on construction, who experimented with a variety of chemical processes involving lime. One of the materials Vicat used was volcanic ash from regions of southern Italy. In doing so, Vicat was able to observe the chemical reactions that allow lime mortar to harden and strengthen underwater, thus rediscovering the pozzolanic effects of volcanic ash in hydraulic cement.
The final step in the development of what we consider modern cement was the establishment of Portland cement. In 1824, Joseph Aspdin combined clay and limestone particles through a heating process known as sintering. Sintering involves heating the materials to a point that allows the diffusion of calcium oxide particles into the silica molecular structure, effectively fusing the particles together; it essentially melts and fuses the edges of the particles without melting their cores (Moore, 1995). This material, known as clinker, is later ground into a powder and packaged for transportation, storage, or use. Aspdin coined the name Portland cement after the grey-colored stone found on the Isle of Portland, England.
By the 20th century, high-temperature heat treatment of the various cement components and the fundamental chemistry of clay impurities and their effects were understood by the many cement manufacturers. At this point, manufacturers were beginning to use various supplementary materials to modify the characteristics and performance of specific types of cement. With this, Portland cement was a well-established construction material, and modern concrete was implemented in engineering projects across most of the developed world.
Modern cement is produced through an industrialized process. The main raw materials are clay and limestone that have been mined and ground into a very fine powder. The powdered raw materials are mixed together in specific proportions and heated to 2600F in a kiln to produce small chunks of a material called clinker. The clinker is then ground into powder, and gypsum, a mineral used to control setting, is added. The mixture is sealed in moisture-proof bags to be stored and transported to the construction site. Once on site, the cement is mixed with water and aggregate and then placed into the various forms demanded by the structure. Hydration occurs with the addition of water, bonding the cement gel and aggregate together. With this, concrete is formed as the mass strengthens and hardens into an artificial, rock-like structure.
Reactions in the Kiln
Summarized below in Table 1 are common compounds involved in cement chemistry. Note that the clay and limestone rows neglect other minerals in the materials and correspond only to the raw materials that directly contribute to cement production. Additionally, the reactions that take place in cement production and hydration involve long chemical formulas, so cement chemists use an abbreviation scheme known as Cement Chemistry Notation (CCN). Many of the compounds below will be referenced throughout the remainder of this article.
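Since Table 1 itself is not reproduced here, the following minimal Python sketch records the standard CCN shorthand; the oxide letters and the four clinker-phase names are standard cement chemistry rather than values from the missing table:

```python
import re

# Cement Chemistry Notation (CCN): single letters stand for oxides.
OXIDES = {
    "C": "CaO",    # lime
    "S": "SiO2",   # silica
    "A": "Al2O3",  # alumina
    "F": "Fe2O3",  # ferric oxide
    "H": "H2O",    # water
}

# The four main clinker phases in CCN and their common names.
CLINKER_PHASES = {
    "C3S":  "alite (tricalcium silicate)",
    "C2S":  "belite (dicalcium silicate)",
    "C3A":  "aluminate (tricalcium aluminate)",
    "C4AF": "ferrite (tetracalcium aluminoferrite)",
}

def expand(ccn: str) -> str:
    """Expand a CCN formula such as 'C3S' into oxide units, e.g. '3CaO.SiO2'."""
    parts = re.findall(r"([A-Z])(\d*)", ccn)
    return ".".join(f"{count}{OXIDES[letter]}" for letter, count in parts)

for phase, name in CLINKER_PHASES.items():
    print(f"{phase:4s} = {expand(phase):20s} {name}")
```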
Decomposition of Raw Materials
At temperatures up to about 2300F, the raw materials are broken down and chemically react with lime to create intermediate compounds used to form the final clinker product.
- At 1000F, water is removed.
- At 1750F, calcination occurs and limestone loses its CO2. The gas is driven into the air, leaving the lime in a highly reactive state.
- As lime is produced, belite is produced. Aluminate and ferrite phases also start to form.
- At intermediate temperatures, sulfates combine with calcium to form a sulfate liquid phase. (Winter, 2013)
As temperatures rise above 2300F, the aluminate and ferrite phases liquefy. These liquid phases (including the sulfate phase) contribute to ion mobility and promote fusing at high temperatures. However, these intermediate phases separate and are not present in the final clinker material (Winter, 2013).
- In addition to liquid phases, some liquid belite is included as well as dissolved lime and silica compounds.
- At 2550F, belite is transformed into alite. Additional alite is formed from the free lime and silica.
- Alkali sulfate, and some alkali chloride, liquid phases evaporate and are passed back up the kiln, condensing again to liquid in cooler sections. The alkali phases continue to recycle and aid in the formation of other compounds. (The temperature stages above are summarized in the sketch after this list.)
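A minimal sketch tying the temperature thresholds above to their reaction stages; the thresholds come straight from the bulleted lists, while the lookup function itself is just an illustrative convenience:

```python
# Thresholds (deg F) taken from the bulleted stages above.
KILN_STAGES = [
    (1000, "free water is driven off"),
    (1750, "calcination: limestone loses CO2, leaving reactive lime"),
    (2300, "belite forms; aluminate and ferrite phases begin forming"),
    (2550, "aluminate/ferrite liquefy; liquid phases promote fusing"),
]
FINAL_STAGE = "belite converts to alite; more alite forms from free lime and silica"

def kiln_stage(temp_f: float) -> str:
    """Return the dominant reaction stage at a given kiln temperature."""
    for upper_bound, stage in KILN_STAGES:
        if temp_f <= upper_bound:
            return stage
    return FINAL_STAGE

print(kiln_stage(1800))  # belite forms; aluminate and ferrite phases begin forming
print(kiln_stage(2600))  # belite converts to alite; ...
```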
Clinker Cooling and Processing
After the formation of alite, the clinker is taken away from the burner and cooled. During cooling, the main liquid phases (aluminate and ferrite) crystallize. Gypsum is then added to the clinker, and the mixture is ground to the desired particle size. At this point, the cement is sealed in moisture-proof bags and ready for hydration.
Three main reactions take place during hydration of the cement (Winter, 2013).
- Sulfates and gypsum minerals immediately dissolve to produce alkaline, sulfate-rich solutions. This occurs minutes after water is added.
- Aluminate (C3A) forms aluminate-rich gel when contacted with water. This gel in turn reacts with the sulfate solution produced from the previous reaction to form small rod-like crystals called ettringite. This reaction causes the paste to go dormant, a time in which the paste is most workable. At this time, the cement-aggregate paste is placed into the position it will cure. The dormant stage generally lasts a few hours but workability decreases as the paste stiffens.
- Alite (C3S) and belite (C2S) start reacting with the water to form calcium silicate hydrate (C-S-H) and calcium hydroxide (CH), as sketched after this list.
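To make the third step concrete, here is a sketch using the commonly quoted approximate stoichiometry for silicate hydration; the exact C-S-H composition varies in practice, so these equations are textbook idealizations rather than formulas given by Winter (2013):

```python
# Approximate silicate hydration reactions in CCN (idealized stoichiometry):
#   2 C3S + 6 H -> C3S2H3 (C-S-H) + 3 CH
#   2 C2S + 4 H -> C3S2H3 (C-S-H) + 1 CH
# Note that alite liberates three times as much calcium hydroxide (CH)
# per unit of C-S-H as belite does.
REACTIONS = {
    "alite (C3S)":  {"water_per_2_mol": 6, "ch_per_2_mol": 3},
    "belite (C2S)": {"water_per_2_mol": 4, "ch_per_2_mol": 1},
}

for phase, r in REACTIONS.items():
    print(f"{phase}: 2 mol + {r['water_per_2_mol']} H2O "
          f"-> 1 C-S-H + {r['ch_per_2_mol']} CH")
```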
Moore (1995) attributes the hardening and strength gain during hydration to the diffusion of the C-S-H gel (surrounding and expanding into pores at the molecular level) into the remaining cementing materials, namely CH, and into the aggregate adjacent to the curing cement. The C-S-H gel then hardens into a lattice of very small interlocking fibers and plates. This is the primary process responsible for strength gain and curing into a concrete structure.
The strength contributed by the C-S-H builds up during the third stage and can accrue over long periods, depending on the rate at which all the C3S and C2S hydrates. Generally, 90% of strength gain is observed within 28 days of hydration; however, in the right environments, concrete can continue to gain strength over time. Gotti et al. (2008) determined that a replicated, Vitruvian-formulated hydraulic concrete with a six-month cure time had a compressive strength of about 670 psi. They also tested original Ancient Roman hydraulic concrete dated to the 1st century and measured a compressive strength of 1160 psi; the Ancient Roman sample has had over 2000 years to hydrate.
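As an illustration of this time-dependent strength gain, a minimal sketch using one common empirical relation for moist-cured concrete (an ACI 209-style expression); the coefficients are illustrative assumptions and are not taken from Gotti et al. (2008):

```python
def strength_fraction(t_days: float) -> float:
    """Approximate fraction of 28-day strength at age t (ACI 209-style).
    Values above 1.0 indicate strength beyond the 28-day baseline."""
    return t_days / (4.0 + 0.85 * t_days)

for t in (7, 28, 180, 365):
    print(f"day {t:3d}: {strength_fraction(t):.0%} of 28-day strength")
```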
Ancient Roman Concrete
[LEFT]: Figure 6: Opus incertum is composed of various rock chunks mixed with cement. Notice that there is no real brick facing. Opus incertum was one of the earlier concrete construction methods (Ostia Antica).
[RIGHT]: Figure 7: Opus reticulatum used tapered rectangular prisms that were placed on the outside of a wall with cement. Concrete would then be placed inside the brickwork (Ostia Antica).
Ancient Romans created hydraulic cement by combining lime, pozzolan, and water. Vitruvius conveys the mix proportions used for various structural functions. For example, harbor works and bridge piers required the highest-quality volcanic ash pozzolan, found near Naples, in a mix of two parts pozzolan to one part lime. Mortar, used to cement either brick or concrete, could be made from three parts volcanic ash to one part lime, or from one part lime, one part crushed brick, and two parts standard sand. Construction functions with lower structural demands, such as flooring surfaces and aqueducts/cisterns, were mixed with pozzolan-lime ratios of 5:2, thus containing less lime, and in turn less calcium, in the mix. Less calcium results in a smaller quantity of the C-S-H gel that gives concrete its strength. The Ancient Romans recognized this, conserving the relatively expensive lime and using high proportions only when necessary. (These proportions are summarized in the sketch below.)
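The Vitruvian proportions above lend themselves to a small summary; a minimal sketch, with the mix labels and the lime-fraction calculation added purely for illustration:

```python
# Vitruvian mix proportions by structural function, as given above
# (parts by volume; "pozzolana" is the volcanic-ash pitsand).
VITRUVIAN_MIXES = {
    "harbor works / bridge piers": {"pozzolana": 2, "lime": 1},
    "standard mortar (ash)":       {"pozzolana": 3, "lime": 1},
    "standard mortar (brick)":     {"lime": 1, "crushed_brick": 1, "sand": 2},
    "flooring / aqueducts":        {"pozzolana": 5, "lime": 2},
}

def lime_fraction(mix: dict) -> float:
    """Lime share of the dry mix, a rough proxy for eventual C-S-H yield."""
    return mix.get("lime", 0) / sum(mix.values())

for use, mix in VITRUVIAN_MIXES.items():
    print(f"{use:30s} lime fraction: {lime_fraction(mix):.0%}")
```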
[LEFT]: Figure 8: Opus testaceum was a brick pattern that arrived later. Again, the outer brick work would be laid, then the concrete would fill in the open space (Ostia Antica).
[RIGHT]: Figure 9: Combination of bricks for quality control during construction and potentially aesthetic reasons (Ostia Antica).
Vitruvius explains that the Ancient Romans would layer and compact the hydraulic cement with rubble between brickwork to construct their structures. Figures 6, 7, and 8 exemplify Ancient Roman concrete work as well as the common brick patterns they used: opus incertum, opus reticulatum, and opus testaceum, respectively. Note the occasional uniformly spaced strips of flat rectangular brick seen in Figure 9; these were used to ensure level and uniform brick placement.
[LEFT]: Figure 10: Stucco outer covering with concrete rubble on the inside (Pompeii).
[RIGHT]: Figure 11: Stucco layered over the brick work that conceals concrete rubble. The outer most layer of stucco differs slightly with the addition of powdered marble and the application of a Fresco (underground Ancient Roman home).
Additionally, the brickwork could later be coated with a white plaster known as stucco for aesthetic relief, or to serve as a canvas for a fresco, a type of mural painting. The stucco was composed of three layers of varying amounts of fine sand mixed with pasty slaked lime. The final, visible layer could also be mixed with powdered marble for increased brilliance. Figures 10 and 11 display the layers of stucco applied to walls as well as a section of a fresco.
The Ancient Romans obtained their lime (also known as quicklime, CaO) from raw limestone through a process called calcination: the limestone is heated to 1700F, releasing water and CO2 gas and leaving CaO. At this point, the Ancient Roman cement process deviates from modern methods. The CaO, in a chalky rock form, is mixed on site with water, a process known as slaking. During slaking, moderate amounts of heat are released as the lime is hydrated to produce calcium hydroxide, or slaked lime (Ca(OH)2, written CH in Cement Chemistry Notation). The slaked lime paste at this stage is a non-hydraulic mortar that will only cure when exposed to the atmosphere.
Non-hydraulic lime curing and hardening occurs through a process known as recarbonation, essentially the reverse of the reaction that formed the lime. As the water evaporates into the atmosphere and CO2 re-enters the paste, the calcium hydroxide combines with the CO2 to produce CaCO3. The creation and use of non-hydraulic mortar can thus be summarized as turning raw limestone into a workable paste that returns to limestone material through a mirrored reaction after being shaped to its designated function. (The full cycle is sketched below.)
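The full lime cycle can be written as three balanced equations; this sketch states the net reactions (the recarbonation line combines the evaporation and carbonation steps described above):

```python
# The non-hydraulic lime cycle as net balanced equations:
CALCINATION   = "CaCO3 + heat (~1700 F)  ->  CaO + CO2"    # limestone -> quicklime
SLAKING       = "CaO + H2O  ->  Ca(OH)2 + heat"            # quicklime -> slaked lime
RECARBONATION = "Ca(OH)2 + CO2  ->  CaCO3 + H2O"           # curing in air

for step in (CALCINATION, SLAKING, RECARBONATION):
    print(step)
```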
The Romans used the lime mortar primarily for what Vitruvius referred to as stucco work. He explained that mixing the slaked lime with sand, and sometimes powdered marble, produces a durable and brilliantly white finish when applied in layers to any wall, vault, or ceiling. Large murals known as frescos would later be applied for aesthetic and artistic expression.
[LEFT]: Figure 12: Geography of Rome.
[RIGHT]: Figure 13: Geography of Naples. Notice the large circular element just east of Naples. The object is Mt Vesuvius, the major source of volcanic ash on the lowlands.
Moore (1995) has also summarized the chemical make-up (in percentages) of the pozzolan materials used by the Ancient Romans in Table 2. Notice the similarities among the ancient pozzolan materials: silica (SiO2), alumina (Al2O3), ferric oxide (Fe2O3), and lime (CaO).
The volcanic ash and burnt clay provide the amorphous silica (a non-crystalline, more reactive state of silica), alumina, and ferrite materials required for the cementing processes. When combined with hydrated or slaked lime (CH), conditions allow the materials to hydrate, creating a strong bond with the aggregate.
Comparison between Ancient and Modern Concrete
[LEFT]: Figure 14: Showing examples of porous rocks (dark) and tuff (lighter) commonly used in aggregate (Pompeii).
[RIGHT]: Figure 15: Harder igneous rock was also used as aggregate (Pompeii).
Modern concrete is composed of 11% Portland cement, 67% aggregate, and 22% air and water (PCA, 2013). Modern aggregate consists of gravel, crushed stone, and sand of various sizes; a good variety of aggregate sizes allows the concrete to bond into a stronger, interlocking structure as it cures. Gotti et al. (2008) determined through Vitruvius that Ancient Roman concrete was 65% cement-mortar paste and 35% aggregate, with an ambiguous and liberal application of water to the cement paste. The aggregate was generally made of porous rocks, including tuff and sandstone, as well as finer aggregate such as sand and, occasionally, broken or discarded pottery pieces. Figures 14, 15, 16, and 17 illustrate the various aggregates found in Ancient Roman concrete.
[LEFT]: Figure 16: Small pieces of various volcanic and sedimentary rock are mixed along with the cement before being applied with the rubble stones (Ostia Antica).
[RIGHT]: Figure 17: Occasional tile pieces are visible today in the ancient concrete (Ostia Antica).
A curious point is how strongly the cement-aggregate ratios contrast between ancient and modern concrete: Ancient Roman concrete had a much higher cement paste composition of 65%, while, almost reversed, modern concrete has 67% aggregate. One possible explanation for this discrepancy can be seen in the figures above. The Ancient Romans lacked a good aggregate-size gradient, or variation; their aggregate consisted mostly of medium-sized chunks of rock and stone alongside relatively small sand grains and rock flecks. To compensate for the large spaces between the aggregate, the Ancient Romans had to apply more cement paste to generate a good structural bond.
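A minimal sketch comparing the two bulk compositions quoted above; treating "air and water" as part of the modern paste is an illustrative simplification, not a figure from the sources:

```python
# Bulk compositions quoted above (percent, as given by the sources):
MODERN  = {"cement": 11, "aggregate": 67, "air_and_water": 22}   # PCA, 2013
ANCIENT = {"paste": 65, "aggregate": 35}                         # Gotti et al., 2008

# Roughly treat modern cement + air/water as "paste" for comparison.
modern_paste = MODERN["cement"] + MODERN["air_and_water"]
print(f"modern  paste:aggregate ratio = {modern_paste / MODERN['aggregate']:.2f}")
print(f"ancient paste:aggregate ratio = {ANCIENT['paste'] / ANCIENT['aggregate']:.2f}")
# ~0.49 vs ~1.86: the poorly graded Roman aggregate left large voids
# that had to be filled with extra cement paste.
```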
Table 3 below summarizes the ancient pozzolan materials alongside the general chemical composition of modern Portland cement. At first glance, significant differences exist between all chemical components, most notably the CaO content (Moore, 1995; Winter, 2013).
This large difference can be accounted for by recalling how the Ancient Romans used their cement. Recall that they first made a lime-based mortar through slaking; the hydrated lime was then mixed with the pozzolan materials on site directly before placing. The pozzolan material in Table 3 is shown before the addition of the lime (CaO), which explains the only trace amounts of CaO.
On the other hand, Portland cement, through sintering in the kiln, combines the lime (CaO) into the powder-like cement that is later hydrated with water on site before placing. If the lime component (CaO) is neglected from the Portland cement and the chemical composition recalculated, the cements appear much more similar, as illustrated in Table 4. The chemical compositions of the cementing materials are then much more alike, and this shared composition is what allows the complex process of hydration to occur in both ancient and modern cement.
The only remaining difference is the increased amount of ferrite in modern cements. Hydration is not greatly affected by this change, because hydration, and strength, depends more on the C-S-H compound formed from the calcium and silica components. If the ferrite is negligible during hydration, it is probably a factor in the sintering of modern cement, the other major difference between modern and ancient cement processes. It is postulated that the liquid ferrite phase in the kiln is increased to further promote ion mobility and fusing speed during sintering; thus, the abundant ferrite is present to increase the efficiency of industrialized cement production in the modern world.
Concrete is an artificial rock that has become an essential building material in the modern world. The technology was first developed by the Ancient Romans over 2000 years ago, partially through geological luck and partially through keen observation. The volcanic ash settling in the southern regions of Italy provided the Ancient Romans with pozzolan, an essential ingredient for the hydration and curing of concrete. The technology was then forgotten and lost with the fall of the Roman Empire, starting in the 3rd century AD. Only in the past 250 years has modern concrete been redeveloped. Although modern and ancient concrete technology differs in many ways, especially in cement processing and mixing methods, the fundamental chemical composition remains largely the same. The remaining differences appear in placement methods, aggregate, and small cement mixture variations due to modern industrialization. One can only wonder what modern civilization would be like today if Ancient Roman engineering had survived and continued to advance.
Adam, J. (1994). Roman Building: Methods and Techniques. (A. Mathews, Trans.). Abington, England: B.T. Batsford Ltd. (Original work published 1989)
Gotti, E., Oleson, J.P., Bottalico, L., Brandon, C., Cucitore, R., & Hohlfelder, R.L. (2008). A Comparison of the Chemical and Engineering Characteristics of Ancient Roman Hydraulic Concrete with a Modern Reproduction of a Vitruvian Hydraulic Concrete. Archaeometry, 50, 576-590. doi: 10.1111/j.1475-4754.2007.00371.x
Moore, D. (1995). The Roman Pantheon: The Triumph of Roman Concrete. Mangilao, Guam: David Moore.
PCA, Portland Cement Association. (2013). Concrete Basics. Retrieved from http://www.cement.org/basics/concretebasics_concretebasics.asp
PCA, Portland Cement Association. (2013). Supplementary Cementing Materials. Retrieved from http://cement.org/basics/concretebasics_supplementary.asp
Vitruvius. (1914). The Ten Books on Architecture. (M.H. Morgan and A.A. Howard, Trans.). London, England: Humphrey Milford Oxford University Press. (Original work published c. 15 BC)
Winter, N., (2013). Cement hydration. Retrieved from http://understanding-cement.com/hydration
Winter, N., (2013). Portland cement clinker – overview. Retrieved from http://understanding-cement.com/clinker
|
How often has this scenario played out in your classroom? You've planned a fantastic lesson that involves students working together and learning together. In your well-crafted plans, the students are engaged in the activities, supporting one another and growing as a learning community. Sounds wonderful, doesn't it? So why don't these activities always work out as planned?
Although there may be many factors at play, it could simply be that children are unable to work together properly because they do not know how to support one another. By using cooperative games, children become critical thinkers, learn to work with one another, and apply skills to accomplish team goals.
Cooperative games help children develop the essential skills of cooperation, communication, empathy, and conflict resolution by giving them an opportunity to work together toward a common goal. These games require the skills of everyone in the group, not just one or two people, and there is no sole winner. All children benefit, since no one is left out and the focus is on the success of the team as a whole. Throughout this process of cooperation, children think critically about their strategies and make quick decisions while verbally and physically interacting with one another, thereby developing their cognitive abilities. Children learn how individual efforts unite to help the team accomplish its goals. Think about the child in your class who has great ideas but is not athletic or competitive: how do we address such needs when that child does not want to participate in the competitive aspect of games?
Contrast this with competitive games, in which direct competition between individuals or groups can produce poor self-esteem in those on the losing end. Not all children have the competitive edge needed to win. This is why cooperative games can play such a big role in teaching and reinforcing "peacemaking" skills.
I played a game called "Everybody Wins Race." Two teams of four line up side by side, arm in arm. They race across the room and back, but the two teams must cross the finish line at the same time.
Creating Guidelines & Goals
Have participants create ground rules or guidelines before you begin cooperative games. Brainstorm potential rules and write them down, but avoid too many rules. Here are some basic rules:
1. Safety first. No one gets hurt. Never compromise the safety of yourself or others.
2. Everyone plays (i.e. no one is excluded and the games are structured so that everyone can join in)
3. Challenge by choice. If someone wants to sit out, that’s okay.
4. Everyone has fun
5. Everyone wins
Can you add more thoughts comparing cooperative and competitive games?
|
Discover the structure of the materials that make up our modern world and learn how this underlying structure influences the properties and performance of these materials.
Structure determines so much about a material: its properties, its potential applications, and its performance within those applications. This course from MIT’s Department of Materials Science and Engineering explores the structure of a wide variety of materials with current-day engineering applications.
The course begins with an introduction to amorphous materials. We explore glasses and polymers, learn about the factors that influence their structure, and learn how materials scientists measure and describe the structure of these materials.
Then we begin a discussion of the crystalline state, exploring what it means for a material to be crystalline, how we describe directions in a crystal, and how we can determine the structure of a crystal through x-ray diffraction. We explore the underlying crystalline structures that underpin so many of the materials that surround us. Finally, we look at how tensors can be used to represent the properties of three-dimensional materials, and we consider how symmetry places constraints on the properties of materials.
We move on to an exploration of quasi-, plastic, and liquid crystals. Then we learn about the point defects that are present in all crystals and how the presence of these defects leads to diffusion in materials. Next, we explore dislocations in materials: we introduce the descriptors used to describe dislocations, learn about dislocation motion, and consider how dislocations dramatically affect the strength of materials. Finally, we explore how defects can be used to strengthen materials, and we learn about the properties of higher-order defects such as stacking faults and grain boundaries.
What will you learn
- How we characterize the structure of glasses and polymers
- The principles of x-ray diffraction that allow us to probe the structure of crystals
- How the symmetry of a material influences its materials properties
- The properties of liquid crystals and how these materials are used in modern display technologies
- How defects impact numerous properties of materials—from the conductivity of semiconductors to the strength of structural materials
|
The first thermonuclear bomb was exploded in 1952 at Enewetak by the United States, the second in 1953 by Russia (then the USSR). Great Britain, France, and China have also exploded thermonuclear bombs, and these five nations comprise the so-called nuclear club—nations that have the capability to produce nuclear weapons and admit to maintaining an inventory of them. The three smaller Soviet successor states that inherited nuclear arsenals (Ukraine, Kazakhstan, and Belarus) relinquished all nuclear warheads, which have been removed to Russia. Several other nations either have tested thermonuclear devices or claim to have the capability to produce them, but officially state that they do not maintain a stockpile of such weapons; among these are India, Israel, and Pakistan. South Africa's apartheid regime built six nuclear bombs but dismantled them later.
The presumable structure of a thermonuclear bomb is as follows: at its center is an atomic bomb; surrounding it is a layer of lithium deuteride (a compound of lithium and deuterium, the isotope of hydrogen with mass number 2); and around that is a tamper, a thick outer layer, frequently of fissionable material, that holds the contents together in order to obtain a larger explosion. Neutrons from the atomic explosion cause the lithium to split into helium, tritium (the isotope of hydrogen with mass number 3), and energy. The atomic explosion also supplies the temperatures needed for the subsequent fusion of deuterium with tritium, and of tritium with tritium (50,000,000°C and 400,000,000°C, respectively). Enough neutrons are produced in the fusion reactions to cause further fission in the core and to initiate fission in the tamper.
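As an aside on the fusion step, a minimal sketch computing the energy released by the deuterium-tritium reaction from the mass defect; the particle masses are standard reference values, not figures from the encyclopedia entry:

```python
# D + T -> He-4 + n : energy release from the mass defect, E = (dm) * c^2.
U_TO_MEV = 931.494      # MeV per unified atomic mass unit
MASSES = {              # particle masses in u (standard reference values)
    "D": 2.014102, "T": 3.016049, "He4": 4.002602, "n": 1.008665,
}

dm = (MASSES["D"] + MASSES["T"]) - (MASSES["He4"] + MASSES["n"])
print(f"mass defect: {dm:.6f} u  ->  {dm * U_TO_MEV:.1f} MeV released")
# ~17.6 MeV per fusion event, most of it carried away by the neutron.
```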
Since the fusion reaction produces mostly neutrons and very little that is radioactive, the concept of a clean bomb has resulted: one having a small atomic trigger, a less fissionable tamper, and therefore less radioactive fallout. Carrying this progression in the opposite direction results in the dirty bomb, which has a cobalt tamper. Instead of generating additional explosive force from fission of the uranium, the cobalt is transmuted into cobalt-60, which has a half-life of 5.26 years and produces energetic (and thus penetrating) gamma rays. The half-life of Co-60 is just long enough that airborne particles will settle and coat the earth's surface before significant decay has occurred, making it impractical to hide in shelters. This prompted physicist Leo Szilard to call it a doomsday device, since it is capable of wiping out life on earth.
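A minimal sketch of the decay arithmetic behind that claim, using the 5.26-year half-life quoted above:

```python
CO60_HALF_LIFE_YEARS = 5.26   # half-life quoted above

def fraction_remaining(t_years: float) -> float:
    """Fraction of the original cobalt-60 left after t years of decay."""
    return 0.5 ** (t_years / CO60_HALF_LIFE_YEARS)

for t in (1.0, 5.26, 10.0, 50.0):
    print(f"after {t:5.2f} years: {fraction_remaining(t):6.1%} remains")
```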
Like other types of nuclear explosion, the explosion of a hydrogen bomb creates an extremely hot zone near its center. In this zone, because of the high temperature, nearly all of the matter present is vaporized to form a gas at extremely high pressure. A sudden overpressure, i.e., a pressure far in excess of atmospheric pressure, propagates away from the center of the explosion as a shock wave, decreasing in strength as it travels. It is this wave, containing most of the energy released, that is responsible for the major part of the destructive mechanical effects of a nuclear explosion. The details of shock wave propagation and its effects vary depending on whether the burst is in the air, underwater, or underground.
See R. Rhodes, Dark Sun: The Making of the Hydrogen Bomb (1995).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
ANIMAL FORAGING BEHAVIOR
Animals and humans alike must forage for food in order to survive. Foraging is the instinctive behavior of searching for and obtaining food. Several factors affect the ability to forage and acquire profitable resources.
FORAGING IN THE WILD
THE SIMPLE FACT: A wild bird is challenged to find food daily in order to survive, and this requires a substantial amount of very hard work.
Living in their natural environment, all birds must allocate time and effort to forage for food. Every food item has both a cost (time and energy) and a benefit (net food value). Costs include wasted energy, loss of valuable time, and risk from exposure to predators; the benefit is net energy intake (calories consumed) per unit of time. The relative value of each of these determines how much "profit" a specific food item represents. Birds exhibit the ability to modify specific behaviors to achieve a balance between costs and benefits, and foraging strategies may vary with different patterns of resource availability. A resource patch (a defined area with some probability of available food items) must first be located. Patch choice and value assessment are governed by the economic law of diminishing returns: patch "profitability" is determined by the energy yield of food items divided by handling and processing time after the resource has been located. As a result of foraging, the resources within a patch become depleted; however, the risk of poor performance is minimal, as the forager will quickly search for more fertile ground. (A toy model of this trade-off is sketched below.)
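A toy model of this cost-benefit calculation, with a saturating gain curve standing in for diminishing returns; all parameter values are illustrative assumptions, not field data:

```python
import math

def cumulative_gain(t: float, e_max: float = 100.0, k: float = 0.5) -> float:
    """Energy extracted after t minutes in a patch (saturating curve)."""
    return e_max * (1.0 - math.exp(-k * t))

def profitability(t: float, travel_time: float = 5.0) -> float:
    """Net rate of gain: energy obtained divided by travel plus patch time."""
    return cumulative_gain(t) / (travel_time + t)

# The optimal departure time maximizes the overall rate of energy gain.
best_t = max(range(1, 61), key=profitability)
print(f"leave the patch after ~{best_t} min "
      f"(rate {profitability(best_t):.1f} energy units/min)")
```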
COMPANION BIRD FORAGING
AS A RULE: Our pet birds are only a few generations removed from their free-ranging counterparts; that notwithstanding, they continue to exhibit similar or like behaviors.
Captive-bred, hand-raised birds do not enjoy the benefits of social learning or the social transmission of foraging information. In a controlled environment, a daily food supply is available 24/7, typically located just inches away from the primary perch, so that, as if by design, there is no stimulation associated with food acquisition. This human intervention suppresses the bird's natural instinct to forage in an environment devoid of enrichment. The time previously budgeted for food acquisition, six to eight hours per day, is reduced to 30 minutes; of equal importance, there is no effort or work involved. The biological clock is now completely out of sync. The end result of this dilemma is that the bird has an inordinate amount of inactive, unproductive time, no means of filling the void, and no opportunity to make choices. Consider what is lacking from a pet bird's time-activity budget: food finding and acquisition, stressors related to conditions or changes in the environment, social interactions, predator avoidance, territorial defense, mate selection, courtship rituals, raising offspring, nest construction, and so on. The behavioral consequences of these losses can only produce a result that neither bird nor human understands. To complicate matters further, most pet birds have had their ability to fly impaired, albeit for their safety. When flying and food-foraging behaviors are eliminated from a parrot's daily activities, the most significant long-term effect may be the restriction placed on the bird's ability to make cognitive choices.
The obvious solution to this problem is to offer some positive enrichment in the environment: encourage the bird to spend a significant portion of the day foraging, that is, working for food. Optimally this requires a change in the feeding regimen, moving away from free food availability and initiating a foraging system whereby the bird must work to obtain food. The responsibility for making a successful transition lies with the primary caregiver. Within the avian professional community of researchers, avian veterinarians, and behaviorists, we are learning that how a bird eats may be equally as important as what a bird eats. It is universally agreed that creating a more stimulating environment and promoting natural foraging behavior will improve your bird's overall psychological and physical well-being. And so it goes…we all must work in order to feed ourselves, including Mr. or Mrs. Parrot.
THE SIMPLE FACT: Man, along with all other members of the animal kingdom, must forage for food in order to survive.
You, too, are foraging as you trek to the market and patiently guide the shopping cart through the aisles, checking prices, choosing preferred brands, fulfilling menus, and selecting the best fruit, vegetables, meats, and so on. You must return home and stow your bounty until it is time to prepare a meal; or you may decide to have dinner at a restaurant, in which case you will travel to your destination, partake in some light social interaction, make your dinner selection, eat, and return home. It is no coincidence that this scenario sounds so familiar!
|
How to teach one Hour of Code in after-school
1) Watch this how-to video
2) Choose a tutorial:
We provide a variety of fun, hour-long tutorials for participants of all ages, created by a variety of partners. Try them out!
All Hour of Code tutorials:
- Require minimal prep-time for organizers
- Are self-guided - allowing kids to work at their own pace and skill-level
Need a lesson plan for your afterschool Hour of Code? Check out this template!
3) Promote your Hour of Code
Promote your Hour of Code with these tools and encourage others to host their own events.
4) Plan your technology needs - computers are optional
The best Hour of Code experience includes Internet-connected computers. But you don’t need a computer for every child, and you can even do the Hour of Code without a computer at all.
Plan Ahead! Do the following before your event starts:
- Test tutorials on student computers or devices. Make sure they work properly on browsers with sound and video.
- Provide headphones for your class, or ask students to bring their own, if the tutorial you choose works best with sound.
Don't have enough devices? Use pair programming. When students partner up, they help each other and rely less on the teacher. They’ll also see that computer science is social and collaborative.
Have low bandwidth? Plan to show videos at the front of the class, so each student isn't downloading their own videos. Or try the unplugged / offline tutorials.
5) Start your Hour of Code off with an inspiring video
Kick off your Hour of Code by inspiring participants and discussing how computer science impacts every part of our lives.
Show an inspirational video:
It’s okay if you are all brand new to computer science. Here are some ideas to introduce your Hour of Code activity:
- Explain ways technology impacts our lives, with examples both boys and girls will care about (Talk about apps and technology that is used to save lives, help people, connect people etc).
- List things that use code in everyday life.
- See tips for getting girls interested in computer science here.
Need more guidance? Download this template lesson plan.
Want more teaching ideas?
Check out best practices from experienced educators.
Direct participants to the activity
When someone comes across difficulties it's okay to respond:
- “I don’t know. Let’s figure this out together.”
- “Technology doesn’t always work out the way we want.”
- “Learning to program is like learning a new language; you won’t be fluent right away.”
What to do if someone finishes early?
- Encourage participants to try another Hour of Code activity at hourofcode.com/learn
- Or, ask those who finish early to help others who are having trouble.
Other Hour of Code resources for educators:
What comes after the Hour of Code?
The Hour of Code is just the first step on a journey to learn more about how technology works and how to create software applications. To continue this journey:
- Encourage students to continue to learn online.
Attend a 1-day, in-person workshop to receive instruction from an experienced computer science facilitator. (US educators only)
|
True and false color views of an equatorial "hotspot" on Jupiter. These images cover an area 34,000 kilometers by 11,000 kilometers. The top mosaic combines the violet (410 nanometers or nm) and near-infrared continuum (756 nm) filter images to create an image similar to how Jupiter would appear to human eyes. Differences in coloration are due to the composition and abundances of trace chemicals in Jupiter's atmosphere. The bottom mosaic uses Galileo's three near-infrared wavelengths (756 nm, 727 nm, and 889 nm displayed in red, green, and blue) to show variations in cloud height and thickness. Bluish clouds are high and thin, reddish clouds are low, and white clouds are high and thick. The dark blue hotspot in the center is a hole in the deep cloud with an overlying thin haze. The light blue region to the left is covered by a very high haze layer. The multicolored region to the right has overlapping cloud layers of different heights. Galileo is the first spacecraft to distinguish cloud layers on Jupiter.
North is at the top. The mosaics cover latitudes 1 to 10 degrees and are centered at longitude 336 degrees West. The smallest resolved features are tens of kilometers in size. These images were taken on December 17, 1996, at a range of 1.5 million kilometers by the Solid State Imaging system aboard NASA's Galileo spacecraft.
The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.
This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
|
Science, technology, engineering and math skills were in full force during a STEM activity for fourth-graders at Wading River School as part of the district's Mystery Science program, an innovative approach to learning that is aligned to the Next Generation Science Standards.
The students built a bumper roller coaster with hills to examine how height affects the energy produced by a roller coaster and explored what occurs when the second hill of a coaster is higher than the first. They released a marble at different points on the track to get both a target marble and the starting marble into a cup at the end of the track.
The experiment helped the students build a deeper understanding of energy and the energy transfer that happens when two objects collide. The hands-on activity brought engineering concepts into focus and showed how testing their hypotheses informs their results.
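For readers who want the physics behind the activity, a minimal sketch of the energy conversion involved: a marble released from height h trades potential energy m·g·h for kinetic energy ½·m·v², giving v = √(2gh) if friction and the marble's rotation are ignored (a simplification):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_at_bottom(height_m: float) -> float:
    """Marble speed after descending from rest, ignoring friction and
    rotation: m*g*h converts entirely to (1/2)*m*v^2."""
    return math.sqrt(2 * G * height_m)

for h in (0.1, 0.2, 0.4):
    print(f"release height {h:.1f} m -> {speed_at_bottom(h):.2f} m/s at the bottom")
```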
|
The Distribution of Sample Variance (when the mean value of the random variable is known) as a Function of Sample Size
Suppose the probability density function for x is uniform on the interval from -0.5 to +0.5; that is, p(x) = 1 for -0.5 ≤ x ≤ +0.5 and p(x) = 0 otherwise. The square of x can then only take values between 0 and 0.25, and the probability density function for w = x² works out to f(w) = 1/√w for 0 < w ≤ 0.25 (and 0 otherwise).
Below are shown the histograms for 2000 repetitions of taking samples of n random variables and computing the sum of the squares of a random variable which is uniformly distributed between -0.5 and +0.5. With larger n the distribution becomes more concentrated, so the horizontal scale is adjusted with sample size. Although the random variable is distributed between -0.5 and +0.5, its square is distributed between 0 and 0.25 and the positive square root between 0 and +0.5.
As can be seen, as the sample size n gets larger the distribution of sample variance more closely approximates the shape of the normal distribution.
Although the distribution for n=1 is decidedly non-normal, for n=16 the distribution looks quite close to a normal distribution even though the sample value can take on only positive values.
If the square root is taken of the mean value of the squares the distributions of the results are as is shown below:
The positive square root of the square of the random variable is distributed from 0 to 0.5. Although the distributions for larger sample sizes look generally like normal distributions, they are transforms of normal distributions. (A simulation sketch reproducing the experiment numerically follows.)
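A minimal sketch reproducing the experiment numerically; it treats the statistic as the mean of squares (the sample variance when the mean, zero, is known), matching the figures' horizontal rescaling:

```python
import random

def mean_square(n: int) -> float:
    """Mean of squares of n draws from Uniform(-0.5, 0.5): the sample
    variance when the mean (zero) is known."""
    return sum(random.uniform(-0.5, 0.5) ** 2 for _ in range(n)) / n

for n in (1, 4, 16, 64):
    batch = [mean_square(n) for _ in range(2000)]
    m = sum(batch) / len(batch)
    sd = (sum((b - m) ** 2 for b in batch) / len(batch)) ** 0.5
    print(f"n={n:3d}: mean {m:.4f}, std dev {sd:.4f}")
# The batch mean stays near 1/12 (~0.0833, the variance of the uniform
# distribution) while the spread shrinks roughly as 1/sqrt(n).
```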
|
Bacterial DNA Replication and Cell Division Help
Bacterial DNA Replication
The circular chromosome of bacteria presents special problems for replication. Circular chromosomes usually have a single site, called the origin, or ori, site, at which replication originates. By contrast, many ori sites exist on each chromosome of eukaryotes. Once the replication process starts, it usually proceeds bidirectionally from the ori site to form two replication forks.
As the two strands of a right-handed, double-helical, circular DNA unwind during replication, the molecule tends to become positively supercoiled or overwound, i.e., twisted in the same direction as the strands of the double helix. These supercoils are so tight that they would interfere with further replication if they were not removed. Topoisomerases are a group of enzymes that can change the topological or configurational shape of DNA. DNA gyrase is a bacterial topoisomerase that makes double-stranded cuts in the DNA, holds on to the broken ends so they cannot rotate, passes an intact segment of DNA through the break, and then reseals the break on the other side (Fig. 10-2).
Fig. 10-2. A proposed mechanism whereby DNA gyrase "pumps" negative supercoiling into DNA. A relaxed, covalently closed, circular DNA molecule (a) is bent into a configuration for strand passage (b). DNA gyrase makes double-stranded cuts (c), holds on to the ends, passes an intact segment through the break, and reseals the break on the other side (d).
This action of DNA gyrase quickly removes positive supercoils and momentarily relaxes the DNA molecule into a more energetically stable state. However, with the expenditure of energy, DNA gyrase normally pumps negative supercoiling or under winding (twisting in a direction opposite to the turns of the double helix) into relaxed DNA circles so that virtually all DNAs in both prokaryotes and eukaryotes naturally exist in the negative super-coiled state. Relaxed circles and positively super-coiled DNA exist only in the laboratory. Localized regions of DNA transiently and spontaneously unwind to single-stranded "bubbles" and then return to their former topology as hydrogen bonds between complementary base pairs are broken and reformed by thermal agitation. The strain of underwinding is thus momentarily relieved in a superhelix by an increase in the number, size, and duration of these bubbles. An equilibrium normally exists between these super-coiled and "bubbled" states. More bubbles form as the temperature increases.
At each replication fork, an enzyme called helicase unwinds the two DNA strands. Single-stranded DNA-binding (SSB) proteins protect the single-stranded regions in the replication forks from forming intrastrand base pairings that could create a tangle of partially double-stranded segments and interfere with replication. The enzyme primase synthesizes short RNA primers using a region on each strand as a template. Primers are required for DNA polymerase to begin extending the new DNA strand because DNA polymerase requires a 3'OH to initiate the bonding reaction.
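A toy illustration of two of these rules, complementary base pairing and the primer requirement; the function name and the error message are illustrative, not a model of the actual enzymes:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize(template: str, primer_present: bool) -> str:
    """Toy model: DNA polymerase extends only from an existing primer's
    3'-OH, adding bases complementary to the template strand."""
    if not primer_present:
        raise ValueError("polymerase cannot initiate without an RNA primer")
    return "".join(COMPLEMENT[base] for base in template)

print(synthesize("ATGGCATTC", primer_present=True))  # TACCGTAAG
```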
Three DNA polymerase enzymes (referred to as pol I, pol II, and pol III) have been found in E. coli. Pol III is the principal replicating enzyme. Gaps left by pol III are filled by pol I, and DNA ligase seals the nicks. The function of pol II is not well established, although it is known that it is not involved in RNA primer replacement. In addition to their 5' to 3' synthetic activity, both pol I and pol III have 3' to 5' exonuclease activity, which plays a "proofreading" role by removing mismatched bases mistakenly inserted during chain polymerization. Pol I also has 5' to 3' exonuclease activity by which it normally removes primers and replaces them with complementary DNA sequences after polymerization has begun. About halfway through the above replication process, the replicative intermediate molecule looks like the Greek letter theta (θ), so is referred to as theta replication (Fig. 10-3).
|
Research conducted at UCLA and published in the Journal of Biological Chemistry (December 2004), which has been confirmed by further research published in the Journal of Agricultural and Food Chemistry (April 2006), provides insight into the mechanisms behind curcumin's protective effects against Alzheimer's disease.
Alzheimer’s disease results when a protein fragment called amyloid-B accumulates in brain cells, producing oxidative stress and inflammation, and forming plaques between nerve cells (neurons) in the brain that disrupt brain function.
Amyloid is a general term for protein fragments that the body produces normally. Amyloid-B is a protein fragment snipped from another protein called amyloid precursor protein (APP). In a healthy brain, these protein fragments are broken down and eliminated. In Alzheimer’s disease, the fragments accumulate, forming hard, insoluble plaques between brain cells.
The UCLA researchers first conducted test tube studies in which curcumin was shown to inhibit amyloid-B aggregation and to dissolve amyloid fibrils more effectively than the anti-inflammatory drugs ibuprofen and naproxen. Then, using live mice, the researchers found that curcumin crosses the blood brain barrier and binds to small amyloid-B species. Once bound to curcumin, the amyloid-B protein fragments can no longer clump together to form plaques. Curcumin not only binds to amyloid-B, but also has anti-inflammatory and antioxidant properties, supplying additional protection to brain cells.
|
Some bacteria can form spores (survival capsules) that are particularly resistant to heat. Since sporogenous bacteria can also cause food poisoning and a reduction in food quality, they constitute a significant threat to the food industry.
If spores are to pose a risk, they have to "wake up" from a state of hibernation and return to their normal growth cycle through a process called germination. Irene Stranden Løvdal's doctoral research has studied the germination process in four different species of Bacillus. Her findings are of importance for the production and safety of foods with a long shelf-life.
Refrigerated foods with a shelf-life of several weeks are often heat-processed at temperatures between 65 and 95 °C. This kills the majority of bacteria, but Bacillus spores can survive, germinate and develop into growing bacteria. Thermal treatment of this kind will in fact improve the growth potential of sporogenous bacteria because the heat kills the competing bacterial flora and stimulates the surviving spores so that germination can commence more rapidly. The thermal treatment can thus increase the risk of spore germination in food and result in subsequent bacterial growth and a risk of quality deterioration and food poisoning.
Løvdal has investigated how the germination characteristics of the spores of four different Bacillus species are affected by heat treatment. She has used knowledge about the spores' response to temperature to experiment with a method which can reduce the spore level without increasing the overall thermal treatment of the food product.
The method, called double heat treatment, involved warming up the food first in order to activate the spores, then lowering the temperature to allow germination and then increasing the heat once more in order to kill the germinated spores. The effect of this procedure varied from food to food but in some cases, the level of spores was reduced by more than 99.9%.
Løvdal has also studied some of the more fundamental, genetic elements linked to the germination of spores belonging to the species B. licheniformis. Closely related, sporogenous bacteria have a genomic area called gerA. This codes for a receptor which registers the presence of specific nutrients (germinants) that can trigger germination. Løvdal discovered that this genomic area in B. licheniformis is important for germination processes initiated by amino acid germinants.
The doctoral research was carried out at The Norwegian School of Veterinary Science (NVH) and at Nofima in Stavanger. Researchers and fellows at The Norwegian Defence Research Establishment were also key collaborators.
More information: Irene Stranden Løvdal defended her doctoral thesis on 3rd February 2012 at The Norwegian School of Veterinary Science. The thesis is entitled: "Germination of Bacillus species related to food spoilage and safety."
|
Evapotranspiration of Hawaiʻi
This website provides a set of maps of the spatial patterns of evapotranspiration for the major Hawaiian Islands. To estimate evapotranspiration, numerous other variables (such as solar radiation, air temperature, and relative humidity, to name a few) had to be estimated; those are included here, too. Most variables are mapped for each hour of the average 24-hour cycle of each month and for each hour of the average 24-hour cycle for the whole year. The average value for each month and the annual average are also mapped. In developing the evapotranspiration estimates, more than 12,000 maps were created. Many of those maps are available via this website, in the form of downloadable files and, for a selection of variables, on the interactive mapping tool.
Be sure to check out the interactive map! It may need a few minutes to load on your first visit.
The Hydrologic Cycle
Water in our environment is cycled by processes that move and transform water. Clouds form when moist air is cooled. Precipitation happens when water drops or ice particles become big enough to fall from clouds. Rainwater can recharge soil water, groundwater, streams, rivers, and lakes. Some is used by plants, which transpire the water back to the air. And some is evaporated directly from wet leaves and soil. All the transpired and evaporated water then becomes available to form clouds and rain. This sequence is called “the hydrologic cycle” and it sustains life on earth. Understanding and quantifying the movement of water in the hydrological cycle is needed to help manage our water resources, protect our natural environment, and anticipate how climate change, land development, and species invasion will affect natural ecosystems, agriculture, and domestic water availability in the future.
Understanding the hydrologic cycle starts with measuring and mapping rainfall. In the Rainfall Atlas of Hawai‘i, detailed analysis of rainfall data provides a comprehensive picture of the spatial patterns of rainfall in Hawai‘i. Equally important, though much less obvious and much more difficult to assess, is evapotranspiration, the combination of processes that takes water from the surface and transforms it into water vapor in the air. These processes include the movement of water through plant roots and the evaporation of that water through pores in the plant’s leaves, a process called transpiration. Water on the outsides of leaves, such as water deposited by rain or fog interception, can be evaporated, a process called wet canopy evaporation. Water can also evaporate directly from moist soil, soil evaporation. The sum of these three components is called evapotranspiration (ET).
ET is highly variable through time and from place to place. Many variables influence evapotranspiration, including those related to climate (e.g., solar radiation, air temperature, humidity, and wind), the characteristics of the vegetation (e.g., plant type, height, density, amount of leaves, and root depth), and the properties and status of the soil (e.g., soil texture, porosity, water holding capacity, and soil moisture content). Direct measurements of ET are difficult and expensive, and cannot be done extensively enough to capture the spatial ET patterns. Therefore, it is necessary to estimate ET using models that incorporate information on the climate, vegetation, and soil factors that influence ET. More information can be found on our Methods page.
The Evapotranspiration of Hawai‘i website provides access to a set of maps of the spatial patterns of evapotranspiration (ET), its components (transpiration, wet canopy evaporation, and soil evaporation), potential evapotranspiration (PET), and the climatic and land characteristic variables used to estimate them for the major Hawaiian Islands. In general, each variable is presented in the form of mean hourly maps for each hour of the diurnal cycle of each month and of the whole year, mean monthly maps for each month, and a mean annual map. The maps represent our best estimates of the mean values of each variable based on observations taken during the past decade or two.
This web site was developed to make the ET, PET, and climate maps, data, and related information easily accessible. The maps depict patterns by color. The interactive map allows users to see the spatial patterns of each variable, zoom in on areas of particular interest, and navigate to specific locations with the help of a choice of different base maps. Clicking on any location gives the mean value of the selected variable, graphs of its mean annual cycle (mean monthly values) and mean diurnal cycle (mean hourly values), and tables of the mean hourly, monthly, and annual values of all variables for the selected location.
ET, PET, and climate maps can also be downloaded in various forms. Our analysis produced digital maps called rasters or grids. On these maps, the islands are divided into 8.1-arcsecond spatial units, or approximately 234 × 250 m (770 × 820 ft). Each variable is estimated for each spatial unit. GIS (Geographic Information Systems) users can obtain the data as raster files. Alternatively, image files showing spatial patterns by color can be downloaded.
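For readers who download the raster files, here is a minimal sketch of sampling one grid at a point. It is only an illustration: the filename and coordinates are placeholders, and rasterio is one common Python library for this task, not something the website prescribes.

```python
import rasterio  # a common geospatial raster library; any GIS tool works too

# Hypothetical filename; substitute a grid downloaded from this website.
with rasterio.open("et_mean_annual.tif") as src:
    grid = src.read(1)                   # the raster's single band of values
    row, col = src.index(-155.5, 19.6)   # (lon, lat) of a point of interest
    print("Mean annual value at this point:", grid[row, col])
```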
This website is part of a family of websites providing data on the climate of Hawai‘i. The Rainfall Atlas of Hawai‘i covers only rainfall. The other three websites each provide data for all variables, but each is presented with a particular focus.
|
Although their dining habits tend toward the scatological, dung beetles also have a lofty sensibility, relying on the moonlit skies to provide them with a sense of direction. Behavioral zoologist Marie Dacke at the University of Lund in Sweden suspected the beetles were using the moon for navigational cues when she noticed they walk in crooked paths on overcast nights but move straight ahead when the moon is out. All natural light has a polarized component: some portion where all the rays are traveling in one plane. This might be what the beetles were following, Dacke thought, but moonlight is so much dimmer than sunlight that researchers had thought its polarized light would be impossible for animals to detect. To test her hypothesis, Dacke and her colleagues covered the beetles with filters that rotated the polarization of the moon’s rays by 90 degrees. Sure enough, each beetle made a corresponding right-angled turn.
Dacke theorizes that the dung beetle’s sophisticated nighttime method of guidance helps it gather food more efficiently. As night falls, each beetle finds a dung pile, rolls it up into a ball, and quickly takes off with the prize to keep it away from rivals. Traveling straight ahead, reckoning by the moon’s polarized rays, often provides the fastest route to safety. “It’s the best way for them to escape the fierce competition of the dung pile,” Dacke says. This is the first recorded instance of an animal navigating by polarized moonlight, but she suspects that other insects, such as night-flying bees, may have similar capabilities.
|
October 16, 2012 in Animals & Insects
The 25 most endangered primates in the world have been identified in a new report released at the UN Convention on Biological Diversity’s COP11 meeting earlier today. The report, titled “Primates in Peril: The World’s 25 Most Endangered Primates 2012-2014,” was created by the Primate Specialist Group of IUCN’s Species Survival Commission (SSC) and the International Primatological Society (IPS), in collaboration with Conservation International (CI) and the Bristol Conservation and Science Foundation (BCSF).
Primates are the closest living relatives of humans, and the majority of them are rapidly moving towards extinction as their populations and environments are reduced, primarily by humans. All of the world’s apes, monkeys, and ‘true’ lemurs are nearing the brink, and a large number of them have already lost so much genetic diversity that it almost seems inevitable that they will become extinct in the not too distant future. In particular, many rare subspecies of apes are nearing the brink, including the lion-eating Bili Apes (chimps).
The report identifies as the primary causes: deforestation and destruction of tropical forest habitat, encroaching development, illegal wildlife trade, and commercial bush meat hunting.
“The list features nine primate species from Asia, six from Madagascar, five from Africa and five from the Neotropics. In terms of individual countries, Madagascar tops the list with six of the 25 most endangered species. Vietnam has five, Indonesia three, Brazil two, and China, Colombia, Côte d’Ivoire, the Democratic Republic of Congo, Ecuador, Equatorial Guinea, Ghana, Kenya, Peru, Sri Lanka, Tanzania and Venezuela each have one.”
In the report, the authors highlight the case of the Pygmy Tarsier, which lives in southern and central Sulawesi. Until recently, this extremely rare primate was known from only three museum specimens. In 2008, three individuals were caught in Lore Lindu National Park and another was seen in the wild. The Pygmy Tarsier will almost certainly be extinct soon, as its few remaining populations are fragmented by human development.
“Madagascar’s lemurs are severely threatened by habitat-destruction and illegal-hunting, which has accelerated dramatically since the change of power in the country in 2009. The rarest lemur, the Northern Sportive Lemur (Lepilemur septentrionalis), is now down to 19 known individuals in the wild. A red-listing workshop on lemurs, held by the IUCN SSC Primate Specialist group in July this year, revealed that 91% of the 103 species and subspecies were threatened with extinction. This is one of the highest levels of threat ever recorded for a group of vertebrates.”
This new list was created primarily by primatologists with extensive field experience and first-hand knowledge of the causes of primate extinction and the threats currently posed to these species.
“Once again, this report shows that the world’s primates are under increasing threat from human activities. Whilst we haven’t lost any primate species yet during this century, some of them are in very dire straits,” says Dr Christoph Schwitzer, Head of Research at the Bristol Conservation and Science Foundation (BCSF). “In particular the lemurs are now one of the world’s most endangered groups of mammals, after more than three years of political crisis and a lack of effective enforcement in their home country, Madagascar. A similar crisis is happening in South-East Asia, where trade in wildlife is bringing many primates very close to extinction.”
“Primates are our closest living relatives and probably the best flagship species for tropical rain forests, since more than 90% of all known primates occur in this endangered biome,” says Dr Russell Mittermeier, Chair of the IUCN SSC Primate Specialist Group and President of Conservation International.
“It’s also important to note that primates are a key element in their tropical forest homes,” adds Dr Mittermeier. “They often serve as seed dispersers and help to maintain forest diversity. It is increasingly being recognized that forests make a major contribution in terms of ecosystem services for people, providing drinking water, food and medicines.”
Conservation efforts so far have had some limited success: several species have been removed from the list during the last 14 years. Among those are India’s Lion-Tailed Macaque (Macaca silenus) and Madagascar’s Greater Bamboo Lemur (Prolemur simus). Conservationists attribute their improved outlook for survival to their inclusion in previous reports and the large public response.
|
Protein, from the Greek proteios, or 'primary', describes a class of large molecules made of many amino acids linked by peptide bonds. Proteins are the basis of living organisms and account for over 50% of the dry weight of humans. They vary enormously, from the soluble forms found in food to the long fibrous forms used in connective tissues.
All proteins are combinations of about 20 different amino acids, each composed of hydrogen, carbon, nitrogen and oxygen, and occasionally sulphur. The amino acids form peptide bonds between each other, thus forming a polypeptide chain. The huge number of possible amino acid combinations and the intricate primary, secondary, tertiary and quaternary structures they form explain why there are so many different proteins.
The polypeptide chains are arranged and entwined in such a way that the hydrophilic amino acids face outwards in order that they can interact with other structures such as another protein.
Most organisms can create only some of their required amino acids; the rest must be obtained from food. The exception is plants, which can manufacture all of their required amino acids, and therefore their proteins, from the products of photosynthesis and inorganic nitrogen. For adults, the suggested daily intake of protein is 0.79 g/kg of body weight. For children this is doubled, and for infants tripled, due to the requirements of rapid growth.
Protein deficiencies can cause diseases, such as Kwashiorkor, a disease found amongst children in the African tropics that wastes the body.
The structures of proteins fall into four categories, as mentioned earlier. First is the primary structure, which refers to the linear sequence of amino acids in the chain. Second is the secondary structure, the coiling and folding of the primary chain into three-dimensional forms such as the alpha helix and the beta-pleated sheet. This shape is maintained by hydrogen bonding...
|
1. The sound processor captures and digitises sound.
2. The antenna is magnetically attached to the skin and transmits the digitised sound from the sound processor to the implant receiver.
3. The magnetic implant receiver is fitted under the skin directly under the antenna. It transforms the digital information into an electronic signal sent to the cochlea.
4. The electrode array is inserted in the cochlea. Each electrode on the array corresponds to a signal frequency.
5. When the encoded signal is transmitted to the corresponding electrode, the auditory nerve is stimulated.
6. The brain receives the sound transmitted via the auditory nerve.
|
I begin today by writing 3 numbers (871, 258, 469) on the board. I ask students to write the numbers in order from least to greatest in their math journals.
I ask for a student to come up and write the numbers on the board in order. We read the numbers aloud together to practice reading 3-digit numbers.
I put 3 more numbers (29, 902 and 219) on the board and ask students to do the same thing. We repeat the process several more times as I circulate around the room to check on student understanding. I want to make sure that students understand the structure of numbers to 1000: that they are made up of hundreds, tens and ones. The examples I use become more and more similar, in that the numbers to be compared contain the same digits, such as 409, 904, 914 and 419.
Students have been reading about planets during reading time. They have taken notes on important facts about planets, including size of the planets. We discuss how the world globe we have in the room is only a model of earth. It is way smaller than earth really is. I tell them that we can think of the planets in a similar way. We look at the sizes of the planets in distance around by using the numbers that students researched, and together we organize them from smallest to biggest.
Next I hand out paper in different sizes. I ask students why they think some people are getting small paper and some are getting large paper to draw their planets on. (Some planets are bigger than others, based on the sizes we just figured out from our research.) I ask students to make a drawing of the planet they have researched. I tell them that we will hang up the planets in order once we are done. We do not use an exact scale here because of the size of the numbers, and that is why I have chosen to hand out different sized paper, so we end up with relative sizes for our planets.
Students complete the drawings in partners so we have one of each planet, and a sun.
Next I put the following distances on the board in mixed order, writing the name of each planet and its distance from the sun in millions of kilometers. I ask students what a kilometer is close to that we might be more familiar with (a mile). So today we are going to talk about measuring in kilometers, which are close to, but a bit smaller than, a mile. I ask about how far the middle school is from our school (you can pick a landmark in your area that is about a mile away). Yes, it is about a kilometer, or a mile, away, so we would want to imagine a million of those distances, and for each planet even more than 1 million, so we are talking about huge distances. I ask students if they think Florida is more than a million or less than a million kilometers from Maine, because many of the students have visited Florida. (Right, it is much less than a million, so we are talking about distances further than a trip to Florida, or even California.)
Next I discuss how each of these is not just the number shown, but a million of that number, so that the 60 becomes 60,000,000. To make our job easier, though, we will just use the 60, because they are all in the millions.
I ask students to order the planets from closest to the sun to furthest from the sun by ordering the numbers.
When students are done, we write the planets in order and distance. I then tell students that we will now make the numbers even more manageable so we can hang up the planets. We will turn 60 into 6, 110 into 11, etc. We make all the numbers into smiley face numbers (ending in zero) and compute the distance in centimeters.
Next I hang the sun from the ceiling at the end of the room and place a small piece of masking tape below it on the floor to mark my starting point. I ask a student to measure 6 cm from the point I am marking so I can hang Mercury up. I remind students that our planets are not drawn to the same scale as the distances (1 centimeter = 10 million kilometers), because at that scale the planets would be so small we wouldn't be able to see them. While they are relative in size to each other, they would be much smaller if we used the same scale as we are using for distance from the sun. It is like our maps of the whole earth and the map of the United States: on the world map the US is much smaller than it is on the map of just the US. In the same way, our planets and our distances are much smaller than the real thing, but our planets need to be big enough to see and our distances need to be small enough to fit in the room.
After I have hung the first planet, I ask a student to measure 11 cm from the sun mark on the floor so I can hang the second planet.
I remind students that they need to be very careful as they measure and attend to the correct measurements so our planets will be the correct distances from the sun (MP6). They must also be careful to measure in centimeters and not inches (MP5). As the planets go beyond the length of the ruler, students will need to choose a meter stick or tape measure to carry out their measurement tasks. I let students choose the appropriate tool for the measurement they are making (MP5).
I hang the planets from the ceiling as students measure the distance on the floor using a ruler.
Mercury - 60 million km
Venus - 110 million km
Earth - 150 million km
Mars - 225 million km
Jupiter - 778 million km
Saturn - 1425 million km
Uranus - 2900 million km
Neptune - 4500 million km
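As an aside for anyone checking the arithmetic, here is a small sketch of the conversion we do by hand (1 cm on the floor represents 10 million km; this is not part of the lesson itself):

```python
# Distances from the sun in millions of km, from the list above.
distances = {"Mercury": 60, "Venus": 110, "Earth": 150, "Mars": 225,
             "Jupiter": 778, "Saturn": 1425, "Uranus": 2900, "Neptune": 4500}

# Classroom scale: 1 cm represents 10 million km. (In class we first round
# to "smiley face" numbers ending in zero, e.g. 778 -> 780, so Jupiter
# hangs at 78 cm.)
for planet, d in distances.items():
    print(f"{planet}: {d / 10:.1f} cm from the sun mark")
```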
|
The article I wrote yesterday was just the beginning, today we’ll look at the next step in becoming Haskell experts.
Yesterday we learned how to split up our program and how to compile or run it. Today we’ll look at some basic features of Haskell.
Haskell is strongly typed, meaning that the compiler knows the type of every object we have in our program. Yet we won’t see types too often in Haskell because it uses type inference, which means that the compiler deduces the type of our objects. We can help it by providing type annotations, or include them to be sure that our objects have the specified type.
Functions are often very generic and require not a specific type, but rather that the type is the same at certain places. Let's look at the type of the map function (we query the type via GHCi, the interactive shell of GHC; just start it with ghci):
Prelude> :t map
map :: (a -> b) -> [a] -> [b]
So, map has the type: "the first parameter is a function which takes type 'a' and returns type 'b'; the second parameter is a list of objects of type 'a'; and the returned object is a list of objects of type 'b'". So, map takes a function and a list, and returns another list. 'a' and 'b' aren't further specified, and map really doesn't care what they are.
Look at this example:
map (\x -> x 40) [(+1), (*2), (/4)]
here the objects in the list are (partially applied) functions, and the function to map over the list is a lambda (an anonymous function) which applies each of those functions to 40.
If we run the above code (again in ghci), we get:
Prelude> map (\x -> x 40) [(+1), (*2), (/4)]
[41.0,80.0,10.0]
Please note that a list is always composed of objects of the same type, so in this case it's the most general numeric type. We can't mix integers and characters in a list, but we can use tuples: for example, [(1,'a'), (2,'b')] is a list of (number, character) pairs.
The previous example used a lambda function; this is nothing special, just a function without a name. We use one when we define a small function which isn't used more than once.
Let’s define a lambda which takes two integers and returns the product of the two:
\x y -> x*y
We could also give it a name:
product = \x y -> x*y
but then we would normally write it as
product x y = x*y
Partially applied functions
This is where I felt the urge to learn Haskell: we can partially apply a function! We've already seen it above (for example, (+1) and (*2)).
Let’s define a function to increment an integer,
inc1 a = a + 1
we could also write the above function as
inc2 = (+1)
The difference is in how they are defined: inc1 names its argument and is an ordinary function definition, whereas inc2 is defined directly as a partially applied function. Of course both act the same, but conceptually there is a difference.
Lets look at another example:
map (+) [1,2,4,5]
this returns a list of partially applied functions! It returns
[(+1), (+2), (+4), (+5)].
One last example: filter can be used to, well, filter some list. Let's filter out all numbers larger than 4:
filter (<=4) [1,2,3,4,5,6]
Again – we used a partially applied function! We can do quite a lot of amazing things with such easy and powerful abstractions, and that's the reason why Haskell is known for its glue capabilities.
The last concept I'd like to show today is the function composition operator: (.).
Using this operator, we create a new function that applies the second function to its argument and then applies the first function to the result:
(f.g)(x) is the same as f(g(x)). So, why is this important? Well, as I've said in my previous post, Haskell is full of syntactic sugar, so using the composition operator we save some keystrokes.
Here is a simple example which extends the previous filter example to include only those numbers whose double is smaller than 5 (that is, smaller than or equal to 4):
filter ((<=4).(*2)) [1,2,3,4,5,6]
Another way would be to use a simple lambda function:
filter (\x -> (2*x) <= 4) [1,2,3,4,5,6]
The composition operator makes code immensely readable: no useless arguments and parentheses which clutter up our programs, just plain functionality.
Look at this imitation of the rev command in Unix:
main = interact (unlines . map reverse . lines)
could it get any simpler?
You know, take Lisp. You know, it’s the most beautiful language in the world. At least up until Haskell came along.
— Larry Wall (Creator of Perl)
|
Bound and unbound morphemes
In morphology, a bound morpheme is a morpheme that appears only as part of a larger word; a free morpheme or unbound morpheme is one that can stand alone or can appear with other lexemes. A bound morpheme is also known as a bound form, and similarly a free morpheme is a free form.
Roots and affixes
Many roots are free morphemes (ship- in "shipment"), and others are bound. Roots normally carry lexical meaning. Words like chairman that contain two free morphemes (chair and man) are referred to as compound words.
Affixes are always bound in English, but some languages like Arabic have forms that sometimes affix to words and sometimes stand alone. English language affixes are almost exclusively prefixes or suffixes: pre- in "precaution" and -ment in "shipment". Affixes may be inflectional, indicating how a certain word relates to other words in a larger phrase, or derivational, changing either the part of speech or the actual meaning of a word.
Cranberry morphemes are a special form of bound morpheme whose independent meaning has been displaced and serves only to distinguish one word from another, like in cranberry, in which the free morpheme berry is preceded by the bound morpheme cran-, meaning "crane" from the earlier name for the berry, "crane berry".
Words can be formed purely from bound morphemes, as in English permit, ultimately from Latin per "through" + mittō "I send", where per- and -mit are bound morphemes in English. However, they are often thought of as simply a single morpheme.
A similar example is given in Chinese; most of its morphemes are monosyllabic and identified with a Chinese character because of the largely morphosyllabic script, but disyllabic words exist that cannot be analyzed into independent morphemes, such as 蝴蝶 húdié 'butterfly'. Then, the individual syllables and corresponding characters are used only in that word, and while they can be interpreted as bound morphemes 蝴 hú- and 蝶 -dié, it is more commonly considered a single disyllabic morpheme. See polysyllabic Chinese morphemes for further discussion.
Linguists usually distinguish between productive and unproductive forms when speaking about morphemes. For example, the morpheme ten- in tenant was originally derived from the Latin word tenere, "to hold", and the same basic meaning is seen in such words as "tenable" and "intention". But as ten- is not used in English to form new words, most linguists would not consider it to be a morpheme at all.
Analytic and synthetic languages
A language with a very low ratio of bound morphemes to unbound morphemes is an isolating language. Since such a language uses few bound morphemes, it expresses most grammatical relationships by word order, so it is an analytic language.
In contrast, a language that uses a substantial number of bound morphemes to express grammatical relationships is a synthetic language.
- Kroeger, Paul (2005). Analyzing Grammar: An Introduction. Cambridge: Cambridge University Press. p. 13. ISBN 978-0-521-01653-7.
- Elson and Pickett, Beginning Morphology and Syntax, SIL, 1968, ISBN 0-88312-925-6, p6: Morphemes which may occur alone are called free forms; morphemes which never occur alone are called bound forms.
|
Fact Sheet on Stress
Q&A on Stress for Adults: How it affects your health and what you can do about it
Stress — just the word may be enough to set your nerves on edge. Everyone feels stressed from time to time. Some people may cope with stress more effectively or recover from stressful events quicker than others. It's important to know your limits when it comes to stress to avoid more serious health effects.
What is stress?
Stress can be defined as the brain's response to any demand. Many things can trigger this response, including change. Changes can be positive or negative, as well as real or perceived. They may be recurring, short-term, or long-term and may include things like commuting to and from school or work every day, traveling for a yearly vacation, or moving to another home. Changes can be mild and relatively harmless, such as winning a race, watching a scary movie, or riding a rollercoaster. Some changes are major, such as marriage or divorce, serious illness, or a car accident. Other changes are extreme, such as exposure to violence, and can lead to traumatic stress reactions.
How does stress affect the body?
Not all stress is bad. All animals have a stress response, which can be life-saving in some situations. The nerve chemicals and hormones released during such stressful times prepare the animal to face a threat or flee to safety. When you face a dangerous situation, your pulse quickens, you breathe faster, your muscles tense, your brain uses more oxygen and increases activity—all functions aimed at survival. In the short term, it can even boost the immune system.
However, with chronic stress, those same nerve chemicals that are life-saving in short bursts can suppress functions that aren't needed for immediate survival. Your immunity is lowered and your digestive, excretory, and reproductive systems stop working normally. Once the threat has passed, other body systems act to restore normal functioning. Problems occur if the stress response goes on too long, such as when the source of stress is constant, or if the response continues after the danger has subsided.
How does stress affect your overall health?
There are at least three different types of stress, all of which carry physical and mental health risks:
- Routine stress related to the pressures of work, family and other daily responsibilities.
- Stress brought about by a sudden negative change, such as losing a job, divorce, or illness.
- Traumatic stress, experienced in an event like a major accident, war, assault, or a natural disaster where one may be seriously hurt or in danger of being killed.
The body responds to each type of stress in similar ways. Different people may feel it in different ways. For example, some people experience mainly digestive symptoms, while others may have headaches, sleeplessness, depressed mood, anger and irritability. People under chronic stress are prone to more frequent and severe viral infections, such as the flu or common cold, and vaccines, such as the flu shot, are less effective for them.
Of all the types of stress, changes in health from routine stress may be hardest to notice at first. Because the source of stress tends to be more constant than in cases of acute or traumatic stress, the body gets no clear signal to return to normal functioning. Over time, continued strain on your body from routine stress may lead to serious health problems, such as heart disease, high blood pressure, diabetes, depression, anxiety disorder, and other illnesses.
How can I cope with stress?
The effects of stress tend to build up over time. Taking practical steps to maintain your health and outlook can reduce or prevent these effects. The following are some tips that may help you to cope with stress:
- Seek help from a qualified mental health care provider if you are overwhelmed, feel you cannot cope, have suicidal thoughts, or are using drugs or alcohol to cope.
- Get proper health care for existing or new health problems.
- Stay in touch with people who can provide emotional and other support. Ask for help from friends, family, and community or religious organizations to reduce stress due to work burdens or family issues, such as caring for a loved one.
- Recognize signs of your body's response to stress, such as difficulty sleeping, increased alcohol and other substance use, being easily angered, feeling depressed, and having low energy.
- Set priorities: decide what must get done and what can wait, and learn to say no to new tasks if they are putting you into overload.
- Note what you have accomplished at the end of the day, not what you have been unable to do.
- Avoid dwelling on problems. If you can't do this on your own, seek help from a qualified mental health professional who can guide you.
- Exercise regularly: just 30 minutes per day of gentle walking can help boost mood and reduce stress.
- Schedule regular times for healthy and relaxing activities.
- Explore stress coping programs, which may incorporate meditation, yoga, tai chi, or other gentle exercises.
If you or someone you know is overwhelmed by stress, ask for help from a health professional. If you or someone close to you is in crisis, call the toll-free, 24-hour National Suicide Prevention Lifeline at 1-800-273-TALK (1-800-273-8255).
For More Information on Stress
Information from NIMH is available in multiple formats. You can browse online, download documents in PDF, and order materials through the mail. Check the NIMH website for the latest information on this topic and to order publications. If you do not have Internet access, please contact the NIMH Information Resource Center at the numbers listed below.
National Institute of Mental Health
Science Writing, Press & Dissemination Branch
6001 Executive Boulevard
Room 8184, MSC 9663
Bethesda, MD 20892-9663
Phone: 301-443-4513 or
1-866-615-NIMH (6464) toll-free
TTY: 866-415-8051 toll-free
|
In physics, interference is what happens when two or more waves meet in the same region of space and their effects add together. A single wave can interfere with itself, but this is still an addition of two waves (see Young's slits experiment). Two waves always interfere, even if the result of the addition is complicated or not remarkable.
Interference happens when two or more waves are in the same space. Sometimes the peak of one wave joins with the peak of another wave, so the resulting peak is twice as high. Sometimes the peak of one wave falls into the trough of another wave, and the surface is then flat. When waves add their effects, it is called positive interference, or constructive interference. When one wave subtracts from the effects of the other, it is called negative interference, or destructive interference.
If two people pushed on a car in the same direction, they would move the car better than either one alone. That would be positive interference. If two people of equal strength pushed the car from opposite directions, then it would not be moved by either of them. That would be negative interference.
Constructive interference
Constructive interference happens when two or more waves are in the same space and in phase. When this happens, the waves' amplitudes add together and the total is greater than the amplitude of any of the waves by themselves. This causes the waves to appear more intense.
At time = 0, one wave top is moving from the left and another wave is moving from the right.
At time = 1, the two wave tops meet in the middle.
At time = 2, the two waves have each continued moving forward and reappear again at their original heights.
Destructive interference
Destructive interference happens when two or more waves are in the same place and out of phase. When this happens, the waves' amplitudes add together and the total is less than the amplitude of any of the waves by themselves. This causes the waves to appear less intense.
At time = 0, a wave top is moving in from the left and a wave trough is moving in from the right.
At time = 1, the two waves have met in the middle. The crest (top) fills in the trough.
At time = 2, the two waves have moved on in their original directions and each reappears at its original height.
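A small numeric sketch of these two cases (an addition to this article, using Python): the waves are added point by point, first in phase and then out of phase.

```python
import numpy as np

x = np.linspace(0, 1, 9)
pulse = np.sin(np.pi * x)      # a simple wave "top" (one positive hump)

constructive = pulse + pulse    # in phase: amplitudes add to twice the height
destructive = pulse + (-pulse)  # out of phase: the crest fills in the trough

print(constructive.max())  # ~2.0, twice the single-pulse peak of ~1.0
print(destructive.max())   # 0.0, the waves cancel completely
```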
Examples of interference
After a rain, one can often see color patterns when a little oil floats on top of puddles. The colors will be in the order red, orange, yellow, green, blue, and violet. Each of these colors of light has its own wavelength, and different parts of the oil have different thicknesses. Part of the light from the sun bounces off the top surface, in other words off the oil. Part of the sunlight bounces off the top of the water. The light waves from the oil surface and the light from the water surface meet back in the air, and they interfere. For any thickness of the oil layer, some of the light waves will add and some will subtract, so the result is that one color of light will be strongest there.
When two highly polished sheets of glass are pressed together, sometimes the distance between the two pieces will change from place to place. When that happens, the pattern that shows up is called "Newton's rings." When slide photographs are put between two thin sheets of glass for showing in a slide projector, this kind of pattern is a big problem. The same problem can show up when two microscope slides are put together.
Physics questions
Here is a diagram of the kind of thing that produces some kinds of light interference. The distance between the top piece of glass and the bottom piece gets larger near the outside edges.
A simpler setup would be to have two flat pieces of glass in contact along one edge and a narrow angle between their two faces. If the separation between the top surface of the first piece of glass and the top surface of the second piece of glass is, at some point, such that light beams reflected from each are synchronized or in phase, then the reflected light will be bright, but if the two beams are half a cycle out of phase, then at that point the two beams will cancel each other and what reflected light there is will not be bright.
|
Animals that have hard body parts, such as bones or shells, are most likely to be preserved during fossilization. These rigid materials are more durable to the forces that cause fossilization; they are more likely to survive intact or to leave imprints that later become fossilized.
The conditions under which fossilization occurs are very specialized; only a fraction of a percent of the animals that have lived on Earth are fossilized. Typically, a fossil forms when animal remains are rapidly buried in materials that do not degrade the organic materials but instead maintain them in some form. Very dry regions preserve bone and other hard structures fairly well, and acidic peat bogs are ideal at pickling plant materials for millions of years.
Soft tissues, such as skin, cartilage, muscle or soft plant materials disappear rapidly, both through natural decomposition and as a result of scavenging. Consequently, the fossil record primarily records boned animals or shellfish. However, some conditions allow preservation of softer-tissued animals. Insects that become stuck in amber may be preserved, as the amber hardens and creates a protective environment. Deep freezing can also preserve soft tissue; the lack of oxygen and cold temperatures prevent the tissue from decomposing, leading to highly intact animal specimens even after millions of years. Mammoths and at least one ancient human have been found frozen in glacial ice because of this process.
|
1. The problem statement, all variables and given/known data
On a frictionless surface, a 0.35 kg puck moves horizontally to the right (at an angle of 0°) at a speed of 2.3 m/s. It collides with a 0.23 kg puck that is stationary. After the collision, the puck that was initially moving has a speed of 2.0 m/s and is moving at an angle of −32°. What is the velocity of the other puck after the collision?
2. Relevant equations
Momentum before = momentum after
Kinetic energy before = kinetic energy after
A·B = |A||B| cos(x)
3. The attempt at a solution
From momentum before = momentum after I got (with M1 = first mass, Vi1 = initial velocity of the first mass, Vf1 = final velocity of the first mass... and I think you get the rest):
M1(Vi1 − Vf1) = M2 · Vf2
From kinetic energy before = kinetic energy after, I got:
M1(Vi1² − Vf1²) = M2 · Vf2²
Now I'm kind of lost on what to do... some advice?
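Not the poster's own solution, but a sketch of one standard approach: conservation of momentum, applied to the x and y components separately, is enough to determine the second puck's velocity (the kinetic-energy equation is not needed here).

```python
import math

m1, m2 = 0.35, 0.23                     # masses in kg
v1i = 2.3                               # initial speed of puck 1, along +x
v1f, theta1 = 2.0, math.radians(-32)    # puck 1 after the collision

# Momentum conservation, component by component: p2 = p_total - p1_final
p2x = m1 * v1i - m1 * v1f * math.cos(theta1)
p2y = 0.0      - m1 * v1f * math.sin(theta1)

v2 = math.hypot(p2x, p2y) / m2
angle = math.degrees(math.atan2(p2y, p2x))
print(f"second puck: {v2:.2f} m/s at {angle:.1f} degrees")  # ~1.86 m/s at ~60.3°
```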
|
If our brains were computers, we’d simply add a chip to upgrade our memory. However, the human brain is more complex than even the most advanced machine, so improving human memory requires slightly more effort.
Certain areas of the brain are especially important in the formation and retention of memory. The hippocampus, a primitive structure deep in the brain, plays the single largest role in processing information as memory; this is an area we can improve with HGH (human growth hormone) and pregnenolone. The amygdala, an almond-shaped area near the hippocampus, processes emotion and helps imprint memories that involve emotion. The cerebral cortex, the outer layer of the brain, stores most long-term memory in different zones, depending on what kind of processing the information involves: language, sensory input, problem-solving, and so forth. In addition, memory involves communication among the brain’s network of neurons, millions of cells activated by brain chemicals called neurotransmitters.
Just like muscular strength, your ability to remember increases when you exercise your memory and nurture it with a good diet and other healthy habits. There are a number of steps you can take to improve your memory and retrieval capacity. First, however, it’s helpful to understand how we remember.
To be neurobic, an exercise should do one or more of the following:
1. Involve one or more of your senses in a novel context. You can use additional senses to do an ordinary task by blunting the sense normally used. For instance:
Get dressed for work or take a shower with your eyes closed. Eat a meal with your family in silence, using only visual cues. Or combine two or more senses in unexpected ways: listening to a specific piece of music while smelling a particular aroma.
2. Engage your attention. To stand out from the background of everyday activities something has to be unusual, fun, surprising or evoke one of your basic emotions like happiness, love or anger: Go camping for the weekend. Take your child, spouse or parent to your work for the day.
3. Break a routine activity in an unexpected, novel way (novelty just for its own sake is not highly neurobic). Take a completely new route to work. Shop at a farmer’s market instead of a supermarket. Completely rearrange your office and desktop.
As supplements you should take NAC (N-acetylcysteine), R-lipoic acid and acetyl-L-carnitine; all of them increase glutathione in your brain, which protects you from free radicals and is a powerful antioxidant. You should also check your levels of testosterone, estrogens, human growth hormone and pregnenolone.
FaceBook: Long Life Clinic,
Twitter: LongLifeClinic ,
Tel: +34 952 77 07 14
Dr.Jean Garant Mendoza MD
Master in Longevity and Biological Medicine by Madrid University.
Member of the American Academy of Anti-Aging Medicine (A4M)
|
To understand a concussion’s full effect on the brain, it is necessary to determine immediate consequences in addition to complications that could manifest a couple of years down the line. For example, a recent study published in the journal Biological Psychiatry revealed that a traumatic brain injury can lead to symptoms of depression through a delayed immune system response caused by brain cells on “high alert.”
"A lot of people with a history of head injury don't develop mental-health problems until they're in their 40s, 50s or 60s,” said lead author Jonathan Godbout, associate professor of neuroscience at Ohio State University and a researcher in the Institute for Behavioral Medicine Research. “That suggests there are other factors involved, and that's why we're looking at this two-hit idea — the brain injury being the first and then an immune challenge. It's as if one plus one plus one equals 15. There can be a multiplier effect."
According to the Centers for Disease Control and Prevention (CDC), around 1.7 million traumatic brain injuries are reported each year. Traumatic brain injuries contribute to 30 percent of all injury-related deaths in the U.S. Seventy-five percent of these injuries are thought to be concussions. Both mild and severe traumatic brain injuries can cause short- or long-term health complications in thinking, sensation, language and emotion.
Prof. Godbout and his colleagues examined these “high alert” brain cells, commonly known as microglia, in mice. Microglia are on high alert as a result of excessive inflammation that the brain experiences following a traumatic brain injury. Researchers isolated concussive brain injuries that were considered less severe, meaning the individual showed no complications a week after sustaining a concussion.
When compared against mice that had not suffered a traumatic brain injury, injured mice experienced symptoms of depression a week into the study, as well as 30 days later. After taking a closer look at the injured mice’s brains thirty days after the injury, researchers found that microglia were still on high alert due to neuroinflammation.
"If we had waited three, six or nine months, the symptoms probably would have gotten even worse," Prof. Godbout added. "The young adult mice that have a diffuse head injury basically recover to normal, but not everything is normal. The brain still has a more inflammatory makeup that is permissive to hyperactivation of an immune response. These results tell us the TBI mice are having an amplified and prolonged activation of microglia, and that was associated with development of depressive symptoms in the mice.”
Microglia are tasked with protecting the brain following a traumatic brain injury by producing proteins and other chemicals to fight an infection. If these cells are on high alert, a surplus of proteins and chemicals causes an inflammatory response in the brain. Thirty days into the study, mice affected by a traumatic brain injury were given lipopolysaccharide (LPS), a substance that initiates an immune response in animals. Three days after the LPS injection, injured mice continued to experience depressive symptoms, including a loss of appetite, an inability to connect with other mice, and signs of “giving up.”
|
This species, whose scientific name is Pseudorca crassidens (pseudo = false; orca = a cetacean, i.e., marine mammals such as whales and dolphins; crassidens = ‘thick-tooth’), is actually a member of the dolphin family, and is the only member of its genus. It was first described in 1846 and is the fourth largest dolphin in the world. The species is fairly widespread in its distribution; false killer whales have been observed in shallow water, including the Mediterranean and Red Seas, but are more common in the deeper tropical to temperate waters of the Atlantic, Pacific, and Indian Oceans. They are considered uncommon, and there are no global population estimates. The United States (U.S.) National Marine Fisheries Service (NMFS) concluded that false killer whales were the least common of the 18 species of toothed whales and dolphins found in Hawaiian waters. Although not hunted commercially, they can be caught as bycatch and through other fishery interactions, such as the Hawaii longline fishery and the bottomfish fishery off the Northwestern Hawaiian Islands. They are hunted in Indonesia, Japan, and the West Indies. In the U.S. this species is listed as Endangered under the Endangered Species Act.
I first saw them this winter at latitude 15° N, 6 miles offshore of the Mexican coast of Oaxaca, while on board a boat operated by a local dive shop; emphasis on first saw, as in all my experiences on or under the sea, from Mexico to the Mediterranean to Vietnam, I had never seen them before. A large pod surrounded our boat, so we were able to get as close as six feet. They are roughly 1/3 the size of killer whales, lack the white marks on the body, and have a similarly shaped hook-like dorsal fin, but a blunt-shaped beak. At first I did not think they were dolphins, being used to seeing the narrow snout or beak of the bottlenose and other species. They are dark in color. Males are larger than the females at almost 20 feet (6 m), while females reach lengths of 15 feet (4.5 m). In adulthood, false killer whales can weigh approximately 1,500 pounds (700 kg).
Like killer whales and other toothed whales, they are predators and have been known to eat smaller dolphins. They feed during the day on fish and at night on cephalopods (octopus, cuttlefish). Other similarities include the production of few offspring, and slow maturation. Females reach sexual maturity around 10 years, males much later at 18 years. Breeding season lasts several months. Females ovulate once annually giving birth to a single calf following a 15-month gestation period, which is followed by lactation for one and a half to two years. They reproduce after approximately seven years. They are long-lived, to approximately 63 years.
They are social animals, and form strong bonds, so seeing a large group should not have been surprising. They are usually found in groups of ten to twenty that belong to much larger groups of up to 40 individuals. False killer whales are also found with other cetaceans, most notably bottlenose dolphins. To increase success of finding prey, these animals travel in broad swaths up to several miles wide. Sharing of food has been observed between individuals.
So keep your eyes open the next time you are on the lovely Pacific waters of the Oaxacan coast; you may be lucky enough to see these magnificent animals.
|
Many 12-year-old girls are quite concerned about their appearance, and weight plays a large role in their self-image. If you're concerned about your daughter being under- or overweight, make an appointment with her pediatrician. Remember, however, that weight is just one potential indicator of health; other more important factors play a role, too.
Weight and 12-Year-Old Girls
There isn't a specific weight that's considered normal for all 12-year-old girls. Instead, the Centers for Disease Control and Prevention has established a range of weights defined as normal. This range takes into account differences in how children grow. For example, between the ages of 6 and 12, and before puberty starts, children tend to gain between 4 and 7 pounds per year. So, a 12-year-old girl who only gains 4 pounds a year will weigh less than a girl of the same age who gains 7 pounds per year. Both, however, are considered healthy.
How Healthy Weight Is Assessed
Healthy weight guidelines are based on body mass index, or BMI, a number that's used to plot a girl's weight on the CDC's growth chart and is calculated using current height and weight. Based on BMI, a child is underweight if her BMI is less than the fifth percentile. A child is within the healthy range if her BMI is between the fifth and 85th percentile. Overweight children fall between the 85th and 95th percentile, and obese children have a BMI equal to or greater than the 95th percentile. A healthy weight for a 12-year-old girl, therefore, can generally fall anywhere between 65 and 120 pounds.
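To make the arithmetic concrete, here is a small sketch (not from the article) of the CDC's BMI formula and the percentile categories described above; finding a child's actual percentile still requires the CDC growth charts, which are not reproduced here.

```python
def bmi(weight_lb, height_in):
    """CDC formula: BMI from weight in pounds and height in inches."""
    return 703 * weight_lb / height_in ** 2

def category(percentile):
    """Map a BMI-for-age percentile to the CDC weight category."""
    if percentile < 5:
        return "underweight"
    if percentile < 85:
        return "healthy weight"
    if percentile < 95:
        return "overweight"
    return "obese"

print(round(bmi(100, 60), 1))  # a 100 lb, 5 ft girl has a BMI of ~19.5
print(category(50))            # the 50th percentile -> "healthy weight"
```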
What Can Cause the Variations
Girls enter puberty at different ages, so girls who have already started that process by age 12 can weigh more than girls who haven't. However, the opposite can also be true. A study published in 2013 in "Pediatric Obesity" reports that having a higher BMI early in life can cause early puberty. Genetics plays a role in weight, as well. Heavier parents can pass those genes to their children, which means that a 12-year-old girl with larger parents might weigh more than her peers who have smaller parents and vice versa. Activity level influences weight, too. Girls who are physically active tend to weigh less than girls who are sedentary. Serious athletes, such as gymnasts, can have a low body weight because they burn so many calories practicing their sport. Bone density and muscle mass can also influence weight. Girls with less muscle mass tend to weigh less than girls with more muscle mass.
When Weight Isn't Normal
If your 12-year-old daughter is underweight, make an appointment with her pediatrician to create a plan to help her reach a healthy weight such as adding certain high-calorie foods to her diet. Her pediatrician might also run tests to detect an underlying health problem that's causing weight changes, especially if she eats a healthy diet and gets an appropriate amount of exercise. Speak with your child's doctor about eating disorders because this is another possible explanation.
If your daughter is overweight, ensuring that she gets at least an hour of exercise per day and that she eats healthy foods, such as fruits, vegetables and whole grains, can help her shed the excess weight. Don't fixate on her weight or force her to go on a diet, however, because that can impair her body image and increase the risk of eating disorders. Model appropriate eating and exercise habits, too, and your daughter is likely to follow your example.
- Centers for Disease Control and Prevention: About BMI for Children and Teens
- KidsHealth: Growth and Your 6-to 12-Year-Old
- Centers for Diease Control and Prevention: 2 to 20 Years: Girls Stature-for-Age and Weight-for-Age Percentiles
- KidsHealth: Fitness and Your 6- to 12-Year-Old
- Pediatrics: Link Between Body Fat and the Timing of Puberty
- Pediatric Obesity: Timing of puberty and physical growth in obese children: a longitudinal study in boys and girls
|
Practicing Creative Thinking
Practice creative thinking with the following techniques:
Brainstorm ideas to ask another question or suggest another calculation that can be made for this homework assignment.
Brainstorm ways you could work this homework problem incorrectly.
Brainstorm ways to make this problem easier or more difficult.
Brainstorm a list of things you learned from working this homework problem and what you think the point of the problem is.
Brainstorm the reasons why your calculations overpredict the conversion that was measured when the reactor was put on stream. Assume you made no numerical errors in your calculations.
"What if..." questions: The "What if..." questions are particularly effective when used with the Living Example Problems where one varies the parameters to explore the problem and to carry out sensitivity analysis. For example, what if someone suggested that you should double the catalyst particle diameter, what would you say?.
|
Runner’s knee (patellofemoral pain syndrome) is a condition characterized by pain and/or discomfort originating from the front of the knee, usually caused by direct bone-to-bone contact of the kneecap with the femur. As the name suggests, runner’s knee is a common condition among runners, as repetitive compression wears down the protective articular cartilage beneath the patella, leaving it vulnerable to direct contact with the femur. However, anyone can develop runner’s knee at any point in life, regardless of physical activity level.
It’s important to note that runner’s knee may also be used to describe pain and discomfort caused by irritation of the soft tissue around the knee. If the tissue becomes inflamed, for instance, the area around the kneecap may swell to the point where the knee no longer maintains full flexibility, producing the symptoms of runner’s knee. Runner’s knee is somewhat of a blanket term used to describe pain or discomfort in the knee, and in many cases it’s accompanied by inflammation as a related symptom.
The single most common cause of runner’s knee is overuse. When the knee is pushed beyond its normal physical limits, the tendons and muscles may stretch to the point where they cause pain and inflammation. The knee will gradually begin to swell, resulting in a reduced range of motion, while the kneecap (patella) may protrude outwards. Runners, basketball players, and other athletes and people who place frequent pressure on their knees are considered at high risk for developing runner’s knee.
Another common cause of runner’s knee is direct physical injury. Whether the injury is from playing sports, falling, automobile accidents, or any other type of physical trauma, a forceful impact against the knee may trigger the pain and discomfort that’s commonly associated with runner’s knee.
Runner’s knee is often confused or mistaken for a similar condition known as iliotibial band syndrome (ITBS). While these two conditions share some similarities, there’s one major difference between them: the origin of the pain. In runner’s knee, pain originates on the front of the kneecap, whereas in ITBS pain originates on the sides. The first step in properly diagnosing runner’s knee is to identify the exact location of the pain and discomfort, at which point the individual and his or her physician can narrow down the possible causes.
There are several preventive measures individuals can take to help reduce their risk of runner’s knee, one of which is to know, and obey, the body’s physical limits. Don’t push yourself to run or perform other physical activities if your body is telling you otherwise. If you’re experiencing minor pain, swelling or general discomfort in one or both of your knees, stop and take a break. The human body usually gives off signs to warn of potential injury, and these are just a few signs of the onset of runner’s knee.
Call or email the staff at AtlantaChiroAndWellness.com to schedule an appointment.
|
Maser, which began as an acronym for Microwave Amplification by Stimulated Emission of Radiation, is a device (or situation) where atoms are excited and incoming radiation at a specific Wavelength causes Emission by the atoms of more radiation at the same wavelength. The potential for such stimulation was described by Einstein in 1917, and such devices amplifying Microwaves were developed in the 1950s.
Natural masers can occur in space in Molecular Clouds including water or other molecules (OH, CH3OH, CH2O, SiO). Water molecules excited in Star-Forming Regions (SFR) emit 22.0 GHz radiation. Very powerful masers also occur in Active Galactic Nuclei (AGN).
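As a quick numerical aside, the 22.0 GHz water line mentioned above corresponds to a wavelength of about 1.36 cm. A short Python check (constants rounded; variable names ours):

```python
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
f = 22.0e9      # water-maser line frequency, Hz

print(f"wavelength ~ {c / f * 100:.2f} cm")   # ~1.36 cm
print(f"photon energy ~ {h * f:.2e} J")       # ~1.46e-23 J
```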
|
Some Terms Relating to Gas and Petroleum Storage Equipment and Supplies
Safe storage of solid and liquefied natural resources is important. Without proper storage, these solid and liquefied substances can contaminate nearby soil and water. Following are some terms relating to the storage of gas, petroleum, and other potentially hazardous materials.
Cathodic protection – Cathodic protection prevents corrosion of a tank or other gas or petroleum storage device. Cathodic protection is created by altering the electric currents present in a tank system.
Corrosion – This is the breaking down of metal caused by contact with different environmental substances. Corrosion often refers to rust. The rate of corrosion can be lessened by properly protecting metal cylinders or bottles from natural conditions.
Galvanized – This is the coating of metal with zinc. While this layer will help protect the structure, it does not meet federal standards for storage of crude oil, gas, or other possible contaminants.
Impressed Current – This corrosion protection system alters the electric currents of a petroleum or gas storage unit. The introduced electric current is greater than the corrosive current, which protects the metallic surface from damage.
Tank Liner – Liners seal tanks so that they are able to hold gas, fuel or petroleum. Liners must meet noncorrosive standards as outlined by the government codes. These liners are typically made from synthetic materials that the crude oil or gas cannot penetrate or corrode.
Inventory Control – This is a record of inventory. By comparing material levels before and after delivery, leaks can be detected early (see the sketch after this list).
Secondary Containment – This is an outer containment device, such as a basin, that will catch any leaked substances. Secondary containment devices are not alternatives to traditional storage devices.
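To make the Inventory Control entry concrete, here is a minimal reconciliation sketch in Python. The quantities and the 0.5% tolerance are illustrative assumptions, not regulatory values.

```python
def reconcile(book_gallons, measured_gallons, tolerance=0.005):
    """Compare recorded (book) inventory against a physical tank
    measurement; flag a possible leak when the shortfall exceeds
    the tolerance fraction of book inventory."""
    shortfall = book_gallons - measured_gallons
    if shortfall > tolerance * book_gallons:
        return f"possible leak: short {shortfall:.1f} gallons"
    return "within tolerance"

# book inventory = opening stock + deliveries - metered sales
book = 8000 + 2500 - 1900                       # 8,600 gallons on paper
print(reconcile(book, measured_gallons=8520))   # short 80 gal -> flagged
```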
Liquefied petroleum gas is typically transported and stored in a liquid state. It is made up primarily of propane and butane and is generally used to provide heat for residential properties. It can also serve as an energy-saving alternative to electricity for industrial use and as a fuel for a variety of engines. Because liquefied petroleum is a gas under pressure, it must be handled, transported, and stored with care. For safe storage, it is generally put into gas and oil canisters or compressed gas tanks. Bottled gas under pressure is extremely flammable and dangerous, so special equipment is used to protect the cylinders, canisters, and tanks from harmful combustion or leaks.

Companies that specialize in the sale of liquefied petroleum may also sell other products, equipment, or supplies to the crude oil and natural gas industries. Liquefied petroleum bottling equipment includes regulators, storage supplies, and oil and gas cylinders; these specialty petroleum dealers may also sell industrial supplies and tanks. Do some research to find out more about companies that sell liquefied petroleum bottling equipment, and let your budget guide the search. You can also look up industry news, customer reviews, local distributors in your area, safety compliance information, and recent technological advancements. Wholesalers offer bulk discounts. Choose several liquefied petroleum distributors and get contact information, hours, directions, and rates.
|
2 dice (or number cubes as we call them in Georgia)
BEANO Worksheets (I give each student their own)
Dry Beans (I used black-eyed peas since they were the cheapest--these are great to have for BINGO)
I tell my students we are playing a game (insert excited students) and have them place their twelve beans on the front of the worksheet. I usually let them read the instructions on the front and figure it out! Then we play! You can roll the dice and say the sum, have a student do it (I opt for this option), or use a graphing calculator with the probability simulator program. When the dice are rolled by hand I will chart the frequency of the sums on the board for use later!
Then we complete the backside. You may have to explain filling in the sum chart for some and I usually will plot the box graph with them. Then answer the questions (it discusses probability and you can add more questions if you want to!) and play BEANO again!
The second time around, the questions prompt students to look at which sums (6, 7, 8) are most likely to come up, and how they arrange the beans on their boards will noticeably change (see pictures below)--or at least it should if they were paying attention. You can play a couple of rounds, and I usually have the winner be the next roller. The BEANO games played after the worksheet are way quicker than the game at the beginning of class!!! Also, keep track of the frequency of the sums so you can compare the theoretical chart from the worksheet against the experimental chart you keep in class. The more trials, the more the experimental distribution will look like the theoretical one.
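For teachers who want a quick way to generate lots of trials, here is a small Python sketch of the same experiment. It rolls two dice many times and prints the experimental frequencies next to the theoretical probabilities (the code is ours, not part of the worksheet):

```python
import random
from collections import Counter

# Theoretical probability of each sum: count the dice pairs that
# produce it, out of 36 equally likely pairs (7 is most likely, 6/36).
theoretical = {s: sum(1 for a in range(1, 7) for b in range(1, 7)
                      if a + b == s) / 36
               for s in range(2, 13)}

trials = 10_000
rolls = Counter(random.randint(1, 6) + random.randint(1, 6)
                for _ in range(trials))

for s in range(2, 13):
    print(f"sum {s:2d}: theory {theoretical[s]:.3f}   "
          f"experiment {rolls[s] / trials:.3f}")
```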
|
What is the flu?
Flu, or influenza, is an infection of the nose, throat, windpipe, lung airways, and muscles. It is not the same thing as the common cold.
What are the symptoms of the flu?
It may cause very high fevers, muscle pain, headache, fatigue, chills, and dry cough.
How long does it take to get over the flu?
Most people with the flu get better in seven to ten days; however, flu may lead to severe illness, especially in infants and the elderly.
How is the flu treated?
First, you need to contact your physician if you feel like you have influenza. He or she may prescribe an anti-viral medicine. Anti-viral medicines work best if given within 48 hours of the start of symptoms and can help reduce the severity and duration of the flu. You need to get plenty of rest and drink fluids. You can take Tylenol® or Advil® to relieve fever, headache, and body aches. You should not take aspirin.
How can I prevent becoming infected by the flu?
You can get vaccinated to prevent influenza. Physicians usually offer the vaccine in October or November. If you are over the age of 50, have a chronic illness (such as asthma, emphysema, diabetes, or heart disease), or have close contact with people who have a chronic illness, it is especially important to seek vaccination. You should avoid contact with people who have the flu virus, and you should wash your hands often. Get plenty of rest, eat well, and exercise to strengthen your immune system. You should avoid smoking as well.
|
Indian Ocean, body of salt water, covering approximately one-fifth of the total ocean area of the world. It is the smallest, youngest, and physically most complex of the world’s three major oceans. It stretches for more than 6,200 miles (10,000 km) between the southern tips of Africa and Australia and, without its marginal seas, has an area of about 28,360,000 square miles (73,440,000 square km). The Indian Ocean’s average depth is 12,990 feet (3,960 metres), and its deepest point, in the Sunda Deep of the Java Trench off the southern coast of Java, is 24,442 feet (7,450 metres).
The Indian Ocean is bounded by Iran, Pakistan, India, and Bangladesh to the north; the Malay Peninsula, the Sunda Islands of Indonesia, and Australia to the east; Antarctica to the south; and Africa and the Arabian Peninsula to the west. In the southwest it joins the Atlantic Ocean south of the southern tip of Africa, and to the east and southeast its waters mingle with those of the Pacific.
The question of defining the oceanic limits of the Indian Ocean is complicated and remains unsettled. The clearest border and the one most generally agreed upon is that with the Atlantic Ocean, which runs from Cape Agulhas, at the southern tip of Africa, due south along the 20° E meridian to the shores of Antarctica. The border with the Pacific Ocean to the southeast is usually drawn from South East Cape on the island of Tasmania south along the 147° E meridian to Antarctica. Bass Strait, between Tasmania and Australia, is considered by some to be part of the Indian Ocean and by others to be part of the Pacific. The northeastern border is the most difficult to define. The one most generally agreed upon runs northwest from Cape Londonderry in Australia across the Timor Sea, along the southern shores of the Lesser Sunda Islands and the island of Java, and then across the Sunda Strait to the shores of Sumatra. Between the island of Sumatra and the Malay Peninsula the boundary is usually drawn across the Singapore Strait.
There is no universal agreement on the southern limit of the Indian Ocean. In general (and for the purposes of this article), it is defined as extending southward to the coast of Antarctica. However, many—notably in Australia—consider the portion closest to Antarctica (along with the corresponding southern extensions of the Atlantic and Pacific) to be part of the Southern (or Antarctic) Ocean. Australians often call the entire expanse south of that continent’s south coast the Southern Ocean.
The Indian Ocean has the fewest marginal seas of the major oceans. To the north are the inland Red Sea and Persian Gulf. The Arabian Sea is to the northwest, and the Andaman Sea to the northeast. The large gulfs of Aden and Oman are to the northwest, the Bay of Bengal is to the northeast, and the Great Australian Bight is off the southern coast of Australia.
The Indian Ocean differs from the Atlantic and Pacific Oceans in several other respects. In the Northern Hemisphere it is landlocked and does not extend to Arctic waters or have a temperate-to-cold zone. It has fewer islands and narrower continental shelves. It is the only ocean with an asymmetric and, in the north, semiannually reversing surface circulation. It has no separate source of bottom water (i.e., the Indian Ocean’s bottom water originates outside its boundaries) and has two sources of highly saline water (the Persian Gulf and the Red Sea). Below the surface layers, especially in the north, the ocean’s water is extremely low in oxygen.
The origin and evolution of the Indian Ocean are the most complicated of the three major oceans. Its formation is a consequence of the breakup, about 150 million years ago, of the southern supercontinent Gondwana (or Gondwanaland); of the movement to the northeast of the Indian subcontinent (beginning about 125 million years ago), which began colliding with Eurasia about 50 million years ago; and of the westward movement of Africa and separation of Australia from Antarctica some 53 million years ago. By 36 million years ago, the Indian Ocean had taken on its present configuration. Although it first opened some 125 million years ago, almost all the Indian Ocean basin is less than 80 million years old.
The oceanic ridges consist of a rugged, seismically active mountain chain that is part of the worldwide oceanic ridge system and still contains centres of seafloor spreading in several places. The ridges form an inverted Y on the ocean floor, starting in the upper northwest with the Carlsberg Ridge in the Arabian Sea, turning due south past the Chagos-Laccadive Plateau, and becoming the Mid-Indian (or Central Indian) Ridge. Southeast of Madagascar the ridge branches: the Southwest Indian Ridge continues to the southwest until it merges into the Atlantic-Indian Ridge south of Africa, and the Southeast Indian Ridge trends to the east until it joins the Pacific-Antarctic Ridge south of Tasmania. Most striking is the aseismic (virtually earthquake-free) Ninetyeast Ridge, which is the longest and straightest in the world ocean. First discovered in 1962, it runs northward along the 90° E meridian (hence its name) for 2,800 miles (4,500 km) from the zonal Broken Ridge at latitudes 31° S to 9° N and can be traced farther under the sediments of the Bay of Bengal. Other important meridional aseismic ridges include the Chagos-Laccadive, Madagascar, and Mozambique plateaus, which are not part of the global oceanic ridge system (see aseismic ridges).
The fracture zones of the Indian Ocean offset the axis of the oceanic ridges mostly in a north-south direction. Prominent are the Owen, Prince Edward, Vema, and Amsterdam fracture zones along the ridges, with the immense Diamantina Fracture Zone found to the southwest of Australia.
Seamounts are extinct submarine volcanoes that are conically shaped and often flat-topped. They rise abruptly from the abyssal plain to heights at least 3,300 feet (1,000 metres) above the ocean floor. In the Indian Ocean, seamounts are particularly abundant between Réunion and Seychelles in the Central Indian Basin and in the Vening Meinesz group near Wharton Basin. Bardin, Kohler, Nikitin, and Williams seamounts are examples.
Ocean basins are characterized by smooth, flat plains of thick sediment with abyssal hills (i.e., less than 3,300 feet high) at the bottom flanks of the oceanic ridges. The Indian Ocean’s complex ridge topography led to the formation of many basins that range in width from 200 to 5,600 miles (320 to 9,000 km). From roughly north to south they include the Arabian, Somali, Mascarene, Madagascar, Mozambique, Agulhas, and Crozet basins in the west and the Central Indian (the largest), Wharton, and South Australia basins in the east.
The continental shelf extends to an average width of about 75 miles (120 km) in the Indian Ocean, with its widest points (190 miles [300 km]) off Mumbai (Bombay) and northwestern Australia. The island shelves are only about 1,000 feet (300 metres) wide. The shelf break is at a depth of about 460 feet (140 metres). Submarine canyons indent the steep slope below the break. The Ganges (Ganga), Indus, and Zambezi rivers have all carved particularly large canyons. Their sediment loads extend far beyond the shelf, form the rises at the foot of the slope, and contribute to the abyssal plains of their respective basins. The Ganges sediment cone is the world’s widest and thickest.
The Indian Ocean has the fewest trenches of any of the world’s oceans. The narrow (50 miles [80 km]), volcanic, and seismically active Java Trench is the world’s second longest, stretching more than 2,800 miles (4,500 km) from southwest of Java and continuing northward as the Sunda Trench past Sumatra, with an extension along the Andaman and Nicobar Islands. The portion of this system adjacent to Sumatra was the centre of a massive undersea earthquake in 2004 (magnitude 9.1) that affected some 600 miles (1,000 km) of the associated fault zone. A series of devastating tsunamis generated by the quake swamped coastal towns, particularly in Indonesia, and reached to the northern end of the Bay of Bengal and as far as the Indian Ocean’s western shores.
The immense load of suspended sediments from the rivers emptying into the Indian Ocean is the highest of the three oceans, and nearly half of it comes from the Indian subcontinent alone. These terrigenous sediments occur mostly on the continental shelves, slopes, and rises, and they merge into abyssal plains. Cones of thicknesses of at least one mile are found in the Bay of Bengal, the Arabian Sea, and the Somali and Mozambique basins. Wharton Basin off northern Australia has the oldest sediments. In the Ganges-Brahmaputra cone, sediments exceed seven miles in thickness and extend to latitude 10° S. Little sediment has accumulated along the southern Sunda Islands, probably because the Java Trench acts as a sediment trap; instead, silicic volcanic ash is found there. Brown and red clay sediments dominate in the deep sea between 10° N and 40° S away from islands and continents and are 1,000 feet thick. In the equatorial zone, an area of high oceanic productivity, calcareous and siliceous oozes are abundant. South of and beneath the Antarctic Convergence (roughly 50° S), another highly productive area, are diatomaceous algal oozes. Sediments are absent over a width of about 45 miles (70 km) on the oceanic ridge crests, and the flanks are only sparsely covered. The ocean floor is composed of basalt in various stages of alteration. The principal authigenic (ocean-formed) mineral deposits are phosphorites at depths of 130 to 1,300 feet (40 to 400 metres), ferromanganese crusts at depths of 3,300 to 8,200 feet (1,000 to 2,500 metres), ferromanganese nodules at depths greater than 10,000 feet (3,000 metres), and hydrothermal metalliferous sediments at the crests of the Carlsberg and Central Indian ridges.
Several well-defined coastal configurations are found in the Indian Ocean: estuaries, deltas, salt marshes, mangrove swamps, cliffs, coral reefs, and complexes of barrier islands, lagoons, beaches, and dunes. A particularly important estuarine system is the Hugli (Hooghly) complex, formed by three branches of the Hugli River on the Bay of Bengal near Kolkata (Calcutta). Pakistan combines one of the most tectonically active coasts in the world with the 120-mile- (190- km- ) wide Indus River delta, the mud flats and salty wastes of which often are flooded. The Indian subcontinent has the most extensive beach area (more than half of its coastline). Mangroves are found in most estuaries and deltas. The Sundarbans, the lower part of the Ganges River delta, contain the largest mangrove forests in the world. Coral reefs—in either fringing, barrier, or atoll form—are abundant around all the islands in the tropics and also are found along the southern coasts of Bangladesh, Myanmar (Burma), and India and along the eastern coast of Africa.
The Indian Ocean has few islands. Madagascar—the fourth largest island in the world—the Maldives, Seychelles, Socotra, and Sri Lanka are continental fragments. The other islands—including Christmas, Cocos, Farquhar, Prince Edward, Saint-Paul, and Amsterdam; the Amirante, Andaman and Nicobar, Chagos, Crozet, Kerguelen, and Sunda groups; and Comoros, Lakshadweep (Laccadive, Minicoy, and Amīndīvi islands), Mauritius, and Réunion—are of volcanic origin. The Andamans and Sundas are island arc–trench subduction systems, with the trench on the oceanic side of the arc.
The Indian Ocean can be subdivided into four general latitudinal climatic zones based on atmospheric circulation.
The first zone, extending north from latitude 10° S, has a monsoon climate (characterized by semiannual reversing winds). In the Northern Hemisphere “summer” (May–October), low atmospheric pressure over Asia and high pressure over Australia result in the southwest monsoon, with wind speeds up to 28 miles (45 km) per hour and a wet season in South Asia. During the northern “winter” (November–April), high pressure over Asia and low pressure from 10° S to northern Australia bring the northeast monsoon winds and a wet season for southern Indonesia and northern Australia. Although the southwest monsoon recurs regularly, it is characterized by great annual variability in the date of its onset and its intensity, neither of which can be accurately predicted. Monsoon dynamics are linked with the El Niño current anomaly and with the Southern Oscillation atmospheric pattern of the South Pacific Ocean. The region is subject to destructive cyclones that form over the open ocean and head for shore in a generally westward direction. These storms typically occur just before and after the southwest monsoon rains, with west-facing coasts generally being the most severely affected. The northwestern part of the region has the driest climate, with some areas receiving less than 10 inches (250 mm) of rainfall per year; conversely, the equatorial regions are the wettest, with an average of more than 80 inches (2,000 mm). Air temperature over the ocean in the summer is 77 to 82 °F (25 to 28 °C), but along the northeastern coast of Africa it drops to 73 °F (23 °C) as a result of upwellings of cold, deep water. The winter air temperature drops to 72 °F (22 °C) in the northern ocean, remaining almost unchanged along and south of the Equator. Cloudiness is 60 to 70 percent in summer and 10 to 30 percent in winter in the monsoon region.
The second zone, that of the trade winds, lies between 10° and 30° S. There, steady southeasterly trade winds prevail throughout the year and are strongest between June and September. Cyclones also occur east of Madagascar between December and March. In the northern part of the zone the air temperature averages 77 °F (25 °C) during the southern “winter” (May–October) and slightly higher the rest of the time; along the 30th parallel it is 61 to 63 °F (16 to 17 °C) in winter and 68 to 72 °F (20 to 22 °C) in the tropical “summer” (November–April). Warm ocean currents increase the air temperature by 4 to 6 °F (2 to 3 °C) in the western trade-wind zone over that in its eastern portion. Precipitation decreases from north to south.
The third zone lies in the subtropical and temperate latitudes of the Southern Hemisphere, between 30° and 45° S. In the northern part of the zone the prevailing winds are light and variable, while in the southern area moderate to strong westerly winds prevail. The average air temperature decreases with increasing southern latitude: from 68 to 72 °F (20 to 22 °C) down to 50 °F (10 °C) in the Austral summer (December– February) and from 61 to 63 °F (16 to 17 °C) to 43 to 45 °F (6 to 7 °C) in winter (June–August). Rainfall is moderate and uniformly distributed.
The fourth, or subantarctic and Antarctic, zone occupies the wide belt between 45° S and the continent of Antarctica. Steady westerly winds prevail, reaching gale force at times with their passage through deep Antarctic low-pressure zones. The average winter air temperature varies from 43 to 45 °F (6 to 7 °C) in the north to 3 °F (-16 °C) near the continent. The corresponding summer temperatures vary within the limits of 50 to 25 °F (10 to -4 °C). Precipitation is frequent and decreases in quantity southward, with snow common in the far south.
The hydrological characteristics of the Indian Ocean are derived from the interaction of atmospheric conditions (rain, wind, and solar energy) with the surface, the sources of its water, and the deep (thermohaline) circulation, all of which combine to form generally horizontal layers of water. Each layer has different temperature and salinity combinations that form discrete water masses of different densities, with lighter water overlying denser water. Surface-water temperature varies with season, latitude, and surface circulation; surface salinity is the balance between precipitation, evaporation, and river runoff.
Ocean surface circulation is wind-driven. In the monsoon zone, surface circulation reverses every half year and features two opposing gyres (i.e., semi-closed current systems exhibiting spiral motion) that are separated by the Indian subcontinent. During the northeast monsoon, a weak counterclockwise gyre develops in the Arabian Sea, and a strong clockwise gyre forms in the Bay of Bengal. During the southwest monsoon, the current reverses direction in both seas, with warm- and cold-core eddies forming in the Arabian Sea. South of Sri Lanka, during the northeast monsoon, the North Equatorial Current flows westward, turns south at the coast of Somalia, and returns east as the Equatorial Countercurrent between 2° and 10° S. An equatorial undercurrent flows eastward at a depth of 500 feet (150 metres) at this time. During the southwest monsoon, the North Equatorial Current reverses its flow and becomes the strong east-flowing Monsoon Current. Part of the South Equatorial Current turns north along the coast of Somalia to become the strong Somali Current. A pronounced front, unique to the Indian Ocean, at 10° S, marks the limit of the monsoon influence.
South of the monsoon region, a steady subtropical anticyclonic gyre exists, consisting of the westward-flowing South Equatorial Current between 10° and 20° S, which divides as it reaches Madagascar. One branch passes to the north of Madagascar, turns south as the Mozambique Current between Africa and Madagascar, and then becomes the strong, narrow (60 miles [95 km]) Agulhas Current along South Africa before turning east and joining the Antarctic Circumpolar Current south of 45° S; the other branch turns south to the east of Madagascar and then curves back to the east as the South Indian Current at about 40° to 45° S. The current system at the eastern boundary of the ocean is undeveloped, but the West Australian Current flowing north from the South Indian Current closes the gyre to a certain extent. Only the Antarctic Circumpolar Current reaches the ocean floor. The Agulhas Current extends down to about 4,000 feet (1,200 metres) and the Somali Current to about 2,600 feet (800 metres); the other currents do not penetrate farther than 1,000 feet (300 metres).
Below the influence of the surface currents, water movement is sluggish and irregular. Two sources of highly saline water enter the Indian Ocean via the Arabian Sea, one from the Persian Gulf and the other from the Red Sea, and sink below the fresher surface water to form the North Indian High Salinity Intermediate Water between 2,000 and 3,300 feet (600 and 1,000 metres). This layer spreads east into the Bay of Bengal and as far south as Madagascar and Sumatra. Below this layer is the Antarctic Intermediate Water to about 5,000 feet. Between 5,000 and 10,000 feet (1,500 and 3,000 metres) is the North Atlantic Deep Water (named for the source of this current), and below 10,000 feet is Antarctic Bottom Water from the Weddell Sea. These cold, dense layers creep slowly northward from their source in the Antarctic Circumpolar Region, becoming nearly anoxic (oxygen-deficient) en route. Unlike the Atlantic and Pacific oceans, the Indian Ocean has no separate source of bottom water.
Upwelling is a seasonal phenomenon in the Indian Ocean because of the monsoon regime. During the southwest monsoon, upwelling occurs off the Somali and Arabian coasts and south of Java. It is most intense between 5° and 11° N, with replacement of warmer surface water by water of about 57 °F (14 °C). During the northeast monsoon, strong upwelling occurs along the western coast of India. Midocean upwelling takes place at that time at 5° S, where the North Equatorial Current and the Equatorial Countercurrent run alongside each other in opposite directions.
A zonal asymmetry is noted in the surface-water temperature distribution in summer north of 20° S. Summer surface temperatures are higher in the eastern part of this region than in the west. In the Bay of Bengal the maximum temperature is around 82 °F (28 °C). The minimum temperature is about 72 °F (22 °C) in the area of Cape Gwardafuy (Guardafui), on the Horn of Africa, and is associated with the upwelling off the African coast. South of 20° S the temperature of the surface waters decreases at a uniform rate with increase in latitude, from 72 to 75 °F (22° to 24 °C) to 30 °F (-1 °C) near Antarctica. Near the Equator, northern winter surface-water temperatures in excess of 82 °F (28 °C) are encountered in the eastern part of the ocean. Winter surface temperatures are about 72 to 73 °F (22 to 23 °C) in the northern portion of the Arabian Sea, and 77 °F (25 °C) in the Bay of Bengal. At 20 °S the temperature is about 72 to 75 °F (22 to 24 °C); at the 40th parallel, 57 to 61 °F (14 to 16 °C); and at the coast of Antarctica, 30 to 32 °F (-1 to 0 °C).
Overall, the salinity of Indian Ocean surface waters varies between 32 and 37 parts per thousand, with large local differences. The Arabian Sea has a dense, high-salinity layer (37 parts per thousand) to a depth of about 400 feet (120 metres) because of high evaporation rates at subtropical temperatures with moderate seasonal variations. Salinity in the surface layer of the Bay of Bengal is considerably lower, often less than 32 parts per thousand, because of the huge drainage of fresh water from rivers. High surface salinity (greater than 35 parts per thousand) is also found in the Southern Hemisphere subtropical zone between 25° and 35° S; while a low-salinity zone stretches along the hydrological boundary of 10° S from Indonesia to Madagascar. Antarctic surface-water salinity generally is below 34 parts per thousand.
Ice is formed in the extreme south during the Antarctic winter. Between January and February the melting ice along the Antarctic coast is broken up by severe storms and, in the form of large blocks and broad floes, is carried away by wind and currents to the open ocean. In some coastal areas the tongues of ice-shelf glaciers break off to form icebergs. West of the 90° E meridian the northern limit for floating ice lies close to 65° S. To the east of that meridian, however, floating ice is commonly encountered to 60° S; icebergs are sometimes found as far north as 40° S.
Examples of all three tidal types—diurnal, semidiurnal, and mixed—can be found in the Indian Ocean, although semidiurnal (i.e., twice daily) are the most widespread. Semidiurnal tides prevail on the coast of eastern Africa as far north as the Equator and in the Bay of Bengal. The tides are mixed in the Arabian Sea and the inner part of the Persian Gulf. The southwestern coast of Australia has a small area of diurnal (daily) tides, as do the coast of Thailand in the Andaman Sea and the south shore of the central Persian Gulf.
Tidal ranges vary considerably from place to place in the Indian Ocean and its adjacent seas. Port Louis, Mauritius, for instance, has a spring tidal range of only 1.6 feet (0.5 metres), characteristic of islands in the open ocean. Other areas with low tidal ranges are Chennai (Madras), India (4.3 feet [1.3 metres]) and Colombo, Sri Lanka (2.3 feet [0.7 metres]). The greatest tidal ranges are found in the Arabian Sea, notably at Bhavanagar, India, in the Gulf of Khambat (38 feet [11.6 metres]), and in the Gulf of Kachchh at Navlakhi (25.5 feet [7.8 metres]). High tidal ranges are also found in the eastern ocean; Sagar Island in India, at the head of the Bay of Bengal, has a range of 17.4 feet (5.3 metres), and for Yangon (Rangoon), Myan., at the north end of the Andaman Sea, the range is 18.4 feet (5.6 metres). Moderate ranges are found at Durban, S.Af., and Karachi, Pak. (both about 7.5 feet [2.3 metres]), and the Shatt Al-’Arab, Iraq (11.3 feet [3.4 metres]).
By far the most valuable mineral resource is oil, and the Persian Gulf is the largest oil-producing region in the world. Exploration for offshore petroleum and natural gas also has been under way in the Arabian Sea and the Bay of Bengal, both of which are believed to have large reserves. Other sites of exploration activity are off the northwestern coast of Australia, in the Andaman Sea, off the coast of Africa south of the Equator, and off the southwestern coast of Madagascar. Other than the countries of the Persian Gulf, only India produces commercial quantities of oil from offshore areas, with a large proportion of its total production coming from fields off the coast of Mumbai. Some natural gas also is produced from fields off the northwestern coast of Australia.
Another potentially valuable mineral resource is contained in manganese nodules, which abound in the Indian Ocean. Sampling sites throughout the central part of the ocean, as far south as South Africa, and east in the South Australian Basin have yielded nodules; the manganese content has been highest in the east and lowest toward the northwest. The difficulty in mining and processing these minerals, despite advances in technology, has precluded their commercial extraction. Other minerals of potential commercial value are ilmenite (a mixture of iron and titanium oxide), tin, monazite (a rare-earth mineral), zircon, and chromite, all of which are found in nearshore sand bodies.
The greater part of the water area of the Indian Ocean lies within the tropical and temperate zones. The shallow waters of the tropical zone are characterized by numerous corals and other organisms capable of building, together with calcareous red algae, reefs and coral islands. These coralline structures shelter a thriving marine fauna consisting of sponges, worms, crabs, mollusks, sea urchins, brittle stars, starfish, and small but exceedingly brightly coloured reef fish.
The major portion of the tropical coasts is covered with mangrove thickets with an animal life specific to that environment. Mangroves act to stabilize the land along the coastal margin and are important breeding and nursery grounds for offshore species.
Small crustaceans, including more than 100 species of minute copepods, form the bulk of the animal life, followed by small mollusks, jellyfish, and polyps, and by other invertebrate animals ranging from single-celled radiolaria to the large Portuguese man-of-war, whose tentacles may reach a length of some 165 feet (50 metres). Squid form large schools. Of the fishes, the most abundant are several species of flying fish, luminous anchovies, lantern fish, large and small tunnies, sailfish, and various types of sharks. Here and there are found sea turtles and large marine mammals, such as dugongs (or sea cows), toothed and baleen whales, dolphins, and seals. Among the birds, the most common are albatrosses and frigate birds, and several species of penguins populate the islands lying in the ocean’s temperate zone and along the Antarctic coast.
The upwellings that occur in several coastal regions of the Indian Ocean—particularly in the northern Arabian Sea and along the South African coast—cause nutrients to concentrate in surface waters. This, in turn, produces immense quantities of phytoplankton that are the basis for large populations of commercially valuable marine animals. Despite great fishery potentials, however, most commercial fishing is done by small-scale fishermen at lower depths, while deep-sea resources (with the exception of tuna) remain poorly fished.
The principal coastal species—shrimp, croakers, snappers, skates, and grunts—are caught by littoral countries, while pelagic fish of higher value—including species of tuna and tunalike species such as billfish that are found in tropical and subtropical waters—are taken mostly by the world’s major fishing nations (e.g., Japan, South Korea, and Russia). Shrimp is the most important commercial species for coastal countries, with India accounting for the largest catch. Lesser quantities of sardines, mackerel, and anchovies also are exploited by littoral states. Since coastal nations now can claim sovereignty over resources within an exclusive economic zone that extends 200 nautical miles (230 statute miles, or 370 km) from their coasts, it has become possible for small countries such as the Maldives to increase national income by selling fishing rights in their zones to the major fishing nations that have the capital and technology to exploit pelagic resources.
The economic development of the littoral countries since the mid-20th century has been uneven, following attainment of independence by most states. The formation of regional trade blocs led to an increase in sea trade and the development of new products. Most Indian Ocean states have continued to export raw materials and import manufactured goods produced elsewhere, with a few exceptions like Australia, India, and South Africa. Petroleum dominates commerce, as the Indian Ocean has come to be an important throughway for transport of crude oil to Europe, North America, and East Asia. Other major commodities include iron, coal, rubber, and tea. Iron ore from Western Australia and from India and South Africa is shipped to Japan, while coal is exported to the United Kingdom from Australia via the Indian Ocean. Processed seafood has emerged as a major export item from the littoral states. In addition, tourism has grown in importance on many of the islands.
Shipping in the Indian Ocean can be divided into three components: dhows, dry-cargo carriers, and tankers. For more than two millennia the small, lateen-rigged sailing vessels called dhows were predominant. The dhow trade was particularly important in the western Indian Ocean, where these vessels could take advantage of the monsoon winds; a great variety of products were transported between ports on the coast of East Africa and ports on the Arabian Peninsula and on the west coast of India (notably Mumbai, Mangalore, and Surat). Most dhow traffic has been supplanted by larger, powered ships and by land transport, and the remaining dhows have been equipped with auxiliary engines.
Much of the Indian Ocean’s dry-cargo shipping is now containerized. Most container ships enter and exit the Indian Ocean via the Cape of Good Hope, the Suez Canal and Red Sea, and the Strait of Malacca. South Africa and India have their own merchant fleets, but most of the other littoral states have only a few merchant vessels and depend on the ships of other countries to carry their cargoes. Most other dry cargo is transported by bulk carriers, mainly those used to carry iron ore from India, southern Africa, and western Australia to Japan and Europe. An important route from western Australia is via the Sunda Strait and the South China Sea to Japan. Major ports of the Indian Ocean include Durban (S.Af.), Maputo (Mozam.), and Djibouti (Djib.) along the African coast; Aden (Yemen) on the Arabian Peninsula; Karachi, Mumbai, Chennai, and Kolkata on the Indian subcontinent and Colombo in Sri Lanka; and Melbourne, Port Adelaide Enfield, and Port Hedland in Australia.
Tanker traffic moves primarily from ports in the Persian Gulf across the northern Indian Ocean to the Strait of Malacca and from the Persian Gulf south along the coast of Africa and around the Cape of Good Hope. The route via the Suez Canal has become far less important as the size of tankers has surpassed the canal’s capacity; the size of these tankers, however, compensates for the longer distances now required to move oil from the Persian Gulf to Europe. The largest tankers must now use the Lombok Strait through the Lesser Sunda Islands to carry oil to Japan, since their drafts are too great for the route through the Malacca and Singapore straits.
European colonial exploitation of Indian Ocean resources resulted in the first clear evidence of the degradation of both the terrestrial and oceanic environments. Deforestation, cultivation, and guano mining have had undesirable effects on terrestrial ecosystems. Guano mining, which removed vegetation and scraped the land surface, has caused the destruction of much native flora and fauna, and hunting and the introduction of exotic species have altered the ecological balance that previously existed. Man-made threats to the oceanic environment are of more recent origin. One is the quantity of domestic and industrial waste that has accumulated in nearshore waters as a result of increased urbanization and industrialization along the coast. This has been most evident in India, which is the most populous country of the region. Another is the concern caused by the transport of large quantities of crude oil across the ocean and its adjacent semienclosed seas. Oil spills from normal tanker operations and occasional large-scale tanker catastrophes have had deleterious effects on phytoplankton and zooplankton, both necessary parts of the food chain of commercial fisheries. The East African coast, the Arabian Sea, and the approaches to the Strait of Malacca are areas in which the threat of oil pollution and major phytoplankton productivity coincide.
There is evidence that the Egyptians explored the Indian Ocean as early as about 2300 bc, when they sent maritime expeditions to the “land of Punt,” which was somewhere on the Somali coast. The expeditions, which may have begun even earlier—perhaps about 2900 bc, were numerous until about 2200 bc. Egyptian annals make no mention of journeys to Punt during the period 2200–2100 bc, but they began again in the 11th dynasty (2081–1938 bc), and records mention them continuously until the 20th dynasty (1190–1075 bc).
Early trade in the northwestern Indian Ocean was aided by an irrigation canal (navigable in high water) through the Isthmus of Suez that was built by the Egyptians during the 12th dynasty (1938–c. 1756 bc) and operated almost continuously until it was filled in ad 775. Early seafarers made good use of their knowledge of the monsoons and their associated currents; Arab sailors in their lateen-rigged dhows traded along the East African coast as far south as Sofala (present-day Nova Sofala, Mozam.) and north into the Red Sea and Persian Gulf. The writings of medieval Arab and Persian pilots from the 9th to the 15th century include detailed sailing instructions and information on navigation, winds, currents, coasts, islands, and ports from Sofala to China. It was on an Indian trading vessel that the Russian voyager Afanasy Nikitin reached India in 1469. Vasco da Gama, sailing around Africa in 1497, signed on an Arabian pilot at Malindi before he crossed the Indian Ocean to reach the western shores of India.
The Dutch, English, and French followed the Portuguese to the Indian Ocean. In 1521 the Spanish navigator Juan Sebastián del Cano crossed the central part of the ocean, continuing the first voyage of circumnavigation of the globe after the death of the original commander, Ferdinand Magellan, in the Philippine Islands. The Dutch navigator Abel Tasman, pursuing voyages of discovery in the eastern Indian Ocean from 1642 to 1644, explored the northern coast of Australia and discovered the island of Tasmania. The southern waters of the Indian Ocean were explored by James Cook in 1772. Beginning in 1806 the Indian Ocean was crossed repeatedly by Russian ships commanded by Adam Johann Krusenstern, Otto von Kotzebue, and others.
Between 1819 and 1821 the expedition of the Russian explorer Fabian Gottlieb von Bellingshausen that circumnavigated Antarctica penetrated the Indian Ocean south of the 60th parallel. A number of important voyages to Antarctica followed in the 19th and early 20th centuries, led by the explorers Charles Wilkes (American), Jules-Sébastien-César Dumont d’Urville and Jean-Baptiste-Étienne-Auguste Charcot (French), James Clark Ross (Scottish), and others.
The famous round-the-world expedition of the British naval vessel Challenger, which began in 1872, marked the beginning of systematic investigation of the oceans, including the Indian Ocean. Thereafter, numerous expeditions were mounted.
Other circumnavigational voyages were made following World War II by the Danish Galathea, the Swedish Albatross, and the British Challenger II, which explored the northern portion of the Indian Ocean. During the preparation and execution of the International Geophysical Year (1957–58) and in subsequent years, scientific explorations of the southern Indian Ocean were carried out by Australian, New Zealand, Soviet, French, Japanese, and other expeditions. The International Indian Ocean Expedition (1960–65) was a cooperative effort by some three dozen research ships of many countries.
Research activity since then has built on the work of that expedition, with studies on the nature of monsoons. Several ships have crossed the Indian Ocean to collect information on mineral resources of the continental shelves and the deep ocean floor. Several legs of the Deep Sea Drilling Project (1968–83) were in the Indian Ocean. These more recent and technologically advanced scientific explorations have provided insights into the marine geology, geophysics, and resource potentials of the Indian Ocean.
|
Has the Mystery of How the Pyramids Were Made Been Solved?
The Pyramids at the Giza plateau in ancient Egypt are considered by many to be the most incredible ancient structures ever created on the face of the earth. The Great Pyramid of Giza, although built thousands of years ago, continues to astound specialists, who cannot understand how ancient builders managed to construct such a monument without the modern tools that make building far easier.
Some of the seemingly infinite amazing aspects about the Great Pyramid of Giza include:
- "Radioactive sand" has been discovered by experts by the Queens chambers.
- The structure contains enough stone to build a wall almost 2 feet high around the whole planet.
- The relationship between Pi (π) and Phi (φ) is conveyed in the fundamental proportions of the Great Pyramid.
- Hieroglyphics were never found inside the Pyramids.
How did the workers manage to build such a flawless structure? What was their reasoning for making such enormous pyramids? How did people thousands of years ago manage to transport, raise, and place massive blocks of stone weighing over 50 tons? According to the most established accounts, the Pyramids were built by colossal armies of builders. The Greek historian Herodotus was the first in history to mention this theory.
He claimed that the Pyramids at Giza were built by teams of 100,000 men, rotated month to month, over a period of 20 years. However, for this to be factual, one block of stone would have had to be perfectly placed into position every 3½ minutes, 24 hours a day. One of the greatest misconceptions, which has led to numerous misinterpretations of ancient Egyptian society, culture, and origins, is the idea that mummies were found inside the Pyramid.
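The 3½-minute figure is easy to sanity-check. Assuming the commonly cited estimate of roughly 2.3 million blocks (an assumption; the text itself gives only the 20-year figure), a quick calculation gives a similar pace:

```python
blocks = 2_300_000             # commonly cited estimate (assumption)
minutes = 20 * 365 * 24 * 60   # 20 years of round-the-clock work
print(f"{minutes / blocks:.1f} minutes per block")   # ~4.6 minutes
```

Round-the-clock work over 20 years gives about 4.6 minutes per block; restricting work to daylight hours would push the required pace to roughly 2.3 minutes per block.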
Not a single mummy has been found inside the Great Pyramid. It is a proven fact that the Great Pyramid of Giza does not hold a Pharaoh's mummy, and nothing uncovered inside the Pyramid suggests that there ever was one. The mass of the pyramid is estimated at 5,955,000 tons; multiplied by 10^15, this gives a figure close to Earth's mass in tons. To this day, not one expert has been able to answer three of the most fundamental questions surrounding the Pyramids at Giza: who constructed them, why were they constructed, and how were they constructed? Nonetheless, a documentary on YouTube may shed light on how these amazing structures were assembled thousands of years ago.
The documentary talks about the processes used to construct the Pyramids. How could materials such as wooden rollers and mud bricks take the pressure put on them by tons of stone? Thousands of men allegedly dragged the building blocks across the desert in the excruciating heat – how did the ancient project managers keep spirits up amongst the dust-covered and exhausted workers?
|
A new study may help mankind understand the gravity of climate change.
West Antarctica has lost so much ice between 2009 and 2012 that the gravity field over the region dipped, according to an announcement Friday from the European Space Agency (ESA). The conclusion is based on high-resolution measurements from satellites that map Earth’s gravity.
The gravitational fluctuation over the Antarctic Peninsula is small, but it’s further evidence that melting ice is fundamentally changing parts of the planet.
Measuring Earth’s Gravity
Scientists combined measurements from the ESA’s GOCE satellite and the lower-resolution Grace satellite, which is operated by the United States and Germany. Both satellites take detailed measurements of Earth’s gravity field and how the planet’s mass is distributed. Data from these instruments help scientists better understand the structure of Earth’s interior and its atmosphere.
Earth’s gravity field fluctuates from place to place depending on the planet’s rotation and the presence of mountains or ocean trenches.
Based on measurements from these two satellites, West Antarctica’s gravitational pull measurably decreased over three years because of its lost mass. Although you won’t feel a difference in the planet under your feet, the findings are further proof that, yes, ice is melting in Antarctica.
The findings from GOCE and Grace gel with data from a separate mission, ESA’s CryoSat satellite, which carries a radar altimeter (a device that uses radio waves to map the terrain’s altitude). It found that the rate at which ice is lost on the West Antarctic Ice Sheet has increased by a factor of three since 2009. The frozen continent has been shrinking by 77 square miles every year, according to CryoSat data.
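To see why lost ice is detectable from orbit at all, a back-of-the-envelope estimate helps. The sketch below treats the lost mass as a point mass and computes the change in gravitational acceleration it once contributed at satellite altitude; the 500-gigatonne mass and the 255 km GOCE altitude are illustrative assumptions, not figures from the study.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
delta_m = 500e12   # ~500 Gt of lost ice, in kg (assumed)
r = 255e3          # approximate GOCE orbital altitude, m (assumed)

delta_g = G * delta_m / r**2
print(f"delta g ~ {delta_g:.1e} m/s^2")   # ~5e-7 m/s^2
```

That is a change of roughly five parts in a hundred million of the surface value of g, which is why it takes dedicated gravity missions like GOCE and Grace to see it.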
|
Applied to styles of architecture derived from those of the motherland in a colony. American Colonial is a modification of the English Georgian or Queen Anne styles, of particular interest because very often pattern-book designs were re-interpreted for timber-framed structures, or otherwise altered, often by very subtle means. Although originally associated with the original thirteen British colonies in North America, the essentials of American Colonial architecture were often revived well into C20 all over the USA. Colonial Revival is a term given to architecture of the late C19 and early C20, especially in the USA, South Africa, and Australia. Attention had been drawn to the qualities of colonial architecture in various publications from the 1840s, and several writers advocated its revival, the catalyst for which was the Centennial International Exhibition, Philadelphia, PA (1876), at which the New England Log House and the Connecticut House attracted particular attention, as did two half-timbered buildings (the British Executive Commissioner and Delegate Residence) by the British Rogue, Thomas Harris (1830–1900), which encouraged an interest in vernacular architecture. The Colonial Revival was taken up by Peabody & Stearns (e.g. Denny House, Brush Hill, Milton, MA (1878), and the influential Massachusetts Building for the World's Columbian Exposition, Chicago, IL (1893—which was the model for a nation-wide revival)). Firms such as McKim, Mead, & White designed in both Colonial Revival and Shingle styles. A fine example of the Colonial Revival in the USA is the clap-boarded Mary Perkins Quincy House, Litchfield, CT (1904), by John Mead Howells and I. N. Phelps Stokes (1867–1944). American Colonial Revival influenced some developments elsewhere, e.g. Lutyens's work at Hampstead Garden-Suburb, London (designed 1908–10), and de Soissons's designs (from the 1920s) at Welwyn Garden City, Herts. Two further variations of Colonial Revival evolved on the West Coast of the USA: Mission Revival (from the 1890s) and Spanish Colonial Revival (from just after the 1914–18 war). A good example of the latter style is Sherwood House, La Jolla, CA (1925–8), by George Washington Smith (1876–1930—who could turn his hand to villas in medieval, Islamic, and Mediterranean modes as well).
In Australia, following the creation of a unified country at the beginning of C20, a need for a national style was urged, and the late-Georgian domestic architecture of Australia was selected as offering suitable models. The main practitioners of the Australian Colonial Revival (featuring colonnaded verandahs, sash-windows with shutters, and fanlights over doors) were W. H. Wilson (e.g. Eryldene, Gordon, Sydney (1913–14)), Robin Dods (1868–1920—e.g. several fine houses in Brisbane), and Leslie Wilkinson (1882–1973—who mixed Mediterranean features in with Australian Colonial elements, e.g. ‘Greenway’, Vaucluse, Sydney (1923)). In South Africa, the so-called Dutch Colonial or Cape Dutch style, which had developed from C17, was revived by Baker at Groote Schuur, Rondebosch (1893–8—built for Cecil John Rhodes (1853–1902)), and was quickly adopted by other South African architects. Spanish Colonial was also revived in Latin America as well as in the USA, and both it and Dutch Colonial evolved as separate styles from those found in Spain and The Netherlands. The Colonial Revival has enjoyed further revivals and interpretations at the end of C20 and the beginning of C21.
|
| Source | Quotation | Standardized Result |
|---|---|---|
| Spaulding, N.E. & Namowitz, S.N. Heath Earth Science. Massachusetts: D.C. Heath and Company, 1994. | "The moon's density, about 3.3 g/cm³, is less than Earth's and its mass is only about one eightieth Earth's." | 7.475 × 10^22 kg |
| New Book of Knowledge. Connecticut: Grolier Inc., 2001. | "Mass: 74 quintillion tons, about 1/81 the mass of Earth." | 7.383 × 10^22 kg |
| "The Moon." Time Life Space and Planets. | "Only one fourth of the diameter of Earth with just 1.25 percent of its mass, the Moon is a captive of Earth's gravitational force." | 7.475 × 10^22 kg |
| The Moon. Nine Planets. University of Arizona. | "mass: 7.35e22 kg" | 7.35 × 10^22 kg |
| Moon. NASA Solar System Exploration. | "Mass: 7.3483e25 g" | 7.348 × 10^22 kg |
Scientists still argue about how the moon came into existence. Some say that it formed with the Earth and then broke off after colliding with another body. Others say that it was captured by Earth's gravitation. The moon is Earth's satellite, and it is among the largest satellites relative to the size of the planet it orbits.
The moon has a diameter of 3,476 km and orbits the Earth at a distance of 384,400 km at a speed of about 3,600 km/h. The moon passes through 8 phases during its cycle, which lasts 29.5 days. Ocean tides are an effect of the gravitational forces between the Earth and the moon. The moon has no global magnetic field and no atmosphere. Its gravity is about a sixth of the Earth's, making it easier to launch a spacecraft from the moon. The moon's layered structure (the crust, mantle, molten zone, and core) was deduced from recordings of moonquakes. The moon's surface is covered mainly with craters.
In calculating the results, the mass of the Earth used was 5.98 × 10²⁴ kg.
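To illustrate how the standardized results above were computed: the "one eightieth" figure standardizes as 5.98 × 10²⁴ kg ÷ 80 ≈ 7.475 × 10²² kg, and the "1/81" figure as 5.98 × 10²⁴ kg ÷ 81 ≈ 7.38 × 10²² kg.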
Ada Li -- 2002
| Bibliographic Entry | Result (w/ surrounding text) | Standardized Result |
|---|---|---|
| Hewitt, Paul G. Conceptual Physics. Addison-Wesley, 1987: Table B-1. | "Mass of the Earth = 5.98 × 10²⁴ kg ... Mass of the moon = 7.36 × 10²² kg" | 7.36 × 10²² kg |
| Lide, David R. Handbook of Chemistry and Physics. 81st edition. 2000–2001: 14-3. | | 7.3483 × 10²² kg |
| Moore, Patrick & Tirion, Wil. Cambridge Guide to Stars and Planets. Reed International, 1993: 25. | "The moon has one-quarter the diameter and 0.012 the mass of the Earth." | 7.2 × 10²² kg |
| Giancoli, Douglas C. Physics: Principles with Applications. 1980: inside front cover. | "Moon: Mass 7.4 × 10²² kg" | 7.4 × 10²² kg |
| McGraw Hill Encyclopedia of Science and Technology. McGraw Hill, vol. 11: 418. | "Mass 1/81.301 Earth's mass, or 1.6 × 10²³ lb (7.348 × 10²² kg)" | 7.348 × 10²² kg |
The moon is the only natural satellite of the Earth. The moon takes about 27 days, 7 hours, and 43 minutes to complete one revolution of its elliptical orbit around the Earth. The moon is 384,400 km (238,857 miles) away from the Earth and moves around the Earth at an average speed of 3,700 km/h (2,300 mph). Luna, better known as the moon, is only one-fourth the size of the Earth, having a diameter of 3,476 km (2,160 miles), which is less than the distance across the United States. The volume of the moon is about 1/50 that of the Earth. The Earth and its moon could be considered twin planets compared with huge planets like Jupiter and Saturn, which dwarf their biggest moons. The moon's mass is about 0.012 times that of the Earth (7.35 × 10²² kg). The moon's gravity is one-sixth that of the Earth, which explains why astronauts cannot walk on the moon as they do on Earth.
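As a quick consistency check on these figures: one orbit covers roughly 2π × 384,400 km ≈ 2,415,000 km, and dividing by the orbital period of about 27.3 days (≈ 655 hours) gives 2,415,000 km ÷ 655 h ≈ 3,700 km/h, in agreement with the average speed quoted above.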
The origin of the moon is uncertain, but the leading theory is the "giant impact" (or "planetesimal impact") hypothesis. According to this theory, a Mars-sized body hit Earth and the moon formed from the debris of both that body and Earth. Since the moon has no atmosphere or clouds, its surface rocks do not weather chemically as they would under an atmosphere. Based on the oldest lunar rocks found, scientists believe the moon is approximately 4.5 billion years old.
The moon has played an important role in the Earth's development over billions of years. Earth's wobble is stabilized by the presence of the moon, which has led to a more stable climate over long periods of time. This may have affected the course of the development and growth of life on Earth.
As of 2002, there have been 71 missions to the moon, 12 people have landed on the moon, and in 1998 the Lunar Prospector spacecraft reported finding water ice on the moon's surface.
Saad Dar -- 2002
Amino acids are small organic molecules characterized by two specific functional groups. They are the building blocks of protein molecules. Amino acids can be found inside all known organisms, and unique varieties can be created artificially.
The name "amino acid" comes from the presence of two groups: an amine (-NH2) and an acid (-COOH). Any molecule containing both these groups is technically an amino acid, but the term is generally used for a selection of short molecules of biological importance, such as the simplest, glycine, with the structure H2N-CH2-COOH. These molecules differ by their side chains, which give each a different property such as polarity, the property that governs whether the molecule is attractive to water or not. (This property is important in protein folding as described below.)
The amine group of one amino acid can chemically join to the acid group of another amino acid to form an amide bond (in this context called a peptide bond). This enables long chains of amino acids to form. These amino-acid-based polymers are known as peptides and can vary in length from just two units to several hundred or thousand.
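For example, two molecules of glycine can condense to form the dipeptide glycylglycine, releasing water; the -CO-NH- linkage in the product is the amide (peptide) bond:

H₂N-CH₂-COOH + H₂N-CH₂-COOH → H₂N-CH₂-CO-NH-CH₂-COOH + H₂O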
An important property of the naturally occurring amino acids is their stereochemistry. The central carbon atom in almost all of them is chiral, with four different groups attached to it. These four groups can be arranged in space in two ways that are mirror images of each other, described in lay terms as left- or right-handed, in reference to how hands are essentially identical but mirror images that cannot be superimposed.
Implications of Stereochemistry
Almost all naturally occurring amino acids have the same absolute stereochemical configuration, the (S) configuration. This is at odds with the intuition that there should be a 50:50 mixture of each: both forms have equal energy and equal chemical properties, so there is no way to distinguish between them without another chiral source.
Although stereochemistry can be conserved from chemical precursors, this doesn't fully explain where it initially came from, leading some people to conclude that there must be some form of design or creation going on. The naturalistic explanation of why one form was favoured over the other is not fully formed, although creationists neglect both diastereoisomers (when two chiral molecules interact, the resulting complexes are not chemically or energetically equivalent) and the presence of other chiral agents such as polarised light.
In the genetic code, a group of three nucleotides (a codon) encodes a specific amino acid. A gene's DNA is first transcribed into messenger RNA; as a ribosome moves along the mRNA strand, the codons are read in sequence to produce a polypeptide. Sufficiently long chains, with the assistance of other molecules, then fold up to become active proteins or enzymes.
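To make the codon-reading step concrete, here is a minimal illustrative sketch in Python. It is a toy model only: the codon table is truncated to a handful of entries (the real genetic code has 64 codons), and actual translation is carried out by the ribosome and transfer RNAs rather than a lookup table.

# Toy model of translation: read an mRNA string one codon (3 bases) at a time.
CODON_TABLE = {
    "AUG": "Met",                       # start codon, methionine
    "GGU": "Gly", "GGC": "Gly",         # glycine, the simplest amino acid
    "UUU": "Phe",                       # phenylalanine
    "GCU": "Ala",                       # alanine
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Return the peptide encoded by an mRNA string, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGGGUUUUGCUUAA"))     # ['Met', 'Gly', 'Phe', 'Ala']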
Essential amino acids
Of the 20 amino acids used in human biology (plus one special rare amino acid usually not counted), the human body can naturally synthesize 12. The other 9 (histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine) must be obtained from outside sources, i.e. by eating protein. Just about any protein source will have at least some of the essential amino acids in it, but if you subsist entirely on fats and carbohydrates with no protein intake, you'll eventually develop a protein deficiency condition such as kwashiorkor.
The Rock Pigeon (Columba livia), or Rock Dove, is a member of the bird family Columbidae (doves and pigeons). In common usage, this bird is often simply referred to as the "pigeon". The species includes the domestic pigeon, and escaped domestic pigeons have given rise to the feral pigeon.
Wild Rock Pigeons are pale grey with two black bars on each wing, although domestic and feral pigeons are very variable in colour and pattern. There are few visible differences between males and females. The species is generally monogamous, with two squabs (young) per brood. Both parents care for the young for a time.
Habitats include various open and semi-open environments, including agricultural and urban areas. Cliffs and rock ledges are used for roosting and breeding in the wild. Originally found wild in Europe, North Africa, and western Asia, feral Rock Pigeons have become established in cities around the world. The species is abundant, with an estimated population of 17 to 28 million feral and wild birds in Europe.
The species is also known as the Rock Dove or Blue Rock Dove, the former being the official name used by the British Ornithologists' Union and the American Ornithologists' Union until 2004, at which point they changed their official listing of the bird to Rock Pigeon. In common usage, this bird is often simply referred to as the "pigeon". Baby pigeons are called squabs.
The adult female is almost identical to the male, but the iridescence on the neck is less intense and more restricted to the rear and sides, while that on the breast is often very obscure.
The white lower back of the pure Rock Pigeon is its best identification character, the two black bars on its pale grey wings are also distinctive. The tail has a black band on the end and the outer web of the tail feathers are margined with white. It is strong and quick on the wing, dashing out from sea caves, flying low over the water, its lighter grey rump showing well from above.
Young birds show little lustre and are duller. The eye colour of the pigeon is generally orange, but a few pigeons may have white-grey eyes. The eyelids are orange and are encapsulated in a grey-white eye ring. The feet are red to pink.
When circling overhead, the white underwing of the bird becomes conspicuous. In its flight, behaviour, and voice, which is more of a dovecot coo than the phrase of the Wood Pigeon, it is a typical pigeon. Although it is a relatively strong flier, it also glides frequently, holding its wings in a very pronounced V shape as it does. Though fields are visited for grain and green food, it is nowhere so plentiful as to be a pest.
Pigeons feed on the ground in flocks or individually. They roost together in buildings or on walls or statues. When drinking, most birds take small sips and tilt their heads backwards to swallow the water. Pigeons are able to dip their bills into the water and drink continuously without having to tilt their heads back. When disturbed, a pigeon in a group will take off with a noisy clapping sound.
Homing pigeons are well known for their ability to find their way home from long distances. Despite these demonstrated abilities, wild Rock Pigeons are rather sedentary and rarely leave their local areas.
The Rock Pigeon has a restricted natural resident range in western and southern Europe, North Africa, and into South Asia. It is often found in pairs in the breeding season but is usually gregarious. The species (including ferals) has a large range, with an estimated global extent of occurrence of 10 million km², and a large global population, including an estimated 17–28 million individuals in Europe. Fossil evidence suggests the Rock Pigeon originated in southern Asia, and skeletal remains unearthed in Israel confirm its existence there for at least three hundred thousand years. Its natural habitat is cliffs, usually on coasts. Its domesticated form, the feral pigeon, has been widely introduced elsewhere and is common, especially in cities, over much of the world, including Great Britain, Ireland, and much of the species' former range. A Rock Pigeon's life span ranges from 3–5 years in the wild to 15 years in captivity, though longer-lived specimens have been reported. The species was first introduced to North America in 1606 at Port Royal, Nova Scotia.
The Rock Pigeon breeds at any time of the year, but peak times are spring and summer. Nesting sites are situated along coastal cliff faces, as well as the artificial cliff faces created by apartment buildings with accessible ledges or roof spaces.
The nest is a flimsy platform of straw and sticks, placed on a ledge under cover, often on the window ledges of buildings. Two white eggs are laid; incubation, shared by both parents, lasts from seventeen to nineteen days.
Rock Pigeons have been domesticated for several thousand years, giving rise to the domestic pigeon (Columba livia domestica). As well as pets, domesticated pigeons are utilised as homing pigeons and carrier pigeons, and so-called war pigeons have served and played important roles during wartimes, with many pigeons having received bravery awards and medals for their services in saving hundreds of human lives: including, notably, the French pigeon Cher Ami who received the Croix de Guerre for his heroic actions during World War I, and the Irish Paddy and the American G.I. Joe, who both received the Dickin Medal, amongst 32 pigeons to receive this medallion, for their gallant and brave actions during World War II. There are numerous breeds of fancy pigeons of all sizes, colours and types.
Standard techniques for the analysis of prehistoric soils have not been devised. It is unlikely that any single technique is applicable to all types of fecal remains. This is due to the various environmental conditions which affect the preservation of helminth ova. In general, gravitational sedimentation is a useful technique for isolating helminth eggs and larvae from coprolites. Latrine soils pose greater problems for helminthological examination. Although various clinical techniques have been successfully utilized in soil study, it is important to remember that some latrine soils have not yielded helminth eggs to any clinical technique. Consequently the paleoparasitologist must be ready to innovate new techniques rather than depend on clinical techniques.
Beyond the problems of technique, the research done on identification of parasites is very encouraging. At this point it appears that the measurements and morphological characteristics used to identify modern parasites can also be applied to paleoparasites.
The trends of paleoparasitological research today emphasize experimentation and quantification as well as precise identification. In the future, these trends will lead to a more rigorous study of parasites in prehistory.
Sweat glands are responsible for the production of sweat, which helps to cool the body. There are two types of sweat glands: apocrine and eccrine. Eccrine sweat glands are found all over the body, while apocrine sweat glands are only found in certain areas, such as the armpits and groin. The main difference between these two types of sweat glands is their function. Eccrine sweat glands help to cool the body, while apocrine sweat glands play a role in sexual attraction.
What are Apocrine Sweat Glands?
Apocrine sweat glands are a type of sweat gland located in the body. They are found in areas with a high concentration of hair follicles, such as the armpits and groin. Apocrine sweat glands produce a thicker, more oily secretion than other types of sweat glands. This secretion is composed of lipids, proteins, and fatty acids such as myristic acid; the characteristic odor develops when skin bacteria break these compounds down.
Apocrine sweat glands are activated during puberty and remain active throughout adulthood. In contrast, eccrine sweat glands are found throughout the body and are responsible for producing the clear, watery sweat that helps to regulate body temperature. Apocrine sweat glands are not essential for survival, but they do play a role in human social behavior. The strong odor of apocrine sweat is thought to attract mates and help people to identify their kin.
What are Eccrine Sweat Glands?
- Eccrine sweat glands are located all over the human body, with the highest density on the palms of the hands and the soles of the feet. These glands secrete a clear, odorless fluid that helps to regulate body temperature.
- Eccrine sweat glands are controlled by the sympathetic nervous system, and they become active when the body is under stress. When eccrine sweat glands are stimulated, they secrete a small amount of fluid onto the surface of the skin.
- This process is known as sweating, and it helps to cool the body by evaporating the sweat. Eccrine sweat glands are an important part of the human body’s cooling system, and they play a vital role in regulating body temperature.
Difference between Apocrine and Eccrine Sweat Glands
- Apocrine and eccrine sweat glands are both types of sweat glands that are found in human skin. Apocrine sweat glands are usually found in the armpits and groin, while eccrine sweat glands are found all over the body.
- Apocrine sweat glands produce a thicker, more viscous sweat that is high in fat and protein. This type of sweat is secreted when the body is under stress or during exercise.
- Eccrine sweat glands, on the other hand, produce a thinner, more watery sweat that is mostly composed of water and salt. This type of sweat is secreted when the body is trying to regulate its temperature. Both types of sweat glands play an important role in human physiology, but they have different functions.
Eccrine sweat glands are the most common type and they produce a thin, watery sweat. Apocrine sweat glands are found in areas where there is an abundance of hair follicles, such as the scalp, armpits, and groin. These glands produce thick, oily sweat that contains fatty acids and proteins. The main difference between eccrine and apocrine sweat glands is what they release into the environment. Eccrine sweat is mostly water with some minerals like sodium and chloride while apocrine sweat has high levels of fats and proteins.
Even before the Bird Observatory started, there were counts (from 1954) of breeding pairs of Hen and Marsh Harriers within the reserve. Hen Harriers stopped breeding after 1962, but the number of pairs of Marsh Harriers has grown steadily. Over the years there have been many studies, including of territoriality, breeding biology, and population monitoring.
Territory studies of the Marsh Harrier have shown that the birds distinguish a "nest territory", a small area adjacent to the nest that is defended against all intruders (other harriers, eagles, foxes, minks, and crows), from the hunting territory, which is defended only against other harriers. Territory boundaries are established early in the breeding season, and later contacts between the territory holders, mostly males, are relatively peaceful.
“Escort” is a term that was coined during these studies; it describes a territorial male chasing off a trespassing male by following him at a certain distance until he is outside the hunting grounds. This behavior is highly characteristic and easy to observe. It looks so peaceful that you might think the two males were flying off to hunt together.
The emergence of separate nesting and hunting territories probably has to do with the availability of suitable nesting habitat (fields of reeds), which in many places is limited. It may then be appropriate for the birds to nest closer together. A sparse colony can also be good for nestlings when defending against predation by, for example, sea eagles.
Breeding Biology Studies 1965-1996
A lot of breeding biology studies have been done on the Marsh Harrier. On 19th-20th June 1972 the breeding activities at two nests were followed continuously during the day. The nestlings were then about twelve days old; in one nest there were four, in the other just one. The males provided all of the prey: 17 prey items were delivered to the brood with four nestlings and eight to the second nest with the lone nestling. Hunting activity was highest in the afternoon, between 13:00 and 17:00. The females guarded the nests against intruders and passed the delivered prey to the nestlings; their remaining time was occupied mainly with nest improvements. The female with the most offspring was noted bringing material to the nest 38 times, mostly between 06:00 and 08:00.
In 1992, pair relationships and territory use at Kvismaren were studied. As many as 14 nests were found, and in the now relatively dense population polygamy was found in three cases, each involving one male with two females. Early arrival in spring and well-developed plumage were common denominators of the polygamous males. At least two of the secondary females could have chosen another mate when more males arrived, but they stayed with the male they had first selected. A clear difference in breeding performance was found between primary and secondary females that year. In total 37 nestlings were ringed, which is very high productivity for Kvismaren.
In 1995 the hunting of the Marsh Harriers was studied in more detail; two polygamous males were identified. The year was unusual, with a relatively cold and rainy spring including snow in mid-May. The results showed that the harriers can use a wide range of prey, from voles (their staple food) to birds, chicks (of Coots), and fish. Even a nearly full-grown young Lapwing was noted as Marsh Harrier food. A logical but still interesting result was that the females of polygamous males shared their hunting areas. In 1996 the studies of the harriers continued, now with the focus on the females, which claimed the same territories as the males and began to hunt more frequently in July.
A makefile is a text file that is referenced by the make command that describes the building of targets, and contains information such as source-level dependencies and build-order dependencies.
The CDT can generate a makefile for you; such projects are called Managed Make projects. Other projects, known as Standard Make projects, allow you to define your own makefile.
# A sample Makefile
# This Makefile demonstrates and explains
# Make Macros, Macro Expansions,
# Rules, Targets, Dependencies, Commands, Goals
# Artificial Targets, Pattern Rule, Dependency Rule.

# Comments start with a # and go to the end of the line.

# Here is a simple Make Macro.
LINK_TARGET = test_me.exe

# Here is a Make Macro that uses the backslash to extend to multiple lines.
# This allows quick modification of more object files.
OBJS = \
	Test1.o \
	Test2.o \
	Main.o

# Here is a Make Macro defined by two Macro Expansions.
# A Macro Expansion may be treated as a textual replacement of the Make Macro.
# Macro Expansions are introduced with $ and enclosed in (parentheses).
REBUILDABLES = $(OBJS) $(LINK_TARGET)

# Make Macros do not need to be defined before their Macro Expansions,
# but they normally should be defined before they appear in any Rules.
# Consequently Make Macros often appear first in a Makefile.

# Here is a simple Rule (used for "cleaning" your build environment).
# It has a Target named "clean" (left of the colon ":" on the first line),
# no Dependencies (right of the colon),
# and two Commands (indented by tabs on the lines that follow).
# The space before the colon is not required but added here for clarity.
clean :
	rm -f $(REBUILDABLES)
	echo Clean done

# There are two standard Targets your Makefile should probably have:
# "all" and "clean", because they are often command-line Goals.
# Also, these are both typically Artificial Targets, because they don't typically
# correspond to real files named "all" or "clean".

# The rule for "all" is used to incrementally build your system.
# It does this by expressing a dependency on the results of that system,
# which in turn have their own rules and dependencies.
all : $(LINK_TARGET)
	echo All done

# There is no required order to the list of rules as they appear in the Makefile.
# Make will build its own dependency tree and only execute each rule once
# its dependencies' rules have been executed successfully.

# Here is a Rule that uses some built-in Make Macros in its command:
# $@ expands to the rule's target, in this case "test_me.exe".
# $^ expands to the rule's dependencies, in this case the three files
# Main.o, Test1.o, and Test2.o.
$(LINK_TARGET) : $(OBJS)
	g++ -g -o $@ $^

# Here is a Pattern Rule, often used for compiling.
# It says how to create a file with a .o suffix, given a file with a .cpp suffix.
# The rule's command uses some built-in Make Macros:
# $@ for the pattern-matched target
# $< for the pattern-matched dependency
%.o : %.cpp
	g++ -g -o $@ -c $<

# These are Dependency Rules, which are rules without any command.
# Dependency Rules indicate that if any file to the right of the colon changes,
# the target to the left of the colon should be considered out-of-date.
# The commands for making an out-of-date target up-to-date may be found elsewhere
# (in this case, by the Pattern Rule above).
# Dependency Rules are often used to capture header file dependencies.
Main.o : Main.h Test1.h Test2.h
Test1.o : Test1.h Test2.h
Test2.o : Test2.h

# Alternatively to manually capturing dependencies, several automated
# dependency generators exist. Here is one possibility (commented out)...
# %.dep : %.cpp
#	g++ -M $(FLAGS) $< > $@
# include $(OBJS:.o=.dep)
Frequently Asked Questions
Your Console view can be very useful for debugging a build.
Q1. My Console view says "Error launching builder". What does that mean?
Error launching builder (make -k clean all ) (Exec error:Launching failed)
Most probably, the build command (by default "make") is not on your path. You can put it on your path and restart Eclipse.
You can also change the build command to something that is on your path. If you are using MinGW tools to compile, you should replace the build command with "mingw32-make".
Q2. My Console view says "No rule to make target 'X'". What does that mean?
make -k clean all
make: *** No rule to make target 'clean'.
make: *** No rule to make target 'all'.
By default, the make program looks for a file most commonly called "Makefile" or "makefile". If it cannot find such a file in the working directory, or if that file is empty or the file does not contain rules for the command line goals ("clean" and "all" in this case), it will normally fail with an error message similar to those shown.
If you already have a valid Makefile, you may need to change the working directory of your build. The default working directory for the build command is the project's root directory. You can change this by specifying an alternate Build Directory in the Make Project properties. Or, if your Makefile is named something else (e.g. buildFile.mk), you can specify the name by setting the default Build command to make -f buildFile.mk.
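For example, with the hypothetical file name above, the build shown in the Console would then be invoked as:

make -f buildFile.mk clean all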
If you do not have a valid Makefile, create a new file named Makefile in the root directory. You can then add the contents of the sample Makefile (above), and modify it as appropriate.
Q3. My Console view says "missing separator".
make -k clean all
makefile:12: *** missing separator. Stop.
The standard syntax of Makefiles dictates that every command line in a build rule must begin with a Tab character. This Tab is often accidentally replaced with spaces, and because both result in white-space indentation, the problem is easily overlooked. In the sample output above, the error can be pinpointed to line 12 of the file "makefile"; to fix the problem, make sure that line begins with a real Tab.
Q4. My Console view says "Target 'all' not remade because of errors".
make -k clean all
make: *** [clean] Error 255
rm -f Test1.o Test2.o Main.o test_me.exe
g++ -g -o Test1.o -c Test1.cpp
make: *** [Test1.o] Error 255
make: *** [Test2.o] Error 255
make: *** [Main.o] Error 255
g++ -g -o Test2.o -c Test2.cpp
g++ -g -o Main.o -c Main.cpp
make: Target 'all' not remade because of errors.
The likely culprit here is that g++ is not on your Path.
The Error 255 is produced by make as a result of its command shell not being able to find a command for a particular rule.
Messages from the standard error stream (the lines saying Error 255) and standard output stream (all the other lines) are merged in the Console view here.
Q5. What's with the -k flag?
The -k flag tells make to continue making other independent rules even when one rule fails. This is helpful for building large projects.
You can remove the -k flag by turning on Project Properties > C/C++ Make Project > Make Builder > Stop on first build error
Q6. My Console view looks like:
mingw32-make clean all
process_begin: CreateProcess((null), rm -f Test1.o Test2.o Main.o test_me.exe, ...) failed.
make (e=2): The system cannot find the file specified.
mingw32-make: *** [clean] Error 2
rm -f Test1.o Test2.o Main.o test_me.exe
This means that mingw32-make was unable to find the utility "rm". Unfortunately, MinGW does not come with "rm". To correct this, replace the clean rule in your Makefile with:
clean :
	-del $(REBUILDABLES)
	echo Clean done
The leading minus sign tells make to consider the clean rule to be successful even if the del command returns failure. This may be acceptable since the del command will fail if the specified files to be deleted do not exist yet (or anymore).
The future may be closer than we think. Self-driving cars are being tested by a number of different manufacturers from Toyota to Google and more. Driverless cars may eventually reduce or replace the need for human input on the road altogether. But how do self-driving cars work?
How do Self-Driving Cars Work?
The technology that goes into self-driving cars has a lot to take into consideration: traffic lights, signs, pedestrians, other traffic, and more.
It can be easy to think that autonomous vehicles are controlled by an omnipotent, remote control center somewhere and that there are rules upon rules about how cars should act in every conceivable scenario.
The reality is that each car has an individual control system that relies on a combination of GPS, radar, lasers, and cameras to judge what’s around it. This includes pedestrians, obstacles, obstructions, other cars, and any other scenario that might crop up on the road.
Driverless vehicles largely use lidar, a term short for ‘light detection and ranging’, to “see” what’s around them. Lidar is a system that produces a 3D image of everything surrounding the car in real-time. The vehicle’s onboard control system then interprets these findings in conjunction with cameras, radars, GPS tracking, as well as deliberative architecture to respond to situations as they unfold.
Deliberative architecture is the technology capable of making decisions by combining the 3D map that is constantly generated by lidar and cameras and using that information to navigate among other cars, avoid obstacles and pedestrians, and even find shortcuts around traffic delays.
Driverless cars have what are known as ‘actuators’ to control things like acceleration, braking, and steering. Just like a person reacts when someone swerves into their lane, self-driving cars receive a signal from the onboard computer to either move or apply the brakes based on what’s around it.
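To make the idea concrete, here is a deliberately simplified, hypothetical sketch in Python of one sense-plan-act step. Every name and threshold below is invented for illustration; this is not any manufacturer's actual control code.

from dataclasses import dataclass

@dataclass
class Plan:
    steering_angle: float    # radians; positive steers left
    acceleration: float      # m/s^2; negative values mean braking

def deliberate(obstacle_distance_m, lane_offset_m):
    """Toy 'deliberative architecture': turn fused sensor readings into a plan.

    obstacle_distance_m: distance to the nearest obstacle ahead, in metres.
    lane_offset_m: how far the car has drifted left of the lane centre, in metres.
    """
    # Brake hard if something is close ahead; otherwise accelerate gently.
    acceleration = -5.0 if obstacle_distance_m < 20.0 else 0.5
    # Steer back toward the lane centre: drift left, steer right (and vice versa).
    steering_angle = -0.25 * lane_offset_m
    return Plan(steering_angle, acceleration)

# One loop iteration with made-up sensor values: an obstacle 15 m ahead,
# and the car drifting 0.4 m left of the lane centre.
plan = deliberate(obstacle_distance_m=15.0, lane_offset_m=0.4)
print(plan)   # the actuators would then apply this steering and braking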
At this time, fully roboticized cars do not exist. As the technology continues to develop and improve, driverless cars will continue to have steering wheels, brakes, and a manual override, as well as seatbelts for passengers. These are known as ‘redundant systems’ and allow passengers to regain control of the vehicle in the event of a situation that the car cannot interpret.
Still a Long Way to Go Before Self-Driving Cars are a Regular Feature on the Road
Eventually, the detection systems on driverless cars will be more accurate than human interpretation. Radar and lidar will be able to 'see' in conditions such as heavy rain, snow, fog, or darkness. With current technology, the sensors are still limited in range. Toyota is aiming to test self-driving cars on the road as early as 2020.
There are lots of benefits to consider when it comes to self-driving cars. They have the potential to:
- Reduce road deaths
- Reduce fuel consumption
- Improve the mobility of elderly or disabled people
- Free up time for the morning commute
Many advances need to be made before autonomous cars can adapt to all scenarios, and it will likely be a long time before these vehicles become a regular feature on the road.
Now that you know how self-driving cars work, you can share your newfound knowledge about this exciting new technology with friends and family.
The loss of ozone over Antarctica in the southern hemisphere is relatively well documented and widely known, especially within Australia, where venturing out into the summer sun is downright dangerous for residents of southern states (like the island state of Tasmania).
Simply put, conditions in the Arctic — on the other side of the planet — are not conducive to ozone loss.
However, in 2011, that’s exactly what happened, when ozone levels in the atmosphere over the Arctic were 20% less than the average.
Ozone loss requires three key ingredients: chlorine from man-made chlorofluorocarbons (CFCs), frigid temperatures, and sunlight. These three ingredients are not normally present in the Arctic atmosphere at the same time.
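For reference, the chlorine chemistry at the heart of this process is a catalytic cycle, which is why a small amount of chlorine can destroy a large amount of ozone. In its simplest textbook form:

Cl + O₃ → ClO + O₂
ClO + O → Cl + O₂
net: O₃ + O → 2 O₂

Sunlight is needed to liberate reactive chlorine, and the frigid temperatures produce the polar stratospheric clouds on whose surfaces the chlorine-activating reactions occur; the chlorine atom is regenerated at the end of each cycle, free to destroy another ozone molecule.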
A new study by NASA, however, has found that while chlorine in the Arctic atmosphere was the primary culprit for what has become known as the Arctic ozone hole of 2011, unusually cold and persistent temperatures also aided the ozone depletion. On top of that, uncommon atmospheric conditions blocked the wind-driven transport of ozone from the tropics, halting the resupply of ozone until April.
“You can safely say that 2011 was very atypical: In over 30 years of satellite records, we hadn’t seen any time where it was this cold for this long,” said Susan E. Strahan, an atmospheric scientist at NASA Goddard Space Flight Center in Greenbelt, Md., and main author of the new paper, which was recently published in the Journal of Geophysical Research-Atmospheres.
“Arctic ozone levels were possibly the lowest ever recorded, but they were still significantly higher than the Antarctic’s,” Strahan said. “There was about half as much ozone loss as in the Antarctic and the ozone levels remained well above 220 Dobson units, which is the threshold for calling the ozone loss a ‘hole’ in the Antarctic – so the Arctic ozone loss of 2011 didn’t constitute an ozone hole.”
Affected people have difficulty communicating with and relating to others.
People with an autism spectrum disorder also have restricted patterns of behavior, interests, and/or activities and often follow rigid routines.
Diagnosis is based on observation, reports of parents and other caregivers, and standardized autism-specific screening tests.
Most people respond best to highly structured behavioral interventions.
Autism spectrum disorders (ASDs) are neurodevelopmental disorders, that is, neurologically based conditions that can interfere with the acquisition, retention, or application of specific skills.
Autism spectrum disorders are considered a spectrum (range) of disorders because the manifestations vary widely in type and severity. Previously, ASDs were subclassified into classic autism, Asperger syndrome, Rett syndrome, childhood disintegrative disorder, and pervasive developmental disorder not otherwise specified. However, there was so much overlap that it was hard to make distinctions, so doctors currently do not use this terminology and consider these all as ASDs (except for Rett syndrome, which is a distinct genetic disorder). ASDs are different from intellectual disability, although many people with ASDs have both. The classification system emphasizes that, within the broad spectrum, different features may occur more or less strongly in a given individual.
These disorders occur in about 1 of 54 people in the United States and are 4 times more common among boys than among girls. The estimated number of people identified with an autism spectrum disorder has risen because doctors and caregivers have learned more about the symptoms of the disorder.
Causes of Autism Spectrum Disorders
The specific causes of autism spectrum disorders are not fully understood, although they are often related to genetic factors. For parents of one child with an ASD, the risk of having another child with an ASD is around 3 to 10%. Several genetic abnormalities, such as Fragile X syndrome, tuberous sclerosis complex, and Down syndrome, may be associated with ASDs.
Prenatal infections, for example, viral infections such as rubella or cytomegalovirus, may play a role. Prematurity may also be a risk factor: the greater the level of prematurity, the greater the risk of an ASD.
Some children who have an ASD have differences in how their brain is formed and how it functions.
It is clear, however, that ASDs are not caused by poor parenting, adverse childhood conditions, or vaccinations (see also measles-mumps-rubella (MMR) vaccine and concerns about autism).
Symptoms of Autism Spectrum Disorders
Symptoms of autism spectrum disorders may appear in the first 2 years of life, but in milder forms symptoms may not be detected until school age.
Children with an autism spectrum disorder develop symptoms in the following areas:
Social communications and interactions
Restricted, repetitive patterns of behavior
Symptoms of an autism spectrum disorder range from mild to severe, but most people require some level of support in both areas. People with an ASD vary widely in their ability to function independently in school or society and in their need for supports. In addition, about 20 to 40% of children with an ASD, particularly those with an IQ less than 50, develop seizures before reaching adolescence. In about 25% of affected children, a loss of previously acquired skills (regression in development) occurs around the time of diagnosis and may be the initial indicator of a disorder.
Social communications and interactions
Often, infants with an ASD cuddle and make eye contact in atypical ways. Although some affected infants become upset when separated from their parents, they may not turn to parents for security as do other children. Older children often prefer to play by themselves and do not form close personal relationships, particularly outside of the family. When interacting with other children, they may not use eye contact and facial expressions to establish social contact, and they have difficulty interpreting the moods and expressions of others. They may have difficulty knowing how and when to join a conversation and difficulty recognizing inappropriate or hurtful speech. These factors may cause others to view them as odd or eccentric and thus lead to social isolation.
The most severely affected children never learn to speak. Those who learn may do so much later than normal and use words in an unusual way. They often repeat words spoken to them (echolalia), use memorized scripted speech in place of more spontaneous language, or reverse the normal use of pronouns, particularly using you instead of I or me when referring to themselves. Conversation may not be interactive, and, when present, is used more to label or request than to share ideas or feelings. People with an autism spectrum disorder may speak with an unusual rhythm and pitch.
Behavior, interests, and activities
People with an autism spectrum disorder are often very resistant to changes, such as new food, toys, furniture arrangement, and clothing. They may become excessively attached to particular inanimate objects. They often do things repetitively. Younger and/or more severely affected children often repeat certain acts, such as rocking, hand flapping, or spinning objects. Some may injure themselves through repetitive behaviors such as head banging or biting themselves. Less severely affected people may watch the same video multiple times or insist on eating the same food every meal. People with an ASD often have very specialized, often unusual interests. For instance, a child may be preoccupied with vacuum cleaners.
People with an autism spectrum disorder often have over-reactions or under-reactions to sensations. They may be extremely repelled by certain odors, tastes, or textures, or react unusually to painful, hot, or cold sensations that other people find distressing. They may ignore some sounds and be extremely bothered by others.
Many people with an ASD have some degree of intellectual disability (an IQ less than 70). Their performance is uneven. They usually do better on tests of motor and spatial skills than on verbal tests. Some people with an ASD have idiosyncratic or "splinter" skills, such as the ability to carry out complex mental arithmetic or advanced musical skills. Unfortunately, such people often cannot use these skills in a productive or socially interactive way.
Diagnosis of Autism Spectrum Disorders
A doctor's evaluation
Reports of parents and other caregivers
Standardized autism-specific screening tests
The diagnosis of an autism spectrum disorder is made by close observation of the child in a playroom setting and careful questioning of parents and teachers. Standardized autism-specific screening tests, such as the Social Communication Questionnaire for older children and the Modified Checklist for Autism in Toddlers, Revised, with Follow-Up (M-CHAT-R/F), may help identify children who need more in-depth testing. Psychologists and other specialists may use the more extensive Autism Diagnostic Observation Schedules and other tools.
In addition to giving standardized tests, doctors do certain blood or genetic tests to look for underlying treatable or inherited medical disorders, such as hereditary metabolic disorders and Fragile X syndrome.
Prognosis for Autism Spectrum Disorders
The symptoms of autism spectrum disorders generally persist throughout life. The prognosis is strongly influenced by how much usable language the child has acquired by elementary school age. Children with an ASD who have lower measured intelligence—for example, those who score below 50 on standard IQ tests—are likely to need more intensive support as adults.
Treatment of Autism Spectrum Disorders
Applied behavior analysis
Speech and language therapy
Sometimes drug therapy
Applied behavior analysis (ABA) is an approach to therapy in which children are taught specific cognitive, social, or behavioral skills in a stepwise fashion. Small improvements are reinforced and progressively built upon to improve, change, or develop specific behaviors in children who have an ASD. These behaviors include social skills, language and communication skills, reading, and academics as well as learned skills such as self-care (for example, showering and grooming), daily-living skills, punctuality, and job competence. This therapy is also used to help children minimize behaviors (for example, aggression) that may interfere with their progress. Applied behavior analysis therapy is tailored to meet the needs of each child and is typically designed and supervised by professionals certified in behavior analysis. In the United States, ABA may be available as part of an Individualized Educational Plan (IEP) through schools and in some states is covered by health insurance. Another intensive behaviorally based intervention is the Developmental, Individual-differences, and Relationship-based (DIR®) model, also called Floortime. DIR® draws on the child's interests and preferred activities to help build social interaction skills and other skills. At present, there is less evidence to support DIRFloortime® than ABA, but both therapies can be effective.
Educational programs for school-aged children with an ASD should address social skills development and speech and language delays and help prepare children for education after high school or for employment.
The federal Individuals with Disabilities Education Act (IDEA) requires public schools to provide free and appropriate education to children and adolescents with an ASD. Education must be provided in the least restrictive, most inclusive setting possible—that is, a setting where the children have every opportunity to interact with nondisabled peers and have equal access to community resources. The Americans with Disability Act and Section 504 of the Rehabilitation Act also provide for accommodations in schools and other public settings.
Drug therapy cannot change the underlying disorder. However, the selective serotonin reuptake inhibitors (SSRIs), such as fluoxetine, paroxetine, and fluvoxamine, are often effective in reducing ritualistic behaviors of people with an ASD. Antipsychotic drugs, such as risperidone, may be used to reduce self-injurious behavior, although the risk of side effects (such as weight gain and movement disorders) must be considered. Mood stabilizers and psychostimulants may be helpful for people who are inattentive or impulsive or who have hyperactivity.
Although some parents try special diets, gastrointestinal therapies, or immunologic therapies, currently there is no good evidence that any of these therapies are helpful in children with an autism spectrum disorder. Other complementary therapies, such as facilitated communication, chelation therapy, auditory integration training, and hyperbaric oxygen therapy, have not been proved effective. In considering such treatments, families should consult with the child's primary care physician regarding benefits and risks.
The following are English-language resources that may be useful.
Individuals with Disabilities Education Act (IDEA): A United States law that makes available free appropriate public education to eligible children with disabilities and ensures special education and related services to those children
Americans with Disability Act: A United States law that prohibits discrimination based on disability
Section 504 of the Rehabilitation Act: A United States law that guarantees certain rights to people who have disabilities
Stewards of Biodiversity: How are Indigenous Tribes Helping in Wildlife Conservation?
For innumerable millennia, indigenous tribes in different corners of the world have been helping in wildlife conservation. They have walked the same path as tigers, drank the same water as wild buffaloes, and sang the same songs as native birds.
Living lives on the values of ‘no waste’ and ‘giving back to nature’ have helped conserve flora and fauna. However, the tribal people’s efforts to help conserve wildlife have frequently been ignored.
The indigenous and ethnic people have learned to live in the most hostile environmental conditions. But most interestingly, they live in localities that are immensely rich in biodiversity.
It is estimated that about 300 million indigenous people are living in the world, of whom nearly half, i.e. 150 million, live in Asia, about 30 million in Central and South America, and the rest are scattered across Australia, Europe, New Zealand, Africa, and the former Soviet Union.
Indigenous people have played a vital role in environmental management, conservation, and development processes, as they possess unparalleled traditional knowledge of eco-restoration. There is no doubt that these people know how to live in harmony with nature.
But attempts at “civilizing” them by providing substitute shelter areas and livelihood options have brought about more harm than good. Poor state policies on wildlife protection, afforestation, climate change, and tribal eviction have inflicted irreparable damage on forests and wildlife.
Many studies over the past decades have shown that stewardship by forest-dwelling communities considerably slows the rate of forest degradation.
Since mitigating and adapting to climate change requires sustainable forest management, the tribal people who have lived in and around the forests for millennia could play a crucial part.
Indigenous tribes that have been helping in wildlife conservation can offer incomparable wisdom to save various threatened flora and fauna species.
Indigenous Tribes and Wildlife
The low-carbon-footprint lifestyle of the tribal people has conserved the global environment for millennia and their wisdom and sustainable methods should be recognized, adopted, and promoted to effectively reduce harm to the environment and wildlife.
Indigenous people’s lands may harbor a significant proportion of threatened and endangered species globally. These lands cover more than one-quarter of the earth, of which a significant proportion is still free from industrial level human impacts.
History has shaped and reshaped the relationship between native people and animals. Traditionally, animals have held several integral roles in the values of every tribal cultural group throughout the globe.
In many tribal belief systems, animals are treated and revered as sentient beings, and humans are not allowed to harm or hunt these creatures. In a vast number of tribal cultures animals are treated as equals.
Various tribal creation stories often feature animals as playing a pivotal role in the creation of the universe, the earth, and the emergence of human beings.
Most tribal belief systems centralize human-animal relations as having a spiritual and reciprocal connection. The rarest native animals and plants can find a haven on tribal lands; tribes have always protected their lands and native habitats.
Tribal folklore helps in understanding and protecting various animal species, and tribal laws against hunting and animal killing can help preserve animal species in contemporary times.
Tribal people’s survival depends on the land they have lived on in harmony for generations, yet they are being evicted from protected areas in the name of conservation. The modern idea of conservation removes local people and puts a fence around forests, while at the same time letting in tourists or even allowing mining.
Over the past century, governments have been struggling to protect wildlife amid countless environmental and anthropogenic activities.
While indigenous people can offer tremendous help in preserving wildlife, their contribution has been widely overlooked. Despite the widespread nature of these problems, however, contemporary tribal animal law has largely been absent from legal scholarship.
Tribal communities own or influence the management of tens of millions of acres of land. Tribal lands provide vital habitat for many threatened and endangered flora and fauna, many of which are both biologically and culturally significant to tribes.
Yet the tribal people are being illegally evicted from their ancestral homelands in the name of “conservation.” It is often wrongly claimed that their lands are wilderness even though tribal people have been dependent on, and managed, them for years.
This proven logic, of tribal communities helping in the protection of forests and their inhabitants, is highly criticized by conservationists as they firmly believe that the presence of tribal people in the forests is harmful to wildlife and the ecosystem.
In many countries, governments often encourage the eviction of tribal communities with an agenda to boost safari, create protected areas, and attract tourism – all of which frankly fragment the natural habitat and make the wildlife more vulnerable.
Various anthropogenic projects have infiltrated the forests inhabited by numerous tribal communities. Such projects disproportionally affected hundreds of thousands of tribal people, exhibiting the classic example of forced displacement and destruction of wildlife, alongside cultural sites.
Moreover, many evicted tribal households are denied any compensation as their ancestral claim to the land is abruptly denied by the authorities.
In the last two decades, there has been a growing concern among several international conservation-based organizations to protect endangered wildlife species, with a special focus on Asia and Africa. This conservation-versus-people approach to protecting wildlife has worsened the lives of native people and of wildlife.
Tribal communities across India and much of Africa are in a pitiable condition, given their weak land rights and the tyrannical measures adopted by governments to separate tribal people from their ancestral lands and forests.
For thousands and thousands of years, the earth’s original people have faced hard challenges, yet they have managed to survive and conserve their natural environment.
And they are still conserving nature, in spite of modern “so-called civilized” humans, who have been systematically abusing their rights, stripping their lands, confining wildlife to ostensible reserves and protected areas, and violating the forests themselves.
What can help?
Various experts have time and again insinuated that conservation initiatives need the involvement of local tribal communities – rather than the token activism of urban dwellers – to ensure that humans and animals coexist peacefully.
Satellite images and academic studies have shown that indigenous people provide a vital barrier against the degradation of their lands.
It is high time the respective governments and conservation organizations began duly acknowledging the critical role tribal people play in conservation, preservation, and safeguarding the richness of local biodiversity.
Tribal nations can incorporate customary and traditional principles into contemporary laws, so that tribal animal law can begin to untangle from years of colonial entrapment.
Several indigenous tribes around the world derive spiritual value from revering fauna and flora species. Conventional species conservation practices ignore this spiritual value and tribes are often evicted from protected areas.
The spiritual beliefs of various tribes can make them effective conservation stewards. A case study assessing the spiritual value of the Bengal tiger for the Soliga tribe (India) showed how such values can be harnessed as an economic tool for promoting sustainable wildlife conservation.
For a native wildlife reintroduction to work, native habitat is needed, and tribes are to be appreciated for saving so much natural habitat. Various tribes know different things about different native species and how to preserve them as the planet changes.
The Awá are an indigenous people of Brazil living in the eastern Amazon rainforest. They know at least 275 useful plants, and at least 31 kinds of honey-producing bees.
Similarly, Baka people – an ethnic group of nomadic pygmy people inhabiting the rainforests of eastern Cameroon and northern Gabon – have a diverse knowledge of the wildlife. Their traditional knowledge could impart conservational wisdom that the current conservationists lack.
In Botswana, the Bushman communities that have lived in the Kalahari Desert for generations, have been evicted. Meanwhile, the conservation area earlier occupied by them is being mined for diamonds and other non-renewables.
Baiga tribe in India has set up its own project to “save the forest from the forest department” – setting out rules for their own community and outsiders to protect the forest and its biodiversity.
The Baiga stand strictly against hunting tigers, whom they consider their kin; yet, like many tribal people in India, they have been forcibly evicted from their ancestral homeland in the name of “tiger conservation,” while tourists are welcomed in.
Such people, who consider wild animals part of their tribes, can protect endangered flora and fauna far better if they are not forced out of their homelands and are allowed to share their knowledge. There are many more examples of how tribal people are the best conservationists and guardians of the natural world.
Indigenous Tribes Helping in Wildlife Conservation
✦ South Asia’s tribal people have coexisted with the tiger for thousands of years. For instance, Nepal serves as a global tiger conservation success story – the tiger population across the Tarai region has improved remarkably.
Evidently, tiger densities can actually be higher in the areas where people live than in those from where they have been evicted. The tribes provide a variety of different habitats and help detect and deter poachers.
Nepal’s indigenous people who live near national parks that are tiger sanctuaries are the main contributors to tiger conservation.
These communities – including the Tharu, Bote, and Musahar people, among others – have lived alongside wildlife since long before national parks were established, applying their ancestral knowledge to coexist with the natural world.
✦ The cloud forest of Mount Panié faces increasingly severe droughts worsened by climate change as well as invasive species.
To protect the trees they revere, the Indigenous Kanak people of New Caledonia – who depend on these forests for survival – are working to expand the 5,400-hectare (13,300-acre) Mount Panié Wilderness Reserve.
For the past two decades, the tribe has worked with Conservation International to establish a local organization to manage the reserve.
Now they aim to expand the reserve to 10,000 hectares (24,710 acres) to protect the entire kauri tree population, especially dayu biik – a critically endangered subspecies of the thousand-year-old kauri trees.
✦ A modern-day effort to restore the Amazon, powered by tradition, has inspired indigenous communities to restore the forests surrounding their land. The majority of this once-thriving region has been severely degraded – converted into farmland and ravaged by the fires that burned through the Amazon in 2019.
To restore the forests, the indigenous people implemented a technique of sowing a large and varied mixture of seeds native to the area, chosen to produce the highest yield of native vegetation while restoring the soil.
Indigenous people have already helped plant enough seeds to yield more than 1.8 million trees, and the region has seen a range of positive impacts, from improved water quality to increased agricultural production.
This technique could also help tackle climate change by growing more varied native forests – which absorb considerably more carbon than forests with a single type of tree.
✦ A tribal village in Meghalaya, India, has been trying to protect a community forest where western hoolock gibbons dwell. The ever-increasing human presence is gnawing away at the 40-square-kilometer forest, endangering the hoolock gibbons.
The tribes in the region are trying to protect the species and have somewhat succeeded as there hasn’t been a single poaching incident.
The western hoolock gibbon is threatened globally. An estimated 90 percent of its population has been lost over the past 30 years due to deforestation, hunting, and government neglect.
Around 3,000 western hoolock gibbons are believed to remain, some 2,600 of them are in northeastern India, while the rest are scattered in Bangladesh and Myanmar.
✦ In the northern part of Mount Kenya resides the indigenous community of the Il Lakipick Maasai (“People of Wildlife”). The tribe owns and operates the only community-owned rhinoceros sanctuary in the country.
They have managed to eliminate the human-wildlife conflicts that arise in the area due to the intrusion of wild animals searching for water, prey, and pasture during drought.
These indigenous people have reduced bush-cutting in order to provide more fodder for wildlife on their lands, and with that strategy they have ensured peaceful coexistence with the animals.
And there are plenty more such inspirational success stories in which indigenous tribes help wildlife conservation by protecting various threatened and endangered wildlife species.
Their lifestyle and spiritual beliefs play a major role in wildlife conservation, and it is crucial to include their contribution in the process.
Coordination with Tribal People
Coordinating with indigenous tribes that are helping in wildlife conservation is a far better way to protect the environment and wildlife. And if history tells us anything, it is that removing them from the equation only exacerbates the harm.
For instance, when the Maasai were removed from Ngorongoro Crater in Tanzania in 1974, poaching incidents increased; the eviction of indigenous people from Yellowstone Park in the United States in the late 19th century led to overgrazing by elk and bison; Aborigines in Australia have long used controlled burning to protect forests from devastating conflagrations, but in modern times bushfires are ravaging the island continent… and the list goes on.
Indigenous people and local communities have been proactively confronting significant negative effects from global changes in climate, biodiversity, and ecosystem function. It is imperative to acknowledge and utilize the efforts of the tribes for nature.
They have been facing these challenges in partnership with each other and with an array of other stakeholders, through co-management systems and by revitalizing and adapting local management systems.
However, the knowledge and perspectives of indigenous communities remain largely absent from global approaches to conservation.
If conservation is actually going to start working, conservationists need to ask tribal people what help they need to protect their lands, listen to them, and then be prepared to back them up as much as possible.
A major change in thinking about conservation is now urgently required to save the threatened and endangered flora and fauna.
|
A team of researchers affiliated with a large number of institutions in Japan has developed a vaccine that tricks the immune system into removing senescent cells. In their paper published in the journal Nature Aging, the group describes their vaccine, how it works and how effective it was when given to test mice.
Prior research has shown that part of the aging process is the development of senescent cells—cells that outlive their usefulness but fail to die naturally. Instead, they produce chemicals that can lead to inflammation, aging and a host of other ailments. Prior research has shown that senescence occurs when cells stop dividing. Prior research has also shown that senescent cells can lead to tumor growth in some instances and tumor suppression in others. Senescence also plays a role in tissue repair, and its impacts on the body vary depending on factors such as overall health and age. It is suspected that senescence is related to telomere erosion, and in some cases, environmental factors that lead to cell damage. In this new effort, the researchers have developed a vaccine that creates antibodies that attach to senescent cells, marking them for removal by white blood cells.
The team was able to create the vaccine after identifying a protein made in senescent cells but not in healthy active cells. That allowed them to develop a type of vaccine based on the amino acids in the protein. When injected, the vaccine incites the body to produce antibodies that bind only to senescent cells, and that sets off an immune response that involves sending white blood cells to destroy the senescent cells.
|
The National Plan For Teaching Swimming (NPTS) is an ‘all-inclusive programme’ which takes the non-swimmer from his or her first splash to developing confidence and competence in the water. The National Governing Body for Swimming, Swim England, has produced a national syllabus for Aquatics, in order to equip learn to swim providers with the training and tools to deliver a multi aquatic, multi skill programme.
The swimmer’s ‘journey through aquatics’ following the NPTS will result in the development of a wide range of skills which are, if you like, the pieces of a jigsaw; when put together, they result in a competent, confident and safe swimmer who has the skill base for then developing technique in a wide range of water-based sports.
The most successful way for children to acquire these skills is through an environment involving fun and games. Games are an ideal way for children to develop their ‘jigsaw’ of skills and may even help to combine one skill with another to support the process of ‘building’ the ‘jigsaw’, which ultimately results in a stroke such as Front Crawl, Breaststroke, Backstroke or Butterfly, as well as skills that may become transferable to another aquatic sport or a land-based sport. Examples include throwing and catching, which develop Water Polo skills but are also transferable to sports such as netball and basketball; a skill such as the somersault may develop a swimmer’s ability to take part in Synchronised Swimming, but would also serve in gymnastics. There are many examples of transferable skills; however, without basic water competence it would be impossible for a swimmer to reach this stage.
A child develops their basic movement range during the ages of 5 years to 8 years for females, and 6 years to 9 years for males. During this stage of ‘growing up’, children should be taking part in activity that builds their FUNdamental movement skills, with the emphasis being learning through fun. With reference to this, children need to build up skills that fall under specific categories; in aquatics these are aquatic specified.
The swimmers ‘journey through aquatics’ starts with FOUNDATION, a programme for developing early years water confidence, which is encouraged through sessions such as ‘parent and baby’ and ‘pre-school’ sessions. The emphasis is on the development of very basic motor skills and introduction of water and the swimming environment through fun and games.
The next stage along the ‘journey’ takes the swimmer through FUNDAMENTAL MOVEMENT SKILLS, Stages 1–7 of the NPTS.
Listed below are the stages with a brief overview for each stage.
Stage 1: Developing basic safety awareness, the ‘class’ scenario, basic movement skills and water confidence skills. Swimmers may use aids, e.g. arm bands, floats etc.
Stage 2: Developing safe entries to the water, including jumping in, basic floating, travel, and rotation unaided to regain upright positions. Swimmers may use aids, e.g. arm bands, floats etc.
Stage 3: Developing safe entries including submersion, travel up to 10 metres on the front and back, progressing rotation skills and water safety knowledge.
Stage 4: Developing the understanding of buoyancy through a range of skills, refining kicking technique for all strokes, and swimming 10 metres to a given standard as directed by the ASA.
Stage 5: Developing ‘watermanship’ through sculling and treading-water skills and complete rotation, also performing all strokes to the given standard as directed by the ASA.
Stage 6: Developing effective swimming skills including coordinated breathing; developing the water safety aspects and understanding of preparation for exercise.
Stage 7: All strokes must be proficient and adhere to ASA law. Improving efficiency of strokes over an increasing distance to improve stamina. Water skills include starts, turns and finishes.
Diving: Developing a wide range of diving and water entries, including sitting dives, straddle and tuck jumps and competitive diving. Divers will learn how to dive safely, efficiently and to ASA law.
Synchronised Swimming: Introduction to ‘synchro’ strokes, skills and laws. Children will get the opportunity to create their own short routines to music, which allows for creativity and expression whilst exercising.
Water Polo: Introduction to water polo specific strokes, ball skills, rules and stamina. Water polo is a non-contact, 7-a-side team game which requires a lot of skill, technique and stamina. Highly recommended for those who enjoy team games and fun.
|
Welding is one of the most fundamental manufacturing processes, affording an efficient, versatile, and cost-effective method of joining two pieces of metal.
Though there are many different welding techniques, each presenting its own benefits, the core concept of each process is the same—a welder applies controlled pressure and heat to create a firm bond between two components.
Despite rapid advancements in machining and metalwork processes, welding remains a mainstay of virtually every industry that relies on metalwork. As common as it is, however, welding requires a high degree of technical ability.
An experienced welding professional can not only guide you toward the correct welding process for your project, but also execute that technique with precision and skill.
See our full gallery for videos and past projects in welding.
Most Common Welding Techniques
Welding techniques are mainly differentiated by the equipment and gases used, as well as by the characteristics of the resulting bonds. Three of the most common options are:
- Tungsten Inert Gas (TIG) Welding, also known as Gas Tungsten Arc Welding (GTAW)
- Metal Inert Gas (MIG) Welding, also known as Gas Metal Arc Welding (GMAW)
- Shielded Metal Arc Welding (SMAW), also known as Stick Welding
TIG welding is extremely versatile but requires a high degree of technical competence. The welder must simultaneously feed a rod while also operating a TIG torch. The electrode on this torch heats the base metal to form weld beads that hold the two pieces together.
TIG welding is one of the most popular techniques today, allowing for clean welds with high degrees of purity. TIG welding also creates an aesthetically pleasing finish thanks to the welding beads, and it is useful for many industries and applications due to the range of suitable materials—which includes most conventional metals and alloys.
MIG welding is a comparatively simple technique that is nonetheless valued for its ability to create extremely strong bonds with minimal waste. The technique involves feeding an electrode through a welding gun to form an electric arc between the base material and the electrode. This arc is what heats the material sufficiently to melt and join two pieces.
MIG welding is popular in the automotive and maritime industries because of the strength of its bonds. It is also useful for joining thinner materials, especially when using bare wire rather than a flux core.
Stick welding is one of the more traditional welding techniques, although it still retains benefits for many modern applications. It is a lower-cost, portable alternative to the other methods, and it can be used with trickier base materials—such as rusted metals—that would prevent a MIG or TIG approach.
Like MIG, stick welding uses a consumable electrode that is melted by an electric arc. Unlike MIG welding or TIG welding, however, stick welding does not require the use of a shielding gas, which is otherwise used to protect the weld puddle as it forms. Stick welding does create a byproduct layer of slag that must be removed, which presents an additional step when compared to other techniques. Otherwise, it is a relatively simple process that holds its own in heavy industrial jobs.
Industry Leaders in Welding and Fabrication
Because welding is so ubiquitous, we have experience working with a vast array of industries. Some of our most frequently served sectors include:
- Institutional equipment
The success of any welding project depends heavily on the ability and knowledge of the welder. No matter how complex your needs, our skilled team of fabricators will work with you to identify and apply the techniques that best suit your materials, budget, and timeline.
Contact us to learn more about the advantages of collaborating with industry-leading professionals on your next metalwork project.
|
Shiromani Akali Dal
Shiromani Akali Dal (SAD), English Supreme Akali Party, also called Akali Dal, regional political party in Punjab state, northwestern India. It is the principal advocacy organization of the large Sikh community in the state and is centred on the philosophy of promoting the well-being of the country’s Sikh population by providing them with a political as well as a religious platform. The party also has a presence on the national political scene in New Delhi.
The precursor to the present-day SAD was an organization established in December 1920 to help guide the quasi-militant Akali movement of the early 1920s, in which Sikhs demanded and (through the Sikh Gurdwara Act of 1925) won from the ruling British authorities in India control over the gurdwaras (Sikh houses of worship). The present-day SAD, which has claimed to be the oldest regional political party in India, has also controlled Sikh religious institutions such as the Shiromani Gurdwara Prabandhak Committee (SGPC) and, more recently, the Delhi Sikh Gurdwara Management Committee. From the mid-1920s the SAD was a part of the Indian independence movement, and its members participated in the protests and civil-disobedience programs (satyagraha) of Mohandas K. Gandhi and the Indian National Congress (Congress Party). Although the SAD remained committed to the broader objectives of Indian independence from Britain, its primary mission remained the promotion and protection of the rights of the Sikh minority.
The SAD first contested elections as a political party in 1937, after the Government of India Act of 1935 had authorized the creation of provincial assemblies in British India. With Indian independence achieved in 1947, the SAD spearheaded the movement to create a separate state for the Punjabi-speaking and largely Sikh populace of northwestern India. The movement finally realized its goal when the state of Punjab was divided in 1966, the southeastern portion of it becoming the predominantly Hindi-speaking state of Haryana.
In 1967, in the first legislative assembly elections for the newly configured Punjab state, the SAD won fewer than one-fourth of the total number of seats but was able to cobble together a broad coalition of non-Congress parties to form the state government. Conflicts and power struggles within the party, however, led to the government’s fall within months. In the 1969 assembly elections, the SAD won more seats than it had in 1967, but it was still short of a majority and again formed a coalition government—this time with the Bharatiya Jana Sangh party (a pro-Hindu forerunner of the Bharatiya Janata Party [BJP]). That government was also short-lived, again marked by intraparty fighting and frequent leadership changes that culminated in the dissolution of the government in mid-1971 and a period of rule by the central government in New Delhi. The SAD lost badly in the 1972 assembly elections, and the Congress Party, with a majority of seats, formed the government.
Over the next several years, the SAD attempted to rebuild and to reestablish itself as the sole representative of the Sikh community. The party nonetheless underwent divisions, with several splinter groups claiming the mantle of the true SAD. The party did win a majority of seats in the 1977 state assembly elections and formed a government, with Parkash Singh Badal as chief minister (head of government). It was Badal’s second term in the office, as he had served in 1970–71, during the first SAD-led government.
The party again lost to Congress in the 1980 state assembly elections. Also at that time, a growing number of Sikhs were agitating for greater autonomy, and some were resorting to violent means to promote their demands. In 1982 the main militant leader, Jarnail Singh Bhindranwale, and his armed followers occupied the Harmandir Sahib (Golden Temple) in Amritsar. They were forcefully evicted in June 1984 by the Indian military, and Bhindranwale was killed during the operation. There followed a period of violence in Punjab and elsewhere in India that included the assassination of Prime Minister Indira Gandhi by her Sikh bodyguards at the end of October.
Despite continuing factionalism in the SAD, the party won a large majority of seats in the 1985 assembly elections and formed a government in the state that lasted for almost two years before central rule from New Delhi was reimposed. The party boycotted the 1992 assembly elections, and the Congress Party emerged victorious. Meanwhile, Badal, leader of the largest of the various SAD factions, became president of the party in 1996. The party won another large majority of seats in the 1997 assembly elections and formed the government, with Badal serving his third term as chief minister. After again losing to Congress in the 2002 assembly polls, the SAD—in alliance with the BJP—won in 2007; Badal commenced his fourth term as chief minister. The alliance retained power in 2012, with Badal continuing as chief minister. However, in 2008 he had stepped down as president of the party and been succeeded in that post by his son, Sukhbir Singh Badal.
The SAD maintained a modest presence in the Lok Sabha (lower chamber of the Indian parliament), often consisting of only a small handful of seats from Punjab constituencies. Its highest seat total was nine in the 1977 elections, and it garnered eight in the 1996, 1998, and 2004 contests. The party’s total was reduced to four seats in both the 2009 and 2014 elections. For many years the party remained unaligned with any of the national parties, but in 1998 it joined the BJP-led National Democratic Alliance coalition that ruled the country from 1998 to 2004. During that time the SAD was able to exert some influence on policy at the national level, especially with regard to India’s relations with Pakistan, with which Punjab shared a long international border. The party maintained its alliance with the BJP into the 21st century, and, following the BJP’s landslide win in 2014, SAD member Harsimrat Kaur Badal (the wife of Sukhbir Singh Badal) was named to the cabinet of Prime Minister Narendra Modi.
|
One of the most important things any educator can do is foster an environment where learning and playing are nearly synonymous. Lessons that are fun and engaging are far more likely to result in better outcomes, both in the short and long terms. Game-based learning can promote a desire to learn outside the classroom while transforming the classroom from a place a student must be to a place they want to be.
Increasing Focus and Improving Engagement
Essential to any learning exercise, keeping a student focused and engaged can be much more difficult than it sounds. Kids can memorize hundreds of cards for the latest card-battle game, but have trouble retaining important information long enough to pass a test. Some of this gap comes down to incentive: students have clear reasons to learn the characters of their favorite games. In a student’s eyes, the same cannot necessarily be said for grammar facts, mathematical equations, and other less-than-exciting subjects. Gameplay can draw a student’s focus, and keep their attention, much longer and more easily than boring words on the pages of a book.
Fostering Creativity in the Classroom
A forced, stilted approach to teaching every student in the same way has begun to give way to a more nuanced, inclusive style. This changing landscape affords educators a greater opportunity to foster creativity in the classroom. Using games for learning is an important tool in allowing a student to find their own style of learning. And while not every student will respond in the same way to games, or even to every outcome of those games, using games for learning can give educators a way to help students come up with creative resolutions to problems, whether in the game or as part of other classroom situations.
Learning Through Play
There is evidence that games can have positive effects on reading, reasoning skills, and mathematics achievement. Educators have already begun exploring the potential of games for learning, and in the process have helped students explore knowledge in new and fun ways. Using games for learning can help foster greater creativity among students who may not have otherwise engaged with a variety of lessons. With the right game, educators can really tap into the learning potential of every student. However, as effective as gameplay can be, much depends on the construction of the game and the educator’s ability to use it effectively. Games can be a useful tool, but educators must be willing to be flexible with lessons and avoid setting rigid goals.
Read More About the Potential of Games for Creative Learning
- How ‘Dungeons & Dragons’ Primes Students for Interdisciplinary Learning, Including STEM – Paul Darvasi
- Learning by Tinkering – Matthew Farber
- The Minecraft Effect – Mimi Ko Cruz
- Minecraft and The Future of Transmedia Learning – Barry Joseph
- Language, Gaming and Possibilities – Antero Garcia
- Level Up Learning: A National Survey on Teaching with Digital Games – Joan Ganz Cooney Center
|
Dermatitis is a general term that describes an inflammation of the skin. Dermatitis can have many causes and occurs in many forms. It usually involves an itchy rash on swollen, reddened skin.
Skin affected by dermatitis may blister, ooze, develop a crust or flake off. Examples of dermatitis include atopic dermatitis (eczema), dandruff and rashes caused by contact with any of a number of substances, such as poison ivy, soaps and jewelry with nickel in it.
Dermatitis is a common condition that’s not contagious, but it can make you feel uncomfortable and self-conscious. A combination of self-care steps and medications can help you treat dermatitis.
Each type of dermatitis may look a little different and tends to occur on different parts of your body. The most common types of dermatitis include:
- Atopic dermatitis (eczema). Usually beginning in infancy, this red, itchy rash most commonly occurs where the skin flexes — inside the elbows, behind the knees and the front of the neck. When scratched, the rash can leak fluid and crust over. People with atopic dermatitis may experience improvement and then flare-ups.
- Contact dermatitis. This rash occurs on areas of the body that have come into contact with substances that either irritate the skin or cause an allergic reaction, such as poison ivy, soap and essential oils. The red rash may burn, sting or itch. Blisters may develop.
- Seborrheic dermatitis. This condition causes scaly patches, red skin and stubborn dandruff. It usually affects oily areas of the body, such as the face, upper chest and back. It can be a long-term condition with periods of remission and flare-ups. In infants, this disorder is known as cradle cap.
When to see a doctor
See your doctor if:
- You’re so uncomfortable that you are losing sleep or are distracted from your daily routines
- Your skin becomes painful
- You suspect your skin is infected
- You’ve tried self-care steps without success
A number of health conditions, allergies, genetic factors and irritants can cause different types of dermatitis:
- Atopic dermatitis (eczema). This form of dermatitis is likely related to a mix of factors, including dry skin, a gene variation, an immune system dysfunction, bacteria on the skin and environmental conditions.
- Contact dermatitis. This condition results from direct contact with one of many irritants or allergens — such as poison ivy, jewelry containing nickel, cleaning products, perfumes, cosmetics, and even the preservatives in many creams and lotions.
- Seborrheic dermatitis. This condition may be caused by a yeast (fungus) that is in the oil secretion on the skin. People with seborrheic dermatitis may notice their condition tends to come and go depending on the season.
A number of factors can increase your risk of developing certain types of dermatitis. Examples include:
- Age. Dermatitis can occur at any age, but atopic dermatitis (eczema) usually begins in infancy.
- Allergies and asthma. People who have a personal or family history of eczema, allergies, hay fever or asthma are more likely to develop atopic dermatitis.
- Occupation. Jobs that put you in contact with certain metals, solvents or cleaning supplies increase your risk of contact dermatitis. Being a health care worker is linked to hand eczema.
- Health conditions. You may be at increased risk of seborrheic dermatitis if you have one of a number of conditions, such as congestive heart failure, Parkinson’s disease and HIV.
Scratching the itchy rash associated with dermatitis can cause open sores, which may become infected. These skin infections can spread and may very rarely become life-threatening.
Avoiding dry skin may be one factor in helping you prevent dermatitis. These tips can help you minimize the drying effects of bathing on your skin:
- Take shorter baths or showers. Limit your baths and showers to 5 to 10 minutes. And use warm, rather than hot, water. Bath oil also may be helpful.
- Use nonsoap cleansers or gentle soaps. Choose fragrance-free nonsoap cleansers or mild soaps. Some soaps can dry your skin.
- Dry yourself carefully. After bathing, brush your skin rapidly with the palms of your hands, or gently pat your skin dry with a soft towel.
- Moisturize your skin. While your skin is still damp, seal in moisture with an oil or a cream. Try different products to find one that works for you. Ideally, the best one for you will be safe, effective, affordable and unscented.
|
Children ice skating in Khakassia, Russia, react to the fall of a bright fireball two nights ago, on Dec. 6.
In 1908 it was the Tunguska event, when a meteorite exploded in mid-air, flattening 770 square miles of forest. Thirty-nine years later, in 1947, 70 tons of iron meteorites pummeled the Sikhote-Alin Mountains, leaving more than 30 craters. Then, a day before Valentine’s Day in 2013, hundreds of dashcams recorded the fiery and explosive entry of the Chelyabinsk meteoroid, which created a shock wave strong enough to blow out thousands of glass windows and litter the snowy fields and lakes with countless fusion-crusted space rocks.
Documentary footage from 1947 of the Sikhote-Alin fall and how a team of scientists trekked into the wilderness to find the craters and meteorite fragments
Now, on Dec. 6, another fireball blazed across Siberian skies, briefly illuminating the land like a sunny day before breaking apart with a boom over the town of Sayanogorsk. Given its brilliance and the explosions heard, there’s a fair chance that meteorites may have landed on the ground. Hopefully a team will attempt a search soon. As long as it doesn’t snow too soon after a fall, black stones and the holes they make in snow are relatively easy to spot.
OK, maybe Siberia doesn’t get ALL the cool fireballs and meteorites, but it’s done well in the past century or so. Given the dimensions of the region — it covers 10% of the Earth’s surface and 57% of Russia — I suppose it’s inevitable that over so vast an area, regular fireball sightings and occasional monster meteorite falls would be the norm. For comparison, the United States covers only 1.9% of the Earth. So there’s at least a partial answer. Siberia’s just big.
Every day about 100 tons of meteoroids, which are fragments of dust and gravel from comets and asteroids, enter the Earth’s atmosphere. Much of it gets singed into fine dust, but the tougher stuff — mostly rocky, asteroid material — occasionally makes it to the ground as meteorites. Every day then our planet gains about a blue whale’s weight in cosmic debris. We’re practically swimming in the stuff!
Most of this mass is in the form of dust, but a study done in 1996 and published in the Monthly Notices of the Royal Astronomical Society further broke down that number. In the 10 gram (weight of a paperclip or stick of gum) to 1 kilogram (2.2 lbs) size range, 6,400 to 16,000 lbs. (2,900–7,300 kilograms) of meteorites strike the Earth each year. Yet because the Earth is so vast and, appearances to the contrary, largely uninhabited, only about 10 each year are witnessed falls that are later recovered by enterprising hunters.
A couple more videos of the Dec. 6, 2016 fireball over Khakassia and Sayanogorsk, Russia
Meteorites fall in a pattern from smallest first to biggest last to form what astronomers call a strewnfield, an elongated stretch of ground several miles long shaped something like an almond. If you can identify the meteor’s ground track, the land over which it streaked, that’s where to start your search for potential meteorites.
Meteorites indeed fall everywhere and have for as long as Earth’s been rolling around the sun. So why couldn’t just one fall in my neighborhood or on the way to work? Maybe if I moved to Siberia …
|
Jury-rig (also known as jury rigging) was first known to be used in 1788, and its origin is from sailing ships.
Jury… meaning: to make use of any material at hand to make a repair.
Rig… meaning: to make repairs to a ship’s rigging.
More at Wikipedia
From AC6V’s website:
Jury-rig is based on the word “jury”, used in a nautical sense meaning ‘makeshift; temporary’, and the word “rig”, referring to a ship’s sails and masts. The first known example of this “jury” is the compound jury-mast, ‘a temporary mast put up to replace one that has been broken or lost’, attested since the early seventeenth century. A jury-rig, then, is ‘a temporary or makeshift rigging’, and the verb is used figuratively in the sense ‘to assemble or arrange hastily in a makeshift manner’. The origin of this word “jury” is not certain, but some scholars identify it with iuwere, a late Middle English word meaning ‘help; aid’, borrowed from the Old French ajurie.
|
Buddhism and Jainism
Causes of Origin
- The Kshatriya reaction against the domination of the priestly class called Brahmanas. Mahavira and Gautama Buddha both belonged to the Kshatriya clan.
- Indiscriminate killing of cattle for Vedic sacrifices and for food had led to the destabilization of the new agricultural economy which was dependent on cattle for ploughing the fields. Both Buddhism and Jainism stood against this killing.
- The growth of cities, along with the increased circulation of punch-marked coins and the expansion of trade and commerce, had added to the importance of the Vaishyas, who looked for a new religion to improve their position. Jainism and Buddhism accommodated their needs
- The new forms of property created social inequalities and the common people wanted to get back to their primitive form of life
- Growing complexity and degeneration of Vedic religion.
How Jainism and Buddhism Differed from the Vedic Religion
- They did not attach any importance to the existing Varna system
- They preached the Gospel of non-violence
- They accepted Vaishyas, including the Moneylenders who were condemned by Brahmanas
- They preferred simple, puritan and ascetic living
Gautama Buddha and Buddhism
Gautama Buddha was born in 563 BC into the republican clan of the Shakyas at Lumbini, near Kapilavastu. His mother was a princess of the Kosalan dynasty.
At the age of 29, the Four Sights moved Buddha to the path of renunciation. They were:
- An old man
- A diseased person
- An ascetic
- A dead person
Important events in the life of Buddha and their symbols
- Birth – Lotus and Bull
- Great Renunciation – Horse
- Enlightenment – Bodhi Tree
- First Sermon – Dharmachakra (wheel)
- Death (Parinirvana) – Stupa
Doctrines of Buddhism
- Four noble truths
- Dukkha – life is full of sorrow
- Samudaya – there are causes for sorrow
- Nirodha – sorrow can be stopped
- Nirodha-gamini pratipada – the path leading to the cessation of sorrow
- Ashtangika Marga
- Right observation
- Right determination
- Right exercise
- Right action
- Right speech
- Right memory
- Right meditation
- Right livelihood
- Madhya Marga (Middle Path) – to avoid the extremes of both luxury and austerity
- Triratna – Buddha, Dharma and Sangha
Special features of Buddhism and the causes of its spread
- Buddhism does not recognize the existence of god and soul
- Women were also admitted to the Sangha. Sangha was open to all, irrespective of caste and sex
- Pali language was used which helped in the spread of Buddhist doctrines among the common people
- Ashoka embraced Buddhism and spread it to Central Asia, West Asia and Sri Lanka
- Buddhist Councils
First Council: The first council was held in 483 BC at the Saptaparni caves near Rajgriha in Bihar under the patronage of King Ajatashatru. During this council two Buddhist works were compiled: the Vinaya Pitaka, recited by Upali, and the Sutta Pitaka, recited by Ananda.
Second Council: The second council was held in 383 BC at Vaishali under the patronage of King Kalashoka.
Third Council: The third council was held in 250 BC at Patliputra under the patronage of King Ashoka the Great. During this council the Abhidhamma Pitaka was added, completing the Buddhist holy canon, the Tripitaka.
Fourth Council: The fourth council was held in 78 AD at Kundalvan in Kashmir under the patronage of King Kanishka. During this council Buddhism split into the Hinayana and Mahayana schools.
Causes of the decline of Buddhism
- Buddhism succumbed to the rituals and ceremonies which it had originally denounced
- They gave up Pali and took to Sanskrit. They began to practice idol worship and received numerous offerings from devotees
- Monasteries came under the domination of ease-loving people and became the centre of corrupt practices
- Vajrayana form started to develop.
- Buddhists came to look upon women as objects of lust.
Importance and influence of Buddhism
Buddhist literature:
- Sutta Pitaka – Buddha’s sayings
- Vinaya Pitaka – Monastic code
- Abhidhamma Pitaka – religious discourses of Buddha
- Milindapanho – a dialogue between King Menander and the Buddhist monk Nagasena
- Dipavamsha and Mahavamsha – the great chronicles of Sri Lanka
- Buddhacharita by Ashvagosha
- Hinayana (Lesser Vehicle) - They believe in attaining Nirvana by following the original teachings of Gautam Buddha. They do not believe in idol worship, and Pali was used in the Hinayana texts
- Mahayana (Greater Vehicle) - They believe that Nirvana is attained by the grace of Gautam Buddha and the Bodhisattvas rather than by following his teachings alone. They believe in idol worship, and Sanskrit was used in the Mahayana texts
- Vajrayana - They believe that Nirvana is attained by the help of magical tricks or black magic.
Prominent Bodhisattvas:
- Avalokitesvara or Padmapani
- Maitreya (Future Buddha)
- Places of Worship – Stupas contain the relics of Buddha or Bodhisattvas; Chaityas are prayer halls, while Viharas are the residences of monks
- Development of Cave architecture eg. Barabar caves in Gaya
- Development of Idol worship and sculptures
- The growth of universities of par excellence which attracted students from all over the world
Jainism
- Jainism believes in 24 Tirthankaras, with Rishabdev being the first and Mahavira, a contemporary of Buddha, being the 24th Tirthankara.
- The 23rd Tirthankar Parshwanath (Emblem: Snake) was the son of King Ashvasena of Banaras.
- The 24th and the last Tirthankar was Vardhman Mahavira (Emblem: Lion).
- He was born in Kundagram (Distt Muzaffarpur, Bihar) in 599 BC.
- His father Siddhartha was the head of Jnatrika clan. His mother was Trishla, sister of Lichchavi Prince Chetak of Vaishali.
- Mahavira was related to Bimbisara.
- Married to Yashoda, had a daughter named Priyadarsena, whose husband Jamali became his first disciple.
- At 30, after the death of his parents, he became an ascetic.
- In the 13th year of his asceticism (on the 10th of Vaishakha), outside the town of Jrimbhikgrama, he attained supreme knowledge (Kaivalya).
- From now on he was called Jaina or Jitendriya and Mahavira, and his followers were named Jains.
- He also got the title of Arihant, i.e., worthy. At the age of 72, he passed away at Pava, near Patna, in 527 BC.
Five vows of Jainism
- Ahimsa – non-violence
- Satya – do not speak a lie
- Asteya – do not steal
- Aparigraha – do not acquire property
- Brahmacharya – celibacy
Three main principles – Triratna of Jainism
- Right faith – Samayak Shradha
- Right Knowledge – Samayak Jnan
- Right Conduct – Samayak karma
Five types of knowledge
- Mati jnana
- Shruta jnana
- Avadhi jnana
- Manahparayaya Jnana
- Keval Jnana
Jain Councils
- 1st Council at Patliputra under the patronage of Chandragupta Maurya in 300 BC, during which the 12 Angas were compiled
- 2nd Council at Vallabhi in 512 AD, during which the final compilation of the 12 Angas and 12 Upangas was done
Jain Sects
- Shwetambars (led by Sthulabhadra) – those who put on white robes; they stayed back in the North during the times of famine
- Digambars (led by Bhadrabahu) – those who made an exodus to the Deccan and the South during the Magadhan famine; they keep a naked attire
Jain literature used Prakrit, the common language of the people, rather than Sanskrit. In this way, Jainism reached far and wide among the people. The important literary works are
- 12 Angas
- 12 Upangas
- 10 Parikramas
- 6 Chhedsutras
- 4 Mulasutras
- 2 Sutra Granthas
- Part of Sangam literature is also attributed to Jain scholars.
|
Picture from the CERN 2-metre hydrogen bubble chamber exposed to a beam of negative kaons (K−) with energy 4.2 GeV. This piece corresponds to about 50 cm in the bubble chamber.
The spectacular knock-on spiralling electron shows that negative particles curve to the right. The collision between the beam and the target proton produces two outgoing charged tracks, one positive and one negative.
Educationally, this event is of interest because it provides a way of discussing momentum conservation in a qualitative way. Both outgoing tracks have low momentum compared with that of the beam (they are much more curved); one goes to the right and the other to the left of the beam. So, to balance momentum, we need one or more neutral particles whose combined momentum is nearly that of the beam.
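The qualitative argument can be written as a one-line vector balance (a schematic sketch, not values measured from this photograph; the subscripts + and − label the two outgoing charged tracks):

```latex
\vec{p}_{K^-} \;=\; \vec{p}_{+} + \vec{p}_{-} + \vec{p}_{\text{neutral}}
\quad\Longrightarrow\quad
\vec{p}_{\text{neutral}} \;=\; \vec{p}_{K^-} - \vec{p}_{+} - \vec{p}_{-}
```

Because the two charged momenta are small (tight curvature) and point to opposite sides of the beam, their sum falls far short of the beam momentum, so the neutral particle or particles must carry away nearly all of it – exactly the conclusion drawn above.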
|
TAL-effectors are modular, DNA-reading proteins that can be used to edit DNA in living cells
Nature is full of surprises, and sometimes you can find treasures hidden in the most unlikely places. A few years ago, scientists found one of these treasures in a bacterium that attacks plants: a modular protein that can read the sequence of nucleotides in DNA. Structural understanding of this protein opens the door to all manner of applications in medicine and biotechnology. We can now customize a protein to read any DNA sequence that we desire, and thus target the protein to specific places in a genome. Already, these sequence-reading proteins are being used to create the tools for genome editing, as a possible way to correct genetic diseases.
A TALE of Battle
These proteins are termed TAL effectors, short for transcription activator-like effectors. Several types of bacteria inject these proteins into plant cells, where they travel to the nucleus and activate genes that make the plant more susceptible to infection. Some types of these bacteria just build a few TAL effectors, others build and inject several dozen. As is often the case, however, some plant cells have evolved a way to fight back, and when they're injected with TAL effectors, they instead activate specific resistance genes that protect the plant.
Modular DNA Readers
TAL effectors are composed of small modules, about 34 amino acids long, that are repeated many times in a row. Each of these modules reads one nucleotide when the TAL effector binds to DNA. The protein shown here, which is from a bacterium that infects rice, has 23 of these modules. The crystallographic structure (PDB entry 3ugm) includes the DNA-binding portion. The entire protein also includes a portion that targets the protein to the nucleus and another portion that activates genes once it gets there.
In the few short years since their discovery, researchers have put these DNA-reading proteins to good use. For instance, they have engineered TALE nucleases (TALEN) by attaching the DNA-cutting domain of FokI nuclease (PDB entry 1fok, shown here in green) to one end of a TAL effector. FokI needs to form a dimer to cut DNA, so the TALEN only becomes active when two of them bind to the proper DNA target sequence. Then, it breaks both strands. This can be used to knock out a specific gene, or to stimulate the natural DNA repair methods that are present in cells, which may be coaxed into inserting a new engineered gene while they are making the repair. This approach has been so successful that researchers are also exploring other types of molecules to recognize the DNA, such as zinc fingers and CRISPR.
Exploring the Structure
TAL Effector and DNA (PDB entry 3v6t)
By comparing TAL effectors from different bacteria, researchers have discovered modules for reading each of the bases in DNA, as well as a few modified forms of the bases. Each module is composed of a little bundle of two alpha helices. An amino acid at the inner edge of the module does the reading by contacting the edge of the nucleotide in the DNA, and a neighboring amino acid (not shown here) helps to position it. PDB entry 3v6t includes an engineered TAL effector with three types of modules. A module with aspartate, shown in red, forms a specific interaction with cytosine. However, a smaller glycine, shown in blue, makes a favorable interaction with the large methyl group of thymine. Serine, shown in green, forms a bond with adenine. Additional research is uncovering other modules to read the remaining base, guanine.
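To make the modular logic concrete, here is a small Python sketch that assigns a module to each base of a target sequence, using the single reading residues named above (aspartate for C, glycine for T, serine for A). Everything in it – the function name, the dictionary, the placeholder for guanine – is an illustrative invention, and real TALE engineering specifies each base with a two-residue code, so treat this as a toy model of the idea rather than a design tool.

```python
# Toy model of TAL-effector modularity: one module per base of the
# target sequence, identified by the reading residue described in the
# text. The guanine entry is left as a placeholder because, as noted
# above, G-reading modules were still being characterized.

READING_RESIDUE = {
    "C": "aspartate",      # specific contact with cytosine
    "T": "glycine",        # small residue fits thymine's methyl group
    "A": "serine",         # forms a bond with adenine
    "G": "(under study)",  # hypothetical placeholder; see text
}

def design_tal_array(target: str) -> list[str]:
    """List, base by base, the module needed to read a DNA target."""
    modules = []
    for base in target.upper():
        if base not in READING_RESIDUE:
            raise ValueError(f"not a DNA base: {base!r}")
        modules.append(f"{base} -> {READING_RESIDUE[base]} module")
    return modules

if __name__ == "__main__":
    # A 5-base toy target; a real TAL effector carries one module per
    # base over a much longer sequence (the rice pathogen's has 23).
    for line in design_tal_array("CTACT"):
        print(line)
```

The point of the sketch is the one-module-per-base architecture: to retarget the protein you swap modules, rather than redesigning the whole fold.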
Topics for Further Discussion
- You can see the structure of an engineered TAL effector before it binds to DNA in PDB entry 3v6p.
- Each TAL effector module is composed of two alpha helices with a small kink in one, which improves the packing between the two. Try displaying all prolines in these TAL effector structures to see the kink. Each module also includes a lysine and a glutamine that form non-specific interactions with the DNA backbone--see if you can find them.
References
- D. Deng, C. Yan, J. Wu, X. Pan & N. Yan (2014) Revisiting the TALE repeat. Protein Cell 5, 297-306.
- E. L. Doyle, B. L. Stoddard, D. F. Voytas & A. J. Bogdanove (2013) TAL effectors: highly adaptable phytobacterial virulence factors and readily engineered DNA-targeting proteins. Trends in Cell Biology 23, 390-398.
- T. Gaj, C. A. Gersbach & C. F. Barbas (2013) ZFN, TALEN, and CRISPR/Cas-based methods for genome engineering. Trends in Biotechnology 31, 397-405.
- 3v6t, 3v6p: D. Deng, C. Yan, X. Pan, M. Mahfouz, J. Wang, J. K. Zhu, Y. Shi & N. Yan (2012) Structural basis for sequence-specific recognition of DNA by TAL effectors. Science 335, 720-723.
- 3ugm: A. N. S. Mak, P. Bradley, R. A. Cernadas, A. J. Bogdanove & B. L. Stoddard (2012) The crystal structure of TAL effector PthXo1 bound to its DNA target. Science 335, 716-719.
December 2014, David Goodsell
|
The year is 1450 CE. A messenger chewing coca leaves for energy is hurrying down a well-paved road in the Andes, delivering tribute and records from a distant corner of the Incan Empire to officials at the capital. Both the tribute and records are made of woven cloth. The tribute--a symbol of a provincial potentate's fealty to the emperor--consists of several intricate textiles that represent the cosmology of the Incas and the central role of the emperor with geometric patterns, colors, and animal symbols. Some of these cloths will adorn civil and military elites, others may be burnt as religious offerings, and many will be used to clothe the revered and mummified dead. The official records contain information on the quantities of various stored goods, the services provided, and tributes paid. All of this data is stored in bundles of knotted strings, known as khipus, that will be stored at the capital for future reference.
Weaving held (and continues to hold) an unparalleled degree of importance among the indigenous peoples of the Andes, who have woven cotton since 2000 BCE and camelid wool by 1000 BCE, and textiles have been integral to the societies of all four major Andean polities (the Recuay, Huari, Tiwanaku, and Inca). The thread that the Incas spun was finer than most machines can produce today, and the immense amount of labor required to make a single textile was almost as important to its value as the end product (Rodman and Cassman 1995).
As with most other aspects of pre-Columbian societies, the Spanish conquest did much to alter both how woven goods were made as well as what they symbolized to Andean Indians. Khipus like those seen in the sources were burned by the thousands and the intricacies of their "woven" language has been forgotten. According to Karen B. Graubart, a historian of Latin America, the Spanish conquest even reshaped how Andean societies divided labor between men and women. Due to Spanish assumptions about gender roles and new demands for labor and textiles, weaving, once the task of both sexes (and sometimes even exclusively male), became relegated to the distaff and associated with low social status (Graubart 2000).
Despite the changes brought by the Spanish, homemade cloth remains central to the rituals, cosmology, and identity of rural populations in the Andes. Animal symbols and patterns, though changing with contemporary fashions and synthetic dyes, continue to reflect the structure of the universe and the dualism (e.g. between light and dark) that is a recurring theme in the Andean worldview (as in the source "Feather and Cotton Shirt"). Anthropologist Katharine E. Seibold held that these modern textiles express nativism, pride in their Incan heritage and its spiritual beliefs. She argues that it does not matter whether or not modern weaving actually is Incan in origin; what is important is that in the contemporary sense it is considered Incan, a way of identifying themselves with indigenous, not western, traditions (Seibold 1992).
Similarly, khipus are still actively preserved in a few Peruvian villages, although their meaning remains largely unknown. The difficulty, according to anthropologist Frank Salomon, is that the khipus are written in an inscribed language, one that does not relate directly to any actual spoken language and is entirely self-referential. Although the numerical system of many Incan khipus (explained in more detail in the sources) has been decoded, the stories they might tell are unknown (Salomon 2004).
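The decimal bookkeeping of accounting khipus, which has been decoded, lends itself to a short sketch. Assuming the standard reading – knot clusters arranged in tiers along a pendant cord, one tier per power of ten, with the cluster size giving the digit – a cord can be modeled as a list of digits read from the highest tier down. The data layout and function name below are my own simplification, and the model ignores the special long and figure-eight knots that mark the units position on real cords.

```python
# Minimal sketch of the decimal reading of an accounting khipu.
# Each pendant cord is modeled as its knot-cluster sizes, listed from
# the top tier (highest power of ten) down to the units tier; an
# empty tier (no knots) reads as zero.

def decode_pendant(clusters: list[int]) -> int:
    """Turn knot-cluster counts, highest tier first, into an integer."""
    value = 0
    for digit in clusters:
        value = value * 10 + digit
    return value

# A cord with 3 knots in the hundreds tier, an empty tens tier,
# and 5 knots in the units tier records the number 305.
assert decode_pendant([3, 0, 5]) == 305
```

What remains undeciphered is everything beyond such numbers – the narrative content that Salomon describes as inscribed rather than spoken.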
The textiles and khipus symbolically weave order from chaos, a central element of Andean cosmology. Although globalization and the (post) modern world threaten Andean weaving practices, the indigenous peoples who continue to create, admire, and preserve these beautiful and meaningful items consider themselves part of an ancient and vital tradition, one that both expresses and embodies the universe and their place within it.
Questions for further exploration:
1. Khipus have been the subject of much scrutiny and debate by recent anthropologists and historians, yet Spanish chroniclers in the sixteenth century made little effort to understand how they worked. What cultural assumptions led the Spanish of the early colonial era to disregard, indeed destroy, khipus? What cultural assumptions prompt modern scholars to scour them for meaning?
2. Weaving, a technology practiced for over 4000 years in the Andes, still expresses and reproduces Andean culture and cosmology and is a source of pride both as a symbol of identity and a technological achievement. Modern technologies and sciences, such as nuclear power, physiology, and biotechnology, are also sources of pride for many Latin American countries. How is the social significance of weaving similar and different to that of these modern practices? Use this topic's sources and specific examples.
3. What impact has modern technology, such as manufactured wool and synthetic dyes, had on textile culture in the Andes?
4. What has been (and will be) the impact of the tourist industry on Andean textiles, both in terms of production and imagery?
Brokaw, Galen. "The Poetics of Khipu Historiography: Felipe Guaman Poma de Ayala's Nueva coronica and the Relacion de los quipucamayos." Latin American Research Review. 38: 3 (October 2003): 111-147.
Graubart, Karen B. "Weaving and Construction of a Gender Division of Labor in Early Colonial Peru." American Indian Quarterly. 24: 4 (Autumn 2000): 537-561.
Heckman, Andrea M. Woven Stories: Andean Textiles and Rituals. Albuquerque: University of New Mexico Press, 2003.
Rodman, Amy Oakland and Vicki Cassman. "Andean Tapestry: Structure Informs the Surface." Art Journal. 54: 2, Conservation and Art History (Summer 1995): 33-39.
Salomon, Frank. The Cord Keepers: Khipus and Cultural Life in a Peruvian Village. Durham: Duke University Press, 2004.
Seibold, Katharine E. "Textiles and Cosmology in Choquecancha, Cuzco, Peru." In Andean Cosmologies Through Time: Persistence and Emergence. Eds. Robert V.H. Dover, Katharine E. Seibold, and John H. McDowell. Bloomington: Indiana University Press, 1992.
Urton, Gary. "From Knots to Narratives: Reconstructing the Art of Historical Record Keeping in the Andes from Spanish Transcriptions of Inca Khipus." Ethnohistory. 45: 3 (Summer 1998): 409-438.
|
Read words with inflectional endings.
Know and apply grade-level phonics and word analysis skills in decoding words.
Distinguish between similarly spelled words by identifying the sounds of the letters that differ.
Decode regularly spelled two-syllable words with long vowels.
Demonstrate basic knowledge of one-to-one letter-sound correspondences by producing the primary sound or many of the most frequent sounds for each consonant.
Associate the long and short sounds with common spellings (graphemes) for the five major vowels.
|
One basic idea that always works for solving proportions is to first find the unit rate, and then multiply it to get what is asked. Another procedure is cross-multiplication: multiply the two extremes and compare that product with the product of the means. A comparison of two quantities with different units, such as miles per hour, is called a rate and is a type of ratio, and some math questions on the ACT will involve ratios and proportions. For instance, if you’ve learned about straight-line equations, then you’ve learned about the slope of a straight line, and how this slope is sometimes referred to as being “rise over run” – which is itself a ratio. If you know that two ratios are proportional, you can use this information to find a missing quantity, such as the length of a missing side of a similar figure, or the real dimensions behind a scale model – say, a model of the house you are in. For word problems, write the ratio from the story, use a variable to represent the unknown quantity, set the two ratios equal (for percent problems this is called a percent proportion), solve, and then verify the solution.
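A quick worked example shows the cross-multiplication routine end to end (the numbers are invented for illustration):

```latex
\frac{3}{4} = \frac{a}{12}
\;\Longrightarrow\;
3 \cdot 12 = 4a
\;\Longrightarrow\;
a = \frac{36}{4} = 9,
\qquad \text{check: } \frac{9}{12} = \frac{3}{4}.
```

Here 3 and 12 are the extremes and 4 and a are the means; substituting the solved value back to confirm that the two ratios are equal is the final “verify the solution” step.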
|
Kids hate poetry. Well, not all kids, but by the time students entered my 9th Grade English class their feelings for poetry were typically between the levels of nonexistent to complete disdain. Students think poetry is difficult to understand, not relevant to their lives, or in a form that is not what they normally read or write.
Poetry depends on the effort of the reader.
Unlike a lengthy novel or even this blog post which allows me to write, explain, and use as much space as needed, poetry is intentional, compact, and demands an enhanced awareness from the reader. Educators can help students unlock the meaning of poems, which I believe, helps to change the negative perception of poetry into a positive one.
- Notice the poet and title – what clues do they provide to help the reader understand the poem?
- Identify form or visual clues – how many lines does the poem contain? (If it has 14 lines and looks like a square, it is probably a sonnet.) Is the structure familiar? Notice punctuation, font differences, stanzas, and line placement (does the poem have a shape?). How could the form relate to the content?
After collecting initial thoughts based on the “Before Reading” preview of the poem, students should:
- Read the poem multiple times
- Read the poem out loud – your ears will pick up more than just reading it in your mind, does sound play an active role in the poem’s meaning?
- Marginalia – annotate and make notes in the margins
- Look up words that are unknown – every word that is in a poem is meant to be there. If a student does not know what a specific word means, have them look it up. Why did the author choose that specific word? How does knowing the definition of the word change what I am thinking?
- Identify the speaker and situation – The speaker of the poem is not always the poet. What do I know about the speaker of this poem? Situation deals with time, location, and event. While a reader may not be able to identify all parts of the situation, the more one can identify aids into the understanding of the poem as a whole.
- Identify tone
- Notice rhythm and rhyme scheme – how is understanding enhanced?
- Identify figurative language – imagery, metaphors, enjambment, slant rhyme, alliteration; how does the poet play with language and how does it enhance a reader’s understanding?
- Notice the structure – Does the poem tell a story? Ask and answer a question? Structured like a speech or letter?
- Reread margin notes
- Reflect on notes, sound, information about the poem
- Shared inquiry discussion with classmates
Providing students guidance and modeling on how readers unlock a poem's meaning is a daunting task. Students should not be required to analyze and interpret every poem they read. Sometimes it is best to just read poems aloud to students, allowing them to appreciate the sound and interpret the poem holistically. In my own classroom, I would model these strategies of interpreting poetry for students before expecting them to do them on their own. We would read, write, and listen to all types of poems, some to unlock the meaning, others because I wanted them to hear some of my personal favorites. We would discuss poetry's relationship to their lives, parallels to music, or current books they were reading all in verse. I wanted to reawaken their love of poetry, or at least leave them open to giving it another chance.
When students become aware of intentional writing in poetry it enhances their awareness in the world. They begin to notice small nuances in what they see, read, watch, and hear and how these noticings amplify understanding of the world around them.
|
Statistics Definitions > Internal Consistency Reliability
What is Internal Consistency Reliability?
Internal consistency reliability is a way to gauge how well a test or survey is actually measuring what you want it to measure.
A simple example: you want to find out how satisfied your customers are with the level of customer service they receive at your call center. You send out a survey with three questions designed to measure overall satisfaction. Choices for each question are: Strongly agree/Agree/Neutral/Disagree/Strongly disagree.
- I was satisfied with my experience.
- I will probably recommend your company to others.
- If I write an online review, it would be positive.
If the survey has good internal consistency, respondents should answer the same for each question, i.e. three “agrees” or three “strongly disagrees.” If different answers are given, this is a sign that your questions are poorly worded and are not reliably measuring customer satisfaction. Most researchers prefer to include at least two questions that measure the same thing (the above survey has three).
Another example: you give students a math test for number sense and logic. High internal consistency would tell you that the test is measuring those constructs well. Low internal consistency means that your math test is testing something else (like arithmetic skills) instead of, or in addition to, number sense and logic.
Testing for Internal Consistency
In order to test for internal consistency, you should send out the surveys at the same time. Sending the surveys out over different periods of time, while testing, could introduce confounding variables.
An informal way to test for internal consistency is just to compare the answers to see if they all agree with each other. In real life, you will likely get a wide variety of answers, making it difficult to see if internal consistency is good or not. A wide variety of statistical tests are available for internal consistency; one of the most widely used is Cronbach’s Alpha.
- Average inter-item correlation finds the average of all correlations between pairs of questions.
- Split Half Reliability: all items that measure the same thing are randomly split into two halves. Both halves of the test are given to a group of people, and the split-half reliability is the correlation between the two sets of scores.
- Kuder-Richardson 20: the higher the Kuder-Richardson score (from 0 to 1), the stronger the relationship between test items. A score of at least 0.70 is considered good reliability.
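To make the most widely used of these measures concrete, here is a minimal sketch of Cronbach's alpha in Python. The Likert responses are hypothetical (coded 1–5 for the three satisfaction items in the survey above), and a real analysis would use a statistics package rather than this hand-rolled function.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: five respondents rating the three satisfaction items
# on a 1-5 scale (5 = strongly agree).
answers = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [2, 2, 1],
    [5, 4, 5],
    [3, 3, 3],
])
print(round(cronbach_alpha(answers), 2))  # ~0.96: highly consistent items
```

Because each respondent here answers all three items similarly, alpha comes out high; scrambling one column would drive it down, mirroring the informal "do the answers agree?" check described above.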
|
Volcanic lightning is a visually incredible, naturally occurring phenomenon that has been witnessed and documented in nearly 200 eruptions over the last 200 years. The most recent images of volcanic lightning that occurred at Eyjafjallajokull have generated a lot of interest worldwide and allowed people to witness volcanic lightning for the first time in real time and high definition.
Eyjafjallajokull eruption, April 17, 2010. Photo: Marco Fulle
How can a volcano create lightning? Why is volcanic lightning often contained within or in close proximity to the ash plume? What types of eruptions are most conducive for the creation of volcanic lightning? These are all good questions, and in order to answer them we must first look at the physics that makes it all possible.
In order for lightning to form there is one key component: a large charge separation between two masses. If the charge separation becomes big enough, it is then able to overpower the air resistance, create a path of ionized air, and conduct electricity in the form of lightning. The ash that is to be erupted begins as electrostatically neutral rock or rock fragments. Heat and movement within the volcano are thought to be the first source of particle charging, although the main process by which ash particles acquire a charge is friction. When an object (in this case ash) with a neutral charge comes in contact with another object with differing electrostatic qualities, electrons can potentially flow and one of the objects can become charged relative to the other. Think of skidding your socked feet rapidly across the carpet or rubbing a balloon quickly against your head. The same type of charge is accumulating within the ash cloud, only on a much larger scale.
Eruption of Chilean volcano Chaitén, May 6, 2008. Photo: Gutterriez C.
The lightning itself may come in many shapes and forms, including St. Elmo's fire (ball lightning), bolt lightning, sheet lightning, or a combination, as was the case during the eruption of Mt. St. Helens in 1980, when several people witnessed long-lasting shows of sheet lightning accompanied by Volkswagen-sized St. Elmo's fire bouncing and rolling on the ground nearly 25 miles from the volcano.
Throughout the past 2000 years humans have witnessed lightning bolts flashing in and around erupted ash plumes.
The earliest known written account of volcanic lightning comes from Pliny the Younger, who witnessed the AD 79 eruption of Vesuvius that killed his uncle, Pliny the Elder. He wrote: "there was a most intense darkness rendered more appalling by the fitful gleam of torches at intervals obscured by the transient blaze of lightning."
Vesuvius would also be the site of the first volcanic lightning studies, conducted by Luigi Palmieri, who manned the Vesuvius Observatory during the eruptions of 1858, 1861, 1868, and 1872. Mt. Vesuvius eruptions have been relatively frequent and have always included an array of lightning activity within their plumes and ash clouds.
The most well documented volcanic lightning shows occurred in the eruptions of 1707, 1872, 1906, and 1944. The 1944 eruption is among the earliest to have its volcanic lightning captured in photographs.
Official Air Force photograph, 1944
|
Whatever the combination of human and natural causes, the world has warmed by about 0.85 degrees Celsius (1.5 degrees Fahrenheit) over the last century or so.
During this time, there have been no increases in tornadoes, hurricanes, floods or droughts associated with this mild increase in world temperature. But much has been made about a dramatic increase in extreme heat and heat waves.
We should expect overall warming to contribute to some increase in the hottest temperatures, but the reporting often implies a dramatic increase in extreme heat. For instance, a much-cited study asserts that 75 percent of heat extremes (defined to mean high temperatures that happen on average once every 1,000 days) are caused by the 0.85 degrees of global warming. Does that mean there would be 75 percent fewer extremely hot days if we hadn't had the 0.85 degrees of global warming? No.
There are many more days when temperatures rise to within one degree of a day defined as “extremely hot” than there are days that actually meet the definition. So, adding a degree across the board will increase the number of days that can be considered “extremely hot” by hundreds of percent. This doesn’t mean we have more heat waves; it just means every day is a little hotter.
We can use a set of temperatures from the National Climatic Data Center to illustrate the misleading assertion of dramatic increases in extreme heat. The record of daily high temperatures at Reagan National Airport for the roughly 5,600 days between Jan. 1, 2000, and April 30, 2015, includes three days with temperatures exceeding 39 degrees C (a little over 102 degrees F). Using 39 C as our threshold for extreme heat and adding 0.85 degrees to all the daily high temperatures (matching the amount of warming since the pre-industrial age), we would find 10 additional days that exceed the threshold for extreme heat.
So the number of days that can be considered extremely hot increases by more than 300 percent. But does that mean extreme heat waves tripled or more? No. All that has happened is each day is 0.85 degrees C warmer, and the 10 days in our record with temperatures between 38.15 degrees and 39 degrees are now above the threshold.
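A toy calculation (with invented temperatures, not the actual station record) shows the counting effect described above: a uniform shift pushes the days sitting just below the cutoff over it, multiplying the count of "extremely hot" days even though every day warmed by the same small amount.

```python
import numpy as np

THRESHOLD = 39.0   # cutoff for an "extremely hot" day (degrees C)
WARMING = 0.85     # warming since the pre-industrial era (degrees C)

# Toy stand-in for a station record: most days are nowhere near the cutoff,
# three days exceed it, and five days sit within 0.85 degrees of it.
daily_highs = np.array(
    [31.0] * 20 + [38.2, 38.4, 38.5, 38.6, 38.9, 39.2, 39.5, 40.1]
)

before = int(np.sum(daily_highs > THRESHOLD))           # 3 days
after = int(np.sum(daily_highs + WARMING > THRESHOLD))  # 8 days
print(before, after)  # every day warmed only 0.85, yet the count jumps 167%
```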
The same game could be played at the cold end of the spectrum. A temperature increase of 0.85 degrees C means 75 percent fewer “killer” cold days. Who wouldn’t favor that?
This piece originally appeared in The Daily Signal
|
Humans and fish (unless you mean a lungfish) don’t really have much in common, except that they are both vertebrates, and all vertebrates have a common ancestor. But how do you go hundreds of millions of years back in time without a fossil of that ancestor?
Technology from the future has now given us a glimpse into the deep past. Biologists from the University of Colorado Boulder have now used CRISPR to genetically reverse-engineer the embryo of a sea lamprey (those freaky fish that stick to their prey and suck its guts out), making it devolve. The wormlike creature they created proved that removing the set of genes that makes vertebrates what they are rewinds evolution. It can also give us a better understanding of the ancestor we have in common with fish and everything else that has a skeleton.
"There is a single gene vaguely similar, and probably distantly related, to the Endothelin receptors in the genome of the invertebrate chordate amphioxus," biologist Daniel Medeiros, who co-authored a study with David Jandzik and lead author Tyler Square, told SYFY WIRE. "What it does is unknown (we have tried to figure out what it does, but have failed so far). So the endothelin receptor likely evolved from some ancient cell surface receptor, perhaps by gaining some new protein coding sequence, or by exon shuffling (being accidentally combined with some other protein-coding sequence from another part of the genome)."
It’s kind of like that spell of Ursula’s in The Little Mermaid that turned poor unfortunate merpeople into primitive worm-things, just not so grim.
Some 500 million years ago, vertebrates somehow evolved the group of genes that made them vertebrates. These genes make up the Endothelin (Edn) signaling pathway, which signals the specialized cells that develop into parts of the skeleton, the peripheral nervous system, and pigment cells to multiply as the embryo develops. These cells are neural crest cells (NCCs). What Square and his team wanted to test was whether taking away the Endothelin signaling pathway would turn a vertebrate into an invertebrate that could be eerily similar to something that existed before skeletons were a thing.
Sea lampreys were used in the experiment because they evolutionarily diverged from other fish around the same time that vertebrates evolved the Endothelin signaling pathway. These jawless fish are living fossils, with ancient vertebrate features that at least give some idea of an early phase of vertebrate evolution.
"The evolution of what we think are two different endothelin signaling pathways allowed neural crest cells to divide themselves up into different groups capable of doing different things," Medeiros said. "We think, based on the lamprey mutant phenotypes, that this facilitated the evolution of different types of vertebrates with different head skeleton features."
Endothelin signals are zapped to different cells in order to tell them what functions to carry out—this is intercellular signaling. Ligands, or molecules that bind to other (often larger) molecules for a specific biological function, are released by signaling cells in this process. The ligands produce a chemical signal when they bind with a protein they target, which is the receptor of that signal. Ligands typically bind only to one particular receptor. Multiple ligands and receptors dedicated to varying functions are involved in Endothelin signaling.
"The endothelin ligands really appeared out of nowhere; there is nothing remotely like them in any invertebrate," Medeiros said. "They must have been transcribed randomly from some non protein-coding DNA on accident at some point. This has been shown to happen in other animals, like fruit flies. Since they are not very large genes, I think that is a good possibility."
In an earlier study, Medeiros and his team had analyzed the Endothelin signals in a frog and compared them to those in the lamprey, which they found has specific ligand and receptor pairs that closely resemble pairs found in jawed vertebrates. This analysis formed the basis of what would be their work with CRISPR.
Genome duplication was previously thought to be behind the evolution of new traits, since copies of genes that already exist can assume new — and possibly, such as in the emergence of vertebrates, unprecedented — functions.
"You sequence the gene you targeted to make sure that you efficiently “broke” the gene, and can then analyze the defects caused by missing the gene," said Medeiros. "The CRISPR method uses the 'programmable' Cas9 enzyme, which cuts DNA. We can injected a solution with the Cas9 enzyme and a piece of RNA into the cell with a tiny glass syringe. In the cell, the Cas9 grabs the RNA guide, then moves into the nucleus and cuts the DNA where we want it, then you let the mutated embryo develop."
Mutant sea lamprey larvae showed just about none of the traits that distinguish vertebrates. What makes this experiment such a breakthrough is that it has been notoriously difficult to find the exact roles for genes that exist only in vertebrates. The team also realized that while gene duplication is definitely involved in evolution, it was not the holy grail that could, on its own, give rise to an entirely new cell population like the NCCs. Formation of new genes has to be going on at the same time as duplications in order for that to happen.
Could you devolve a human like this? Probably not, but it’s awesome sci-fi-horror movie fodder. What might eventually be done, if you ask Medeiros, is the cancelling out of detrimental genes that could lead to defects.
"As long was we survive as a technological species, we will eventually understand genomes, how they have changed during evolution to make new organisms, and also how it they are disrupted in genetically based diseases and cancer," he said. "At that point we can rewrite the instructions in an intelligent way to create essentially whatever biological outcome we want."
|
Remarkable nanowires could let computers of the future grow their own chips
If you’re worried that artificial intelligence will take over the world now that computers are powerful enough to outsmart humans at incredibly complex games, then you’re not going to like the idea that someday computers will be able to simply build their own chips without any help from humans. That’s not the case just yet, but researchers did come up with a way to grow metal wires at a molecular level.
At the same time, this is a remarkable innovation that paves the way for a future where computers are able to create high-end chip solutions just as a plant would grow leaves, rather than having humans develop computer chips using complicated nanoengineering techniques.
Researchers from IBM's T.J. Watson Research Center are working to create wires that would simply assemble themselves in chips. The scientists use a flat substrate loaded with particles that encourage growth, and then add the materials they wish to grow the wire from.
Researchers used gold to drive the reaction, surrounding it with trimethylgallium and arsine gases. The result was a gallium arsenide wire that took just a few hours to grow. The researchers also already have the ability to modify the structure to create different layers on top of each other along the length of the wire.
Using this remarkable process, researchers think they can ultimately create nanowires that have the necessary electrical properties that would allow them to form into transistors, which could be the computer chips of the future.
However, this procedure still requires human supervision for the time being, and computers can’t just upgrade themselves by building the chips they require out of thin air – at least not yet.
More details about this fascinating study are available in Nature.
|
Blue whales are big—the biggest animals ever to live on Earth. And it takes some strategic foraging strategies to maintain that enormous body size.
Whales can't expend too much energy hunting but do have to hold their breath for long periods, which itself consumes energy at an increased rate. This means they have to focus their efforts on food sources that get the most "bang for the buck," so to speak.
Blue whale feeding strategy targets highest-quality prey, maximizing energy gain
As the largest animals to have ever lived on Earth, blue whales maintain their enormous body size through efficient foraging strategies that optimize the energy they gain from the krill they eat, while also conserving oxygen when diving and holding their breath, a new study has found.
Large, filter-feeding whales have long been thought of as indiscriminate grazers that gradually consume copious amounts of tiny krill throughout the day—regardless of how prey is distributed in the ocean. But tagged blue whales in the new study revealed sophisticated foraging behavior that targets the densest, highest-quality prey, maximizing their energy gain.
Understanding blue whale feeding behavior will help inform protections for the endangered species and its recovery needs, the scientists say. The study, by researchers from NOAA Fisheries, Oregon State University and Stanford University, was published this week in Science Advances.
“For blue whales, one of our main questions has been: How do they eat efficiently to support that massive body size,” said Elliott Hazen, a research ecologist with NOAA Fisheries’ Southwest Fisheries Science Center and lead author of the research. “Now we know that optimizing their feeding behavior is another specialization that makes the most of the food available.”
Adult blue whales can grow to the length of a basketball court and weigh as much as 25 large elephants combined, but they operate on an “energetic knife-edge,” the researchers point out. They feed through the extreme mechanism of engulfing as much prey-laden water as they weigh and then filtering out the tiny krill it contains.
But feeding expends tremendous amounts of energy and the dense krill patches they need to replenish that energy are often deep and difficult to find.
In their study, the researchers compared the foraging of 14 tagged blue whales to 41 previously tagged blue whales off the coast of California, combining the data with acoustic surveys that measured the density of their sole prey, krill – tiny (less than one inch) crustaceans found throughout the world’s oceans.
The researchers found that when the krill were spread out, or less dense, blue whales fed infrequently to conserve their oxygen and energy use for future dives. When krill density increased, they began “lunge-feeding” more frequently, consuming more per dive to obtain as much energy from the krill as possible.
“Blue whales don’t live in a world of excess and the decisions these animals make are critical to their survival,” said Ari Friedlaender, a principal investigator with the Marine Mammal Institute at Oregon State University’s Hatfield Marine Science Center and co-author on the study. “If you stick your hand into a full bag of pretzels, you’re likely to grab more than if you put your hand into a bag that only had a few pretzels.”
The feeding pattern that focuses more effort on the densest krill patches provides a new example of blue whale foraging specializations that support the animals’ tremendous size.
This kind of lunge-feeding takes a lot more effort, but “the increase in the amount of energy they get from increased krill consumption more than makes up for it,” noted Jeremy Goldbogen, a marine biologist from Stanford University and co-author on the study.
“Lunge-feeding is a unique form of ‘ram-feeding’ that involves acceleration to high speed and the engulfment of large volumes of prey-laden water, which they filter,” Goldbogen noted. “But we now know they don’t take in that water indiscriminately. They have a strategy that aims to focus feeding effort on the densest, highest-quality krill patches.”
In their study, the researchers found a threshold for krill that determined how intensively the blue whales fed.
“The magic number for krill seems to be about 100 to 200 individuals in a cubic meter of water,” Hazen said. “If it’s below that range, blue whales use a strategy to conserve oxygen and feed less frequently. If it’s above that, they’ll feed at very high rates and invest more effort.”
The researchers say this insight into blue whale feeding will help determine how best to protect the species, which is listed as endangered by the International Union for Conservation of Nature.
“If they are disturbed during the intense, deep-water feeding, it may not have consequences today, or this week, but it could over a period of months,” Friedlaender said. “There can be impacts on their overall health, as well as on their fitness and viability for reproduction.”
The study was funded by the US Office of Naval Research.
|
NASA resurrects its most powerful rocket engine after 40 years, for science!
At the Marshall Space Flight Center in Alabama, a team of young NASA engineers are disassembling, examining, reassembling, and firing the F-1 — the most powerful rocket engine ever built by the United States.
The F-1 was originally used by the Saturn V — the rocket, constructed by Boeing, North American Aviation, and Douglas, used by NASA’s Apollo and Skylab programs. With 1.5 million pounds (6.7 MN) of thrust, the F-1, built way back in the 50s, remains the most powerful single-chamber rocket engine ever created. With five F-1 engines, Saturn V, which first launched in 1967, is still the largest and most powerful rocket ever created. Together, those five engines burned 3,357 gallons (12,710 liters) of propellant every second.
The NASA engineers are disassembling an F-1 engine for the simple reason that they want to learn more about it. According to NASA, these engineers weren’t even born when the F-1 engine last flew. The hope is that by analyzing the F-1, NASA will be in a better position to design the engines that will be used by the Space Launch System. The SLS replaces the Space Transportation System (the Space Shuttle), and will eventually take humans beyond low-Earth orbit (something that hasn’t been done since 1972).
So far, the team of engineers has analyzed the gas generator from an F-1 engine stored at Marshall, and one that was being stored at the Smithsonian museum. The gas generator is a “small” (55,000 horsepower) engine that drives the rocket’s main turbine, which is tasked with pumping almost three tons of propellant into the thrust chamber. They cleaned the parts up, and then used a structured light 3D scanner to create 3D CAD drawings that could be further analyzed on a computer. Structured light is the same method that Kinect uses to work out the dimensions of its environment. This process allowed the engineers to identify which parts of the rocket engine could be recreated or enhanced using 3D printing.
The engineers then rigged the generator up with some modern instrumentation, and fired it up to gain yet more information about its inner workings. “Modern instrumentation, testing and analysis improvements learned over 40 years, and digital scanning and imagery techniques are allowing us to obtain baseline data on performance and combustion stability,” says NASA’s Nick Case. “We are even gathering data not collected when the engine was tested originally in the 1960s.”
Moving forward, an entire gas generator will be built with 3D printing, and then an entire F-1 engine — with the new gas generator — will be built and tested.
|
The British Conquest of Acadia took place in 1710. Over the next 45 years the Acadians refused to sign an unconditional oath of allegiance to Britain. During this time period Acadians participated in various militia operations against the British, such as the raids on Dartmouth, Nova Scotia. The Acadians also maintained vital supply lines to the French Fortress of Louisbourg and Fort Beauséjour. During the French and Indian War (the North American theatre of the Seven Years' War), the British sought both to neutralize any military threat the Acadians posed, and to interrupt the vital supply lines they provided to Louisbourg, by deporting them from Acadia.
Prior to the expulsion, the British confiscated the Acadians' weapons and boats in the Bay of Fundy region and arrested their deputies and priests.
|
Unlocking the Secrets of Uranium Dioxide’s Thermal Conductivity
To better understand the thermal conductivity of uranium dioxide fuel—a material used in the generation of electricity via nuclear fission—Los Alamos National Laboratory is helping the DOE develop advanced predictive computer models of nuclear reactor performance.
Recent Los Alamos research shows that the thermal conductivity of cubic uranium dioxide is strongly affected by interactions between phonons carrying heat and the electrons' magnetic spins. Because uranium dioxide is a cubic compound and thermal conductivity is a second-rank tensor (tensors are mathematical objects that can be used to describe physical properties, just like scalars and vectors), the material has always been assumed to be isotropic.
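The symmetry argument behind that assumption is a textbook identity, stated here for context rather than drawn from the Los Alamos study: cubic point symmetry forces any second-rank property tensor to reduce to a single scalar.

```latex
% For cubic symmetry, invariance under the crystal's 90-degree rotations
% forces the off-diagonal components of kappa to vanish and the three
% diagonal components to be equal, so Fourier's law reduces to a scalar:
\[
\kappa_{ij} = \kappa\,\delta_{ij}
\quad\Longrightarrow\quad
\mathbf{q} = -\kappa\,\nabla T ,
\]
% i.e., the heat flux q is antiparallel to the temperature gradient with
% the same conductivity in every direction -- the expected isotropy.
```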
However, when it comes to thermal conductivity, the fuel exhibits unexpected behavior. For example, in single crystals the measured thermal conductivity is different along the side or edge of the cubic unit cell than along the diagonal, even at and above room temperature.
This is critical for the nuclear power industry because some of the uncertainty in historical experimental data may have been due to hidden anisotropy rather than measurement uncertainties or sample impurities, as previously thought. The anisotropy is explained by phonon-spin scattering.
|
Dissociative Identity Disorder (Multiple Personality Disorder)
What is dissociative identity disorder?
Dissociative identity disorder (DID), formerly called multiple personality disorder, is one of a group of conditions called dissociative disorders. Dissociative disorders are mental illnesses that involve disruptions or breakdowns of memory, awareness, identity and/or perception. When one or more of these functions is disrupted, symptoms can result. These symptoms can interfere with a person’s general functioning, including social activities, work functions, and relationships. People with DID often have issues with their identities and senses of personal history.
Dissociation is a key feature of dissociative disorders. Dissociation is a coping mechanism that a person uses to disconnect from a stressful or traumatic situation or to separate traumatic memories from normal awareness. It is a way for a person to break the connection between the self and the outside world, as well as to distance oneself from the awareness of what is occurring. Dissociation can serve as a defense mechanism against the physical and emotional pain of a traumatic or stressful experience. By dissociating painful memories from everyday thought processes, a person can use dissociation to maintain a relatively healthy level of functioning, as though the trauma had not occurred.
Dissociation can be described as a temporary mental escape (similar to self-hypnosis) from the fear and pain of the trauma. Even after the trauma is long past, however, the leftover pattern of dissociation to escape stressful situations continues. When dissociation is done repeatedly—as in the case of prolonged abuse—these dissociated mental states can take on separate identities of their own.
A person with DID, the most severe type of dissociative disorder, has two or more different personality states—sometimes referred to as "alters" (short for alternate personality states)—each of whom takes control over the person’s behavior at some time. Each alter might have distinct traits, personal history, and way of thinking about and relating to his or her surroundings. An alter might even be of a different gender, have his or her own name, and have distinct mannerisms or preferences. The person with DID may or may not be aware of the other personality states and might not have memories of the times when another alter is dominant. Stress or a reminder of the trauma can act as a trigger to bring about a "switch" of alters. This can create a chaotic life and cause problems in work and social situations.
What causes DID?
It is generally accepted that DID results from extreme and repeated trauma that occurs during important periods of development during childhood. The trauma often involves severe emotional, physical or sexual abuse, but also might be linked to a natural disaster or war. An important early loss, such as the loss of a parent, also might be a factor in the development of DID. In order to survive extreme stress, the person separates the thoughts, feelings and memories associated with traumatic experiences from their usual level of conscious awareness.
The fact that DID seems to run in families also suggests that there might be an inherited tendency to dissociate. DID appears to be more common in women than in men. This might be due to the higher rate of sexual abuse in females.
What are the symptoms of DID?
Symptoms of DID are similar to those of several other physical and mental disorders, including substance abuse, seizure disorder and post-traumatic stress disorder. Symptoms of DID can include the following:
- Changing levels of functioning, from highly effective to nearly disabled
- Severe headaches or pain in other parts of the body
- Depersonalization (episodes of feeling disconnected or detached from one’s body and thoughts)
- Derealization (perceiving the external environment as unreal)
- Depression or mood swings
- Unexplained changes in eating and sleeping patterns
- Anxiety, nervousness, or panic attacks
- Problems functioning sexually
- Suicide attempts or self-injury
- Substance abuse
- Amnesia (memory loss) or a sense of "lost time"
- Hallucinations (sensory experiences that are not real, such as hearing voices)
A person with DID might repeatedly meet people who seem to know him or her, but whom he or she does not recognize. The person also might find items that he or she does not remember buying.
How is DID diagnosed?
If symptoms are present, the doctor will begin an evaluation by performing a complete medical history and physical examination. While there are no laboratory tests to specifically diagnose dissociative disorders, the doctor might use various diagnostic tests—such as X-rays and blood tests—to rule out physical illness or medication side effects as the cause of the symptoms. Certain conditions—including brain diseases, head injuries, drug and alcohol intoxication, and sleep deprivation—can lead to symptoms similar to those of dissociative disorders, including amnesia. In fact, it is amnesia or a sense of lost time that most often prompts a person with DID to seek treatment. He or she might otherwise be totally unaware of the disorder.
If no physical illness is found, the person might be referred to a psychiatrist or psychologist, health care professionals who are specially trained to diagnose and treat mental illnesses. Psychiatrists and psychologists use specially designed interview and personality assessment tools to evaluate a person for a dissociative disorder.
How is DID treated?
The goals of treatment for DID are to relieve symptoms, to ensure the safety of the individual, and to "reconnect" the different identities into one well-functioning identity. Treatment also aims to help the person safely express and process painful memories, develop new coping and life skills, restore functioning, and improve relationships. The best treatment approach depends on the individual and the severity of his or her symptoms. Treatment is likely to include some combination of the following methods:
Psychotherapy
This kind of therapy for mental and emotional disorders uses psychological techniques designed to encourage communication of conflicts and insight into problems.
Cognitive therapy
This type of therapy focuses on changing dysfunctional thinking patterns.
Medication
There is no medication to treat the dissociative disorders themselves. However, a person with a dissociative disorder who also suffers from depression or anxiety might benefit from treatment with a medication such as an antidepressant or anti-anxiety medicine.
Family therapy
This kind of therapy helps to educate the family about the disorder and its causes, as well as to help family members recognize symptoms of a recurrence.
Creative therapies (art therapy, music therapy)
These therapies allow the patient to explore and express his or her thoughts and feelings in a safe and creative way.
Clinical hypnosis
This is a treatment technique that uses intense relaxation, concentration and focused attention to achieve an altered state of consciousness or awareness, allowing people to explore thoughts, feelings and memories they might have hidden from their conscious minds.
What are the complications of DID?
DID is serious and chronic (ongoing), and can lead to problems with functioning and even disability. People with DID also are at risk for the following:
- Suicide attempts
- Substance abuse
- Repeated victimization by others
What is the outlook for people with DID?
People with DID generally respond well to treatment; however, treatment can be a long and painstaking process. Some people with DID are reluctant to reconnect their separate identities because these different identities help them to cope. To improve a person’s outlook, it is important to treat any other problems or complications, such as depression, anxiety or substance abuse.
Can DID be prevented?
Although it may not be possible to prevent DID, it might be helpful to begin treatment in people as soon as they begin to have symptoms. In addition, an immediate intervention following a traumatic event can help reduce the risk of a person’s developing dissociative disorders.
|
Epilepsy is not a single disorder but a group of disorders characterized by recurring seizures, typically caused by abnormal electrical discharges from brain cells in the cerebral cortex. The seizures that occur with epilepsy vary depending on whether the condition is idiopathic or secondary: idiopathic cases occur without any apparent cause, while secondary cases are due to certain neurological disorders and brain abnormalities. Let's look at the types of seizures associated with epilepsy.
First of all, there are many types of epileptic seizures. So, understanding the nuances of this condition requires that one know about the many kinds involved. After all, not all types of epilepsy are the same; some are far more severe than others, requiring innovative treatment methods to help remedy the condition. Second of all, epilepsy is quite common – probably more common than you think. Thus, knowing about the different types of epileptic seizures and how each type leads to different epilepsy symptoms and treatment methods is very important.
A Wide Range of Different Types of Seizures
Now, the truth is, with so many different types of seizures, there are also many classifications in use to categorize various forms of epilepsy. In modern medicine, there are as many as 40 different types of epileptic seizures, each with its own characteristic features. The most useful of these classifications is the one that deals with the underlying pathophysiology or anatomy of seizures. Diagnosis is often carried out using an EEG, which helps determine the origin of the seizures in the patient's brain.
Simple Partial Seizures
These types of epileptic seizures are partial seizures because they do not often lead to the breakdown of all motor and sensory functions. Simple partial seizures come with a small subset of symptoms which impair either the motor, sensory, autonomic, or psychic functions but not all at once. Essentially, they do not lose consciousness in the midst of a seizure episode.
Complex Partial Seizures
By extension, these seizures impair the patient’s consciousness and are often psychomotor in nature. Psychomotor seizures mean that the patient seizes by having involuntary muscle contractions while also mentally blacking out at the same time.
Partial Seizures Evolving to Secondarily Generalized Seizures
These are the types of epileptic seizures that originate in a very particular place in the brain but gradually progresses to include the whole brain; hence the term “generalized.”
Absence Seizures
These are episodes during which a patient loses consciousness. However, absence seizures do not necessarily involve convulsions in the conventional sense. People with absence seizures often stare blankly into space or daydream but do not show any physical symptoms other than a lack of responsiveness.
Clonic and Myoclonic Seizures
These are the types of epileptic seizures that involve sudden jerky movements, but they happen so momentarily that the patient often does not even notice they are experiencing a seizure. Clonic and Myoclonic seizures often affect just a small portion of the body, usually an arm or a leg. However, they are usually vaguely noticeable because the person will continue to remain conscious despite the involuntary movements.
Tonic-Clonic (Grand Mal) Seizures
This is the typical episode in which a patient falls to the ground and exhibits the visual cues that indicate a full-blown seizure. The most common indications include tongue-biting or a loss of motor control, resulting in urination or involuntary bowel movement.
With so many different types of epileptic seizures, it is best to leave it to the experts to classify the condition and then recommend the appropriate treatment. What one can do is take note of their symptoms and report them accordingly so the doctor can take everything into consideration for a definitive and accurate diagnosis.
There are distinct epilepsy symptoms of which you should be aware. Epileptic Seizures can cause involuntary changes in body movements or functioning, as well as unusual sensations or behaviors. An epileptic seizure can last a few seconds, but it can also last so long that the patient will need some sort of intervention.
First aid for a seizure entails turning the person onto his or her side until the convulsion ends. Doing this can help drain any fluids or secretions such as vomit from the individual's mouth. Never attempt to hold down or restrain a person during an attack. Most types of seizures are self-limiting and stop on their own after a period of time. However, an individual suffering from a seizure may injure themselves if they inhale food, fluids, or their own vomit. Even difficulty breathing is a possibility, so staying close by to help is imperative.
For the vast majority, there is one symptom that typically follows having a seizure: an overwhelming feeling of fatigue. Some who experience grand mal seizures or even partial ones will want to lie down and rest afterward.
Epilepsy is a detrimental medical condition that requires much care and additional precautionary steps. For example, you should always wear an alert bracelet to warn others that you have epilepsy in the case of an emergency. Moreover, all emergency medical care providers, dentists, and doctors must be aware you are on anticonvulsants.
|
Heating and Cooling for Needs and Wants
Lesson 3 of 5
Objective: Students will be able to identify how heating or cooling technology of food products influences individual, family, or community decisions.
The lesson will begin with students using their background knowledge to share their ideas on how heating and cooling technology has changed over time. We will discuss everything from wood-burning fireplaces and fans to current developments in heating and cooling technology, such as microwaves and air conditioning units.
I will lead a whole group discussion about why people may have wanted or needed to change heating and cooling technologies. I will allow students the opportunity to share their ideas.
I will begin the exploration section of the lesson by asking the students, "Why do people want or need to heat or cool food?" I will provide the opportunity for students to share their answers.
Next, I will set up a carousel brainstorm around the room. To do this, I will post six sheets of blank chart paper around the room. Students will be divided into groups and each group will be given a marker with a question to tape on a piece of chart paper and answer. The questions the students will be responding to are:
How has heating food over time changed?
What technology (tools/techniques) have been developed for heating food?
What technology (tools/techniques) does your family use most often to heat food?
How has cooling food changed over time?
What technology (tools/techniques) have been developed for cooling food?
What technology (tools/techniques) does your family use most often to cool food?
When students have completed their charts, they will rotate around the room to the other charts to view the responses and add any information that they believe is missing.
The purpose of students responding to their questions on chart paper and rotating around the room, is to allow the students to share their ideas with others as well as to gain new ideas.
Once complete, I will engage students in exploring resources that describe how food products are changed by heating and cooling. Students will explore What is Parboiling, What is Microwave Cooking, Who Invented the Refrigerator, and Who Invented Frozen Food.
After we review the resources together, I will ask students to share their ideas on how the heating and cooling technologies we have learned about have influenced individual, family, or community decisions about how food products are purchased or prepared. We will also discuss how the selection of food processing methods influences people's choices about the food products they purchase and prepare to eat.
|
Aggression has a long history in both sport and nonsport contexts. There is some variation in the definitions of aggression employed by different people. However, it is commonly agreed that aggression is a verbal or physical behavior that is directed intentionally toward another individual and has the potential to cause psychological or physical harm. In addition, the target of the behavior should be motivated to avoid such treatment. Typically, definitions of aggression incorporate the notion of intent to cause harm; that is, for behavior to be classified as aggressive, the perpetrator must have the intent to harm the victim. However, strict behavioral definitions of aggression exclude the term intent because it refers to an internal state, which cannot be observed.
Aggression has been distinguished between instrumental and hostile. Instrumental aggression is a behavior directed at the target as a means to an end, for example, injuring a player to gain a competitive advantage, or late tackling to stop an opponent from scoring. Thus, instrumental aggression is motivated by some other goal. In contrast, hostile aggression is a behavior aimed toward another person who has angered or provoked the individual and is an end in itself. Its purpose is to harm for its own sake, for example, hitting an opponent who has just been aggressive against the player. Hostile aggression is typically preceded by anger. Instrumental aggression, in pursuit of a goal, is not normally associated with anger and, in sport, is far more frequent than hostile aggression. In both types of aggression, a target person is harmed, and the harm can be physical or psychological.
In this entry, the construct of aggression is presented. First, the distinction is made between aggression and assertion, and difficulties with the notion of intent in the definition of aggression are discussed. Then measures of aggression are outlined followed by factors associated with aggression in sport.
Aggression, Assertion, and Intent
In sport, the word aggressive is often used when assertive is more appropriate. For example, coaches describe strong physical play as aggressive, when this type of play is actually assertive; it is within the rules of the game and there is no intention to cause harm. The difference between aggression and assertion lies in the intention to harm. If there is no intent to harm the opponent, and the athlete is using legitimate means to achieve goals, the behavior is assertive, not aggressive. When one is being assertive, the intention is to establish dominance rather than to harm the opponent. Behaviors such as tackling in rugby, checking in ice hockey, and breaking up a double play in baseball may be seen as assertive as long as these are performed as legal components of the contest and without malice. However, these same actions would represent aggression if the athlete’s intention was to cause injury.
It is often difficult to distinguish aggression from assertion in sport. Although assertive behaviors are forceful behaviors that are not intended to injure the victim, by their nature, they may result in unintended harm to the athlete’s opponent. In addition, some sports involve forceful physical contact, which has the capacity to harm another person, but this contact is within the rules of sport. Assertive behaviors have also been labeled sanctioned aggression. Thus, sanctioned aggression is any behavior that falls within a particular sport’s rules or is widely accepted as such: for example, using the shoulder to force a player off the ball in soccer and tackling below the shoulders in rugby. Examples are combat sports, such as judo, karate, and wrestling, and team contact sports, such as rugby, ice hockey, American football, and lacrosse. Perhaps the confusion between assertion and aggression arises because both have the capacity to harm the target, although, as noted earlier, only aggression involves intention to harm.
Incorporating the notion of intent in definitions of aggression has the difficulty of establishing which behavior is aggressive. This is because the only person knowing whether there is intent to cause harm is the person who carries out the action. Two features of definitions of aggression that have not been questioned are the capacity of behavior to cause harm and the intentional (nonaccidental) nature of the behavior.
The Measurement of Aggression
The notion of intent, which is part of most definitions of aggression, has created difficulties in the measurement of aggression. Therefore, many studies have operationally defined and measured aggression without considering intent, or the reasons for the behavior. A very common aggression measure in the laboratory context is administering electric shocks, which is known to hurt the participant. Thus, aggression is reflected in the intensity of the shock administered. Other studies used delivering an aversive stimulus, for example a loud noise, as their measure of aggression.
In the sport context, aggression has been measured in a variety of ways, such as number of fouls, coach ratings, and penalty records, as well as through self-reports and behavioral observation. In studies of behavioral observation, instrumental and hostile aggression have been measured. Instrumental aggression has been operationally defined as aggression that occurs during game play and involves opponent-directed physical interactions that contribute to accomplishing a task. In contrast, hostile aggression has been operationally defined as physical or verbal interactions aimed at various targets but not directly connected to task accomplishment; these behaviors are directed at opponents, teammates, or referees. For example, in handball, repelling, hitting, and cheating have been coded as instrumental aggression, and insulting, threatening, making obscene gestures, and shoving against opponents, referees, teammates, and others have been coded as hostile aggression. Aggressive behaviors (e.g., late tackle, hitting, elbowing) have also been measured as part of the construct of antisocial behavior, which has been defined as behavior intended to harm or disadvantage another individual and has considerable overlap with aggression.
Other studies have used athlete self-reports to measure aggression, either by presenting them with a scenario that describes an aggressive behavior and asking about their intentions or likelihood to aggress, or by asking them to respond to a number of items measuring aggressive or antisocial behavior. Self-described likelihood to aggress has been used as a proxy for aggression. In these studies, participants are presented with a scenario in which the protagonist is faced with a decision to harm the opponent to prevent scoring and they are asked to indicate the likelihood they would engage in this behavior if they were in this situation. Finally, aggression (e.g., trying to injure another player) has been measured as part of antisocial behavior in sport.
Why Aggression Occurs
Aggression has a long history in both mainstream psychology and sport psychology. One view is that aggression results from frustration. In sport, frustration can occur for a variety of reasons: because of losing, not playing well, being hurt, and perceiving unfairness in the competition. Frustration heightens one’s predisposition toward aggression. Contextual factors come into play so that the manner in which an individual interprets the situational cues at hand best predicts whether this athlete, or spectator, will exhibit aggression.
Some theorists view aggression as a learned behavior, which is the result of an individual's interactions with his or her social environment over time. Aggression occurs in sport where an athlete's expectancies for reinforcement for aggressive behavior are high (receiving praise from parents, coaches, peers), and where the reward value outweighs punishment value (gaining a tactical or psychological advantage with a personal foul). Situation-related expectancies, such as the time of the game, the score, the opposition, or the encouragement of the crowd, also influence the athlete in terms of whether this is deemed an appropriate time to exhibit aggression.
A number of individual difference factors have been associated with aggression. Three of them are legitimacy judgments, moral disengagement, and ego orientation. When athletes judge aggressive and rule-violating behaviors as legitimate or acceptable, they are more likely to be aggressive. Moral disengagement refers to a set of psychosocial mechanisms that people use to justify aggression. Through these justifications, athletes manage to engage in aggression without experiencing negative feelings like guilt that normally control this behavior. For example, players may displace responsibility for their actions to their coach, blame their victim for their own behavior, claim that they cheated to help their team, or downplay the consequences of their actions for others. Finally, individuals who are high in ego orientation feel successful when they do better than others; they are preoccupied with winning and showing that they are the best. These players are more likely to be aggressive in sport.
Social environmental variables are also associated with aggression. One of them is the performance motivational climate, which refers to the criteria of success that are dominant in the athletes’ environment. Through the feedback they provide, the rewards they give, and, in general, the way they interact with the players, coaches make clear the criteria of success in that achievement context. As an example, when coaches provide feedback about how good a player is relative to others and reward only the best players, they create a performance motivational climate, sending a clear message to athletes that only high ability matters. Players who perceive a performance climate in their team are more likely to become aggressive.
Aggression is a construct with a long history and considerable debate around its definition, primarily due to the difficulty of determining whether the perpetrator intends to harm the victim when acting in a certain way. Aggression can be instrumental or hostile. Many sports involve forceful play, which could result in an injury; however, if players do not intend to harm the opponent, this play is considered an assertive act, not an aggressive one. Finally, several individual difference and social environmental factors have been associated with aggression in sport.
|
Bell peppers are one of the more popular vegetables grown in backyard gardens. The variety of colors of mature bell peppers, red, green, yellow and orange, appeal to gardeners and consumers alike. Growing bell peppers can present some difficulties, not the least of which is coping with insect problems. Because they are botanically related to tomatoes, potatoes and eggplant, they also share some of the same pest problems.
Types of Bugs
Bell peppers are susceptible to attack from a variety of insects. The most common pests that can infest these plants and their fruits are aphids, thrips, stink bugs, spider mites, cucumber beetles, the European corn borer and pepper maggots. If populations of any of these insects are allowed to increase, the effects on the crop of bell peppers can be devastating.
Aphids dwell on the underside of leaves, where they suck out the plant's fluids, weakening the plant. The loss of fluids, and ultimately leaves, leads to poor pepper development and sun scald. Thrips also suck fluids from plants but are not limited to leaves and stems; they also feed on fruit, leaving it scarred and often unmarketable. Stink bugs cause mechanical injury to seeds and inject the yeast-spot disease organism into the seed and surrounding tissue. Spider mites leave bronze discolorations and cause leaf scorch, which can lead to premature leaf drop and death. Cucumber beetles feed on foliage but, more importantly, can introduce a bacterium that causes bacterial wilt. The larvae of the European corn borer feed on all tissue above ground, destroying foliage and fruit alike. Pepper maggots bore into peppers near the cap and feed on the inside of the fruit, which then becomes more susceptible to rotting.
Aphids and spider mites prefer warm weather and become more active when it arrives, breeding in as little as seven or eight days. Thrips move to well-watered gardens when the late summer heat dries out nearby tall grasses and weeds. Stink bugs become active at temperatures above 70 degrees F and breed every five weeks, making late summer and early fall the times when their populations can cause the most trouble. Cucumber beetles tend to be a springtime problem and overwinter in garden debris and cracks in nearby trees. Corn borers are attracted to tall grasses and corn-like plants and will migrate to other plants such as peppers if their preferred food supply runs low. Pepper maggots feed only on plants in the pepper family, which includes eggplant, and are drawn to this type of plant regardless of environmental conditions.
Unless aphid, thrip and spider mite populations become excessive, no treatment is necessary. If aphid populations become problematic, insecticidal soap may be effective. Row covers and hand removal are useful in managing stink bugs and thrips. Insecticides labeled specifically for the remaining insects should be applied according to the manufacturer’s directions.
Implementing good maintenance practices is the best way to avoid bug problems on bell pepper plants. Remove all debris and damaged or rotting fruit. Weeds should be kept to a minimum. Avoid planting peppers near tall stands of grass or corn. Monitor the plants regularly for signs of insect damage and act quickly to prevent continued infestation. Provide water regularly and mulch to control weeds and retain soil moisture.
|
Craig Kluever’s dream was born as he found himself awestruck in front of a grainy black-and-white television screen watching Apollo 11 land on the moon. He was in kindergarten. As he puts it, “that just made a big impact on me. Of course, the first thing I wanted to be was an astronaut.” Those early dreams of becoming an astronaut turned instead into a pursuit of the science behind the rockets. Today, the MU Professor of Mechanical and Aerospace Engineering works behind the scenes to solve the kind of problems involved in designing space travel—such as how to take off, how to reach a target, and, more importantly, how to return safely to Earth.
Before 1998, all conventional spacecraft missions were powered by chemical propulsion. "This is what we are used to seeing on TV," says Kluever, referring, for example, to the Apollo missions to the moon or the robotic Mars Exploration Rovers. "You fire a chemical-fuel engine for a very brief period of time, and it provides a lot of thrust, for only a minute or less, and that provides an impulse that sends the spacecraft on to the moon, or Mars, or wherever." In those cases, 99% of the trajectory is just a coasting flight through the vacuum of space, where gravity is the only force controlling the trajectory. Kluever explains further: "As complicated as that sounds, that's really a pretty simple problem because all you have to deal with is gravity, and we pretty much know how to model gravity, and so we know how to predict where a spacecraft is going to be." However simple this technique, he continues, it isn't very efficient. "If you look at pictures of the Saturn V going to the moon," he cautions, "you'll see a tiny capsule at the top and then three stages of fuel. It took that much fuel to get the tiny capsule to the moon." A rover mission using chemical propulsion, for instance, may get only 10% of the rocket's total mass to Mars. Electric propulsion is far more efficient, Kluever notes: "theoretically, we could get as much as 60 or 70% of the initial orbital mass to our destination."
Dr. Kluever first came to this area of research as a graduate student when he had a fellowship with NASA. At the time, he recalls, electric propulsion was brand new technology, and NASA needed predictive computer models to calculate missions, for example to map a trajectory from Earth to Mars. With electric propulsion, “you have an electric engine at the back of the spaceship instead of a chemical rocket. It’s almost like if you shine a flashlight out the back and emit electrons through a magnetic grid; instead of burning fuel and exhausting hot gases out the back, you’re throwing electrons at very high speeds. It’s the momentum exchange that gives you rocket propulsion. So you’ve got this spacecraft already in space, with a continuous thrust of one pound out the back, and it takes a very long time to build up the acceleration needed to go to Mars (or wherever the target may be). It’s a much more challenging problem because you’ve got to figure out how to steer that thrust, how to manage the electrical propulsion system, and then hopefully hit your target. It’s really just much more complicated than the old missions using chemical propulsion.”
The first space mission to test electric propulsion was Deep Space I, launched in 1998. “It had a very modest target,” says Kluever – basically just to fly past an asteroid – “and it was able to complete that mission.” Since then there have been some big plans to send spacecraft to Jupiter (or other outer planets) using electric propulsion. The problem with that plan is not with the technology, Kluever clarifies, but with its funding. “That’s the status of electric propulsion,” he observes. “It’s a very uncertain business right now. These things cycle; sometimes technologies are politically in favor and sometimes not. Right now, electric propulsion is out of favor.”
While this dimension of his research continues, about five years ago Kluever began working with the X-33 program. Once hailed as the next-generation space shuttle, this winged vehicle was slated to replace the existing Space Shuttle, although the program has since been canceled. The Space Shuttle works very well, Kluever says, "but it does not have a lot of robustness built in." That means that if it comes in on a flight path that is too steep, too shallow, or too fast, it will have very limited capabilities for altering that flight path and still making the planned approach for landing. "Fortunately the Shuttle hasn't had any major mechanical failures on the way down, but if those kinds of failures (like a broken rudder) occurred it would have limited maneuverability."
Hence, he and his team at NASA sought to build robustness into designs for guidance and control systems. “Robust” describes a new guidance system that is more automated and adaptable; a new generation of safer shuttle vehicles that depends on advanced computing power, “so that if some major failure occurred—like the rudders didn’t work and it had limited banking ability, or the elevators didn’t work and it had limited pitching capability—then you could recalculate a trajectory that would still take it to a safe landing.” Kluever’s critical contribution to this project was to figure out how to make the shuttle’s guidance system recognize and steer toward the runway while maintaining the precise amount of energy needed.
Now the hot topic is the Crew Exploration Vehicle, the capsule in which NASA hopes to send astronauts to the moon and to Mars. Kluever is focusing on the atmospheric part of the entry and guidance system of this “Apollo-style” capsule, particularly the Earth return portion of the mission, which will involve a creative maneuver comparable to skipping a rock on the lake. He is also working on the ascent guidance system for the vacuum-flight phase of the Crew Launch Vehicle. “It turns out that the guidance system is very similar to what was used in Apollo,” Kluever explains. “When these crafts come back, there’s no longer any primary propulsion, and they’re just at the mercy of aerodynamics.” In certain scenarios, the astronauts may need to come back to Earth due to a failure, in which case the return trajectory may suddenly not be optimal for the landing site. With this skip maneuver, “the atmosphere slows the vehicle down, with the idea that by doing a skip you can extend the range to almost halfway around the world. So you can essentially enter anywhere in the atmosphere – do an aerodynamic maneuver, skip out of the atmosphere back into space, re-enter again thousands of kilometers down range, and then land hopefully at your target.” It becomes complicated, however, because if the range is very long, then very small errors (in terms of speed or angle) at that skip-out point will result in very large errors down the road. In fact, that first skip could result in a very dangerous trajectory that would be close to escaping the atmosphere, shooting out into space! “If you don’t manage the first skip properly,” Kluever warns, “it could just skip out and never return.” Like his work on robustness, Kluever’s job is to make sure these shuttles return safely to the Earth and, perhaps, inspire another generation of dreamers.
When I asked him why this research was important, Kluever surprised me. Certainly, he could have extolled the benefits of innovation and discovery involved in space exploration. However, he responded ambivalently with, "That's the hardest question." He could cite the many technological advances that were outcomes of the space program (from Teflon and computers to mammograms), advances that impact many lives. But Kluever sees that kind of response as too clichéd. As to whether there's a direct benefit to sending a person to the moon, "I myself struggle with that question," he admits. "I don't want to put myself out of business, and I certainly enjoy working on these problems, but I've worked on paper study after paper study, and it's a tough business to see things through to a mission. I know people at NASA centers who have worked their entire careers and never worked on anything other than a paper study. That's got to be frustrating, and it makes you think about where the priorities should lie for funding space projects."
Others in NASA’s Jet Propulsion Laboratory have suggested doing future missions with robotics, which would be a lot cheaper. Kluever elaborates on that idea: “As soon as you put a person in that loop, then all these other things have to be tested and verified, and it becomes so much more expensive, not to mention the added mass (food, water, protection, shelter) to get a person there and back safely. You don’t have to do all that with robotic probes.” Of NASA’s overall budget, which is less than 1% of the national budget, roughly 75% funds the Space Shuttle and the International Space Station, leaving little to support biological, earth science, and robotic missions (to Jupiter, Pluto, and Mercury). “In this day of tight budgets, I’m not sure if that money is justified to send a person to the moon,” says Kluever.
|
Learning how to reason by analogy is one of the most important objectives of legal education. But you certainly don't have to be a lawyer to use analogy in your thinking. In fact, whenever we encounter a new situation, we start searching for some familiar elements in it to give us an indication of what to do. In law school, you have to build awareness of what is going on in your head when you reason by analogy; in other words, you deconstruct the process. Generally, when you reason by analogy, you take the following steps:
- Identify the analogy by recognizing the similarities between objects or situations. Let’s say, you see a tangerine for the first time and you want to compare it to oranges, lemons, and peaches that you are familiar with. We like to think in big categories or archetypes, so the first thing that probably jumps out is that they are all fruits, and the tangerine is just like an orange or a lemon because it is a citrus.
- State the purpose of the analogy. The purpose allows you to determine what characteristics are essential. In the above example, if your purpose is to avoid citruses because of the allergies they may cause, the attribute of being a citrus is essential, and your analogy between the tangerine and the orange or lemon is good. Now, let's say you want to know how easy it is to peel off the skin of the fruit. If that's the case, the analogy between the tangerine and orange is still good, but the analogy between the tangerine and lemon becomes weak because it's hard to peel a lemon. If you compare the sweetness of the fruits, the tangerine becomes more like a peach than a lemon.
- Assess the source of your analogy. If there are alternative sources of comparison, how do you choose which to use? Let’s say, in my last example, which focused on the sweetness of the fruits, I could use kiwis as another basis of comparison but I chose not to. What is the significance of my choice? Is there a difference in the perceived sweetness when I say, “Tangerines are just like kiwis,” and when I say, “Tangerines are just like peaches”?
- Evaluate the ambiguities, dissimilarities, and false attributions that may weaken or break the analogy. Do the differences between tangerines and peaches undermine the analogy? What are the underlying assumptions when you make the comparison? In the example above, I assumed that I was comparing ripe fruits; otherwise, the analogy wouldn't make sense. If you hear the sentence "This toy is a lemon," does it mean that the toy is defective or that it shares some attributes with the fruit?
To test yourself, do Analogy Exercises by Peter Suber. For some practice in pattern recognition, try Brain Workout for Your Frontal Lobes from SharpBrains.
|
For example, you could say
int examplearray[100]; //This declares an array
This would make an integer array with 100 slots, or places to store values. The only tricky part is that indexing starts at zero: the first index number, that is, the number that you put in the brackets to access a certain element, is zero, not one!
Think about arrays like this: each slot is a numbered box, and you can put information into each one of them. It is like a group of variables placed side by side.
What can you do with this simple knowledge? Let's say you want to store a string; since C++ has no built-in datatype for strings (in DOS), you can make an array of characters:
char astring[100];
This will allow you to declare a char array of 100 elements, or slots. Then you could get it from the user, and if the user types in a long string, it will all go in the array. The neat thing is that it is very easy to work with strings in this way, and there is even a header file called STRING.H. I will have a lesson in the future on the functions in string.h, but for now, let's concentrate on arrays.
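Here is one way that idea might look as a complete program (a minimal sketch; the prompt text and exact usage are illustrative additions, not part of the original lesson):

#include <iostream>

int main()
{
    char astring[100];            // a char array with 100 slots
    std::cout << "Type a word: ";
    std::cin >> astring;          // reads one whitespace-delimited word into the array
                                  // (a real program would also limit the input length)
    std::cout << astring << "\n"; // prints the word back out
    return 0;
}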
The most useful aspect of arrays is multidimensional arrays. Think about multidimensional arrays as a grid; when I visualize a two-dimensional array, it looks like a chessboard.
int anarray[8][8];
declares an array that has two dimensions. Think of it as a chessboard. You can easily use this to store information about some kind of game, or write something like tic-tac-toe. To access it, all you need are two variables: one that goes in the first slot and one that goes in the second slot. You can even make a three-dimensional array, though you probably won't need to. In fact, you could make a four-hundred-dimensional array; it is just very confusing to visualize.
Now, arrays are basically treated like any other variable. You can modify one value in it by putting something like:
examplearray[0] = 42; // an illustrative assignment; any element can be set this way
You will find lots of useful things to do with arrays, from storing information about certain things under one name to making games like tic-tac-toe. One little tip I have is to use for loops to access arrays. It is easy:
int x, y, anarray[8][8]; //declares an array like a chessboard
for(x = 0; x < 8; x++)
    for(y = 0; y < 8; y++)
        anarray[x][y] = 0; //sets all members to zero once the loops finish
Here you see that the loops work well because they increment the variable for you, and you only need to increment by one. It is simple, and you access the entire array. Why would you want to use while loops?
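Putting the pieces of this lesson together, here is one way a complete program might look (a sketch; the names and the printed values are just illustrative):

#include <iostream>

int main()
{
    int anarray[8][8];                 // a two-dimensional array, like a chessboard

    for (int x = 0; x < 8; x++)        // the for loops increment the indexes for us
        for (int y = 0; y < 8; y++)
            anarray[x][y] = 0;         // set every member to zero

    anarray[0][0] = 1;                 // modify one value, just like any other variable

    std::cout << anarray[0][0] << " " << anarray[7][7] << "\n"; // prints "1 0"
    return 0;
}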
|
Earth's New Trojan Friend
This artist's concept illustrates the first known Earth Trojan asteroid, discovered by NEOWISE, the asteroid-hunting portion of NASA's WISE mission. The asteroid is shown in gray and its extreme orbit is shown in green. Earth's orbit around the sun is indicated by blue dots. The objects are not drawn to scale.
Trojans are asteroids that share an orbit with a planet, circling around the sun in front of or behind the planet. Because they ride in the same orbit as a planet, they never cross its path, and never collide with the planet. They circle around stable gravity wells, called Lagrange points.
The first known Earth Trojan, 2010 TK7, has an extreme orbit that takes it far above and below the plane of Earth's orbit. The asteroid's orbit is well defined, and for at least the next 100 years it will not come closer to Earth than 15 million miles (24 million kilometers).
Typically, Trojan asteroids, for example those that orbit with Jupiter, don't travel so far from the Lagrange points. They stay mostly near these points, located where the angle between the sun and Earth is 60 degrees. Asteroids near a comparable position with respect to Earth would be very difficult to see, because they would appear near the sun from our point of view.
WISE was able to spot 2010 TK7 because of its eccentric orbit, which takes it as far as 90 degrees away from the sun. WISE surveyed the whole sky from a polar orbit, so it had the perfect seat to find 2010 TK7. Follow-up observations with the Canada-France-Hawaii Telescope on Mauna Kea, Hawaii, helped confirm the object's Trojan nature.
Image credit: Paul Wiegert, University of Western Ontario, Canada
|
Program animation or stepping refers to the now very common debugging method of executing code one "line" at a time. The programmer may examine the state of the program, machine, and related data before and after execution of a particular line of code. This allows evaluation of the effects of that statement or instruction in isolation, thereby giving insight into the behavior (or misbehavior) of the executing program. Nearly all modern IDEs and debuggers support this mode of execution. Some testing tools allow programs to be executed step-by-step, optionally at either source code level or machine code level, depending upon the availability of data collected at compile time.
Instruction stepping or single cycle also referred to the related, more microscopic, but now obsolete method of debugging code by stopping the processor clock and manually advancing it one cycle at a time. For this to be possible, three things are required:
- A control that allows the clock to be stopped (e.g. a "Stop" button).
- A second control that allows the stopped clock to be manually advanced by one cycle (e.g. An "instruction step" switch and a "Start" button).
- Some means of recording the state of the processor after each cycle (e.g. register and memory displays).
Other systems such as the PDP-11 provided similar facilities, again on some models; the precise configuration was model-dependent. It would not be easy to provide such facilities on LSI processors such as the Intel x86 and Pentium lines, owing to cooling considerations.
As multiprocessing became more commonplace, such techniques would have limited practicality, since many independent processes would be stopped simultaneously. This led to the development of proprietary software from several independent vendors that provided similar features but deliberately restricted breakpoints and instruction stepping to particular application programs in particular address spaces and threads. The program state (as applicable to the chosen application/thread) was saved for examination at each step and restored before resumption, giving the impression of a single user environment. This is normally sufficient for diagnosing problems at the application layer.
Instead of using a physical stop button to suspend execution and then begin stepping through the application program, a breakpoint or "Pause" request must usually be set beforehand, usually at a particular statement or instruction in the program (chosen in advance or, by default, at the first instruction).
To provide for full screen "animation" of a program, a suitable I/O device such as a video monitor is normally required that can display a reasonable section of the code (e.g. in dis-assembled machine code or source code format) and provide a pointer (e.g. <==) to the current instruction or line of source code. For this reason, the widespread use of these full screen animators in the mainframe world had to await the arrival of transaction processing systems, such as CICS in the early 1970s, and such animators were initially limited to debugging application programs operating within that environment. Later versions of the same products provided cross-region monitoring/debugging of batch programs and other operating systems and platforms.
With the much later introduction of personal computers from around 1980 onwards, integrated debuggers were incorporated more widely into this single-user domain and provided similar animation by splitting the user screen and adding a debugging "console" for programmer interaction.
Borland Turbo Debugger was a stand-alone product introduced in 1989 that provided full-screen program animation for PCs. Later versions added support for combining the animation with actual source lines extracted at compilation time.
Techniques for program animation
There are at least three distinct software techniques for creating 'animation' during a program's execution.
- Instrumentation: this technique involves adding additional source code to the program at compile time to call the animator before or after each statement, halting normal execution (a sketch of this technique follows the list). If the program to be animated is an interpreted type, such as bytecode or CIL, the interpreter (or IDE code) uses its own in-built code to wrap around the target code.
- Induced interrupt: this technique involves forcing a breakpoint at certain points in a program at execution time, usually by altering the machine code instruction at that point (this might be an inserted system call or a deliberately invalid operation) and waiting for an interrupt. When the interrupt occurs, it is handled by the testing tool to report the status back to the programmer. This method allows program execution at full speed (until the interrupt occurs) but suffers from the disadvantage that most of the instructions leading up to the interrupt are not monitored by the tool.
- Instruction set simulator: this technique treats the compiled program's machine code as its input 'data' and fully simulates the host machine instructions, monitoring the code for conditional or unconditional breakpoints and for programmer-requested "single cycle" animation between every step.
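As a rough, self-contained C++ sketch of the first technique only (the animator_pause hook and STEP macro are hypothetical stand-ins for the calls an instrumenting tool would inject, not part of any real product named below):

#include <iostream>
#include <string>

// Hypothetical animator hook: report the statement about to run,
// then wait for the user to press Enter before continuing.
void animator_pause(int line, const std::string& stmt)
{
    std::cout << "line " << line << ": " << stmt << "  (Enter to step) ";
    std::cin.get();
}

// An instrumenting tool would rewrite each statement to call the hook first.
#define STEP(stmt) do { animator_pause(__LINE__, #stmt); stmt; } while (0)

int main()
{
    int total = 0;
    STEP(total += 5);
    STEP(total *= 2);
    STEP(std::cout << "total = " << total << "\n");
    return 0;
}

Running it pauses before each wrapped statement and shows the source text about to execute, which is the essence of statement-level animation.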
Comparison of methods
The advantage of the last method is that no changes are made to the compiled program to provide the diagnostic, and there is almost unlimited scope for extensive diagnostics, since the tool can augment the host system diagnostics with additional software tracing features. It is also possible to diagnose (and prevent) many program errors automatically using this technique, including storage violations and buffer overflows. Loop detection is also possible using automatic instruction trace together with instruction count thresholds (e.g. pause after 10,000 instructions; display the last n instructions). The second method alters only the instruction at which execution will halt, before it is executed, and may then restore it before optional resumption by the programmer. Some animators optionally allow the use of more than one method depending on requirements, for example using method 2 to execute to a particular point at full speed and then using instruction set simulation thereafter.
The animator may, or may not, combine other test/debugging features within it such as program trace, dump, conditional breakpoint and memory alteration, program flow alteration, code coverage analysis, "hot spot" detection, loop detection or similar.
Examples of program animators
(In date of first release order)
- IBM OLIVER (CICS interactive test/debug) 1978 - for IBM mainframes
- SIMON (Batch Interactive test/debug) 1980 - for IBM mainframes
- Borland Turbo Debugger 1989 - for PCs
- CodeView 1985, Visual Studio Debugger 1995, Visual Studio Express 2005 - for PCs
- Firebug (Firefox extension) January 2006 - for PCs
|
Narratives are stories, and most of the Bible is told through them. These stories are trying to tell us something. Historical narratives are ways of retelling the past to make sense of the present in a specific, intentional way. When we read these stories in the Bible, they are operating on three layers of meaning. There is not some secret, hidden, or uniquely personal meaning, nor is there simply a moral lesson to be learned. Instead, there are three distinct but deeply related layers of meaning present within biblical narratives.
The first layer concerns the immediate stories of the individual characters. Abraham and Isaac go up the mountain with a stack of wood but no ram. I will refer to this as the immediate layer.
The second layer is about how later and especially New Testament texts interact with and interpret these immediate layer stories. For example, Matthew 2:15 interacts with and interprets Hosea 11. This concerns the interaction with and fulfillment of the old covenant by the new covenant. I will refer to this layer as the covenantal layer.
The third and deepest layer of meaning that biblical narratives are telling us is God's plan for restoring all of creation to its intended glory. This plan was not fully revealed to Abraham, for example, who believed and had faith that God would faithfully keep his promises, because revelation and history progress towards Christ at their center. I will refer to this layer as the metanarrative layer.
The immediate layer consists of the many smaller narratives of individuals and groups. These narratives are the material used by the covenantal and metanarrative layers. The immediate layer can be a simple story about a single individual or a compound narrative about a string of people, like that of Abraham, Isaac, Jacob, and Joseph found in Genesis.
To make sense of this layer, there are five features to which we should pay attention, and we need to flesh out each of them in turn.
The first of these features is the narrator. The narrator, although unmentioned in the text, is the person who chooses what to tell us. In biblical narratives, the narrator is 'omniscient,' knowing everything about the story. The narrator does not share all of that or even usually comment on the unfolding story. He often wants to draw you into the story so that you see things for yourself.
The narrator also provides the story's divine point of view. We can learn about God's point of view directly, as when the phrase "the LORD was with Joseph" gets repeated fourteen times in Genesis 39. More often, however, this point of view is disclosed through one of the characters. For example, at the end of the narrative in Genesis 50:20, Joseph tells the reader through his reply to his brothers: "You [brothers] intended to harm me, but God intended it for good to accomplish what is now being done, the saving of many lives."
The second of these features is the scenes. Biblical narratives work through scene changes, not character development per se. In this way, biblical narratives are a lot like movies or plays. The story gets told through a succession of scenes. Each scene stands on its own, but it is the action that happens through successive scenes that tells the story. Consider the way the scenes of Genesis 37 work:
Scene 1: Joseph tells on his brothers who hate him because he is their father's favorite son.
Scenes 2 & 3: Joseph has two outrageously tactless dreams that set up the next scene.
Scene 4: Joseph looks for his brothers but does not find them. This pauses the action to create a dramatic entrance in pivotal Scene 5 and to let us know that the timings of Scene 5 are divinely planned. If we miss the connection between the dramatic pause and divine plan the first time through the story, when we remember the oft-repeated "the LORD was with Joseph" phrase in Genesis 39 or the aforementioned conclusion in 50:20 the next time we read through the story, we will get it then.
Scene 5: This is a composite scene in which Joseph enters. His brothers plot to kill him. The Midianites arrive. Interwoven is Reuben and Judah's guilt and plan to sell him.
Scene 6: Joseph ends up in Egypt as the servant of a well-to-do royal official.
Each scene stands on its own, but they really need to be read in sequential order, all the way through, in order to get the story's plotline.
Because these stories are told through scenes, there are not usually many characters involved. So, each character usually counts, but as in Orwell's Animal Farm, some count for more than others. The protagonist or main character often faces an antagonist who is working against him. The agonists are the benchwarmers who come in and out of the story to interact with these two.
Unlike movies, biblical narratives do not dwell on external appearances. When you do run across a physical description, it is almost always important. Instead, these stories use status, profession, and group membership to flesh out characters. Consequently, character development does not occur through the narrator's descriptions but through the actions and words of the characters themselves, especially the protagonist. Think about how we learn about the character development of our protagonist Joseph in Genesis 37-50. In the opening scene, Joseph is a spoiled brat. By the end of the story, he is wise, faithful, humble, and loving. We hear that from his words and see that from his actions, not the narrator's descriptions.
Second, characters are often presented in parallel or by contrast. When they are in parallel, one is usually a reenactment or fulfillment of the other. Instances like John the Baptist reenacting Elijah tell us a lot at the covenantal layer of meaning. You can see a great example of this if you read the first two chapters of 1 Samuel and then the first two chapters of Luke: Hannah is reenacted by Mary.
More often, however, biblical characters are contrasted with each other. Sometimes this happens by contrasting one group with another group. So, Joseph is contrasted with his brothers right at the start of the story. Then, the character development of both Joseph and Judah draws them closer together by the end of the story.
The fourth feature is the dialogue, because that is where characterization happens. There are three features of dialogue to keep in mind. First, the first chunk of dialogue is often the most important. Consider the opening dialogue of Genesis 37 again. Protagonist Joseph arrogantly and tactlessly tells his dream to his brothers and father. His antagonist brothers set the plot in motion with their hate. Agonist Jacob "kept the matter in mind," which is a frequent narrative clue from the agonist to the reader to do the same. Second, one dialogue is often contrasted with another to get us to pay attention to the difference. Third, important dialogue is emphasized by the storyteller using repetition or long monologues. Given this, resist the temptation to skim dialogue repetitions.
The fifth feature is the repetitive structure of the narrative. These stories were initially not written but told. So, they are designed for a hearer to get meaning from them, which requires repetition. Key words and phrases are repeated as when "the LORD was with Joseph" is repeated fourteen times in Genesis 39. It's almost as if Moses was implicitly asking "Can you hear me now?" Figuring out which words are repeated and why is important to getting what these stories are telling us.
One important structural way repetition occurs in these narratives is through inclusion, which means that the story begins and ends in the same way. Joseph's brothers bow to him both at the beginning and ending of the story. A common and distinct form of inclusion in biblical narrative is the chiasm, in which narratives follow the pattern A B C B A. Another common way this happens is through the use of foreshadowing, in which a brief mention is initially made that is fleshed out later on. Foreshadowing is often used in the covenantal layer to tell the story. Picking up on foreshadowing usually requires multiple subsequent readings of the Bible because the detailed fleshing out usually requires you to remember something you would have easily forgotten from before.
So when we want to understand the immediate layer of a biblical narrative, we should ask questions about and take note of the following things.
|
Homeowner's Guide to Earthquake Hazards in Georgia
By, Leland Timothy Long
School of Earth and Atmospheric Sciences
Georgia Institute of Technology
Atlanta, Georgia 30332-0340
When earthquake hazards are discussed, Georgia is not the first state to be mentioned. Earthquakes in Georgia are rare compared to the long history of damaging earthquakes associated with California's active San Andreas Fault zone. Movements along active faults like the San Andreas explain 85% of the earthquakes in the world. The rest are scattered over areas like Georgia that lack clearly defined active faults. These scattered earthquakes in Georgia and the eastern United States have caused significant damage and can be an important consideration for homeowners.
Because earthquakes are less frequent in the eastern United States than in California, we are not constantly reminded of our seismic hazard by frequent small earthquakes. Nevertheless, the historical record of earthquakes in the southeastern United States and Georgia (figure 1) makes it clear that earthquakes and their associated seismic hazards exist. Damages from eastern United States earthquakes are largely forgotten because the last great earthquake was over 100 years ago. The 1886 Charleston, S.C., earthquake killed nearly 60 people and devastated the city. Also, some seismologists argue that while earthquakes in the eastern United States are less frequent, the large earthquakes cause damage over much larger areas and would affect more people than earthquakes of similar size in the western United States. In Georgia, calculations of seismic hazard indicate that large earthquakes outside our borders are as likely to cause damage in Georgia as earthquakes of any size occurring within Georgia.
The map of Georgia shows the location of all earthquakes that are known to have occurred within 25 km (15 mi.) of Georgia. The earthquakes across northwestern Georgia are part of the Southeastern Tennessee Seismic Zone (STSZ) that extends northeast through Knoxville. The STSZ lies primarily in the Valley and Ridge Province of the Southern Appalachians. Earthquakes in the STSZ are at depths of 14 ± 10 km and do not appear to be correlated with surface geology or near-surface faults. On the basis of seismicity, the STSZ is second only to the New Madrid Seismic Zone in the eastern United States for its size and rate of earthquake production. Earthquakes in the Blue Ridge and Piedmont Provinces occur in clusters with notable concentrations near reservoirs such as Lake Sinclair and Clarks Hill. They may occur any place that has unweathered and slightly fractured granitic rock near the surface. The small Piedmont earthquakes are unique in that they represent movements along shallow fractures that have been weakened, perhaps by penetrating fluids or weathering. Few earthquakes are known to occur in the Coastal Plain Province of South Georgia.
Small earthquakes are often no more alarming than the many vibrations that originate near or in a home, such as thunder, heavy trucks nearby, sonic booms, objects falling, and unbalanced washing machines. These all have unique characteristics that we learn to recognize through experience. Likewise, we can identify the small magnitude 2.0 or less earthquakes and explosions of a similar size by their unique characteristics. They usually start with a jolt, build rapidly in amplitude within a couple of seconds and then decay. The total felt duration of the typical small Georgia earthquake is usually less than 10 seconds and it sounds like a muffled dynamite explosion. Also, the rattling of loose objects may generate earthquake sounds. The typical small earthquake will be felt by many within 15km (10mi.) so that consultation with neighbors should eliminate most non earthquake sources within the home. The events in northern Georgia are 8 to 15 km deep and do not shake the surface as hard as the Piedmont earthquakes that are typically within 2.0 km of the surface. In the Piedmont, the earthquakes may occur as part of a 2 to 4 month long swarm, such as in the Norris Lake Community swarm of 1993. The time during a swarm is a good opportunity to eliminate hazards that could cause damage and injury during an earthquake, particularly because earthquake swarms are often followed by isolated events as large as the largest event in the swarm.
If homeowners live near a quarry, they may be very familiar with vibrations from blasts that feel very much like small earthquakes. If the homeowner feels vibrations that seem unusually large or occur at night, the homeowner could be experiencing earthquakes. If unusually large vibrations are from quarry blasts, usually occurring during the lunch hour or in the late afternoon during the week, the homeowner may contact the State Fire Marshal to obtain help in determining if the vibrations exceed the legal limit.
Moderate and larger earthquakes are usually immediately identified because they are both recorded by regional networks and felt by people who have previously experienced an earthquake. Most transplants from California identify these earthquakes immediately.
Moderate earthquakes are those with magnitudes of about 4.0. These will be noticed by almost everyone in the epicentral area and will be felt as far away as 100 miles. The news media will usually be quick to distribute information on the identity and size of these earthquakes. Some weak structures may experience minor damage, such as cracked plaster and items knocked off shelves. In rare instances there may be some minor structural damage, such as cracks in cement blocks or brick facing falling off buildings. The Piedmont and northwestern Georgia each experience about one magnitude 4.0 event every 10 years.
Large earthquakes are those with magnitudes of about 5.5. These will cause widespread minor damage in well-built structures. A few structures will suffer major damage and could require examination for safety, but these will be rare. Life-threatening situations would be restricted to the immediate epicentral zone and to weak structures that are located on poor foundation material. These events will be felt from 100 to 500 miles away. As with moderate earthquakes, the news media will distribute information about the felt area and damage. In the eastern United States, water heaters and furnaces are not routinely protected against being knocked over, and these could start fires.
Damaging and great earthquakes are those with magnitude 6.0 and larger. The Charleston, 1886, and New Madrid, 1811-12, earthquakes are of this size and have caused as much damage in Georgia as the earthquakes occurring within Georgia. The probability of a magnitude 6.0 or larger earthquake somewhere in the eastern United States is about 61% in the next 25 years. The eastern United States as a whole has experienced a magnitude 7.0 about once every hundred years, but there is only about one chance in 1,000 per year of a magnitude 7.0 in Georgia. These distant earthquakes provide the greatest threat. If such an event occurred in a neighboring state, the damage in Georgia would be similar to the damage caused by the 1886 Charleston earthquake. Near the epicenter the damage would be like that experienced in Charleston in 1886 or in the "World Series" California earthquake on October 17, 1989. The Charleston earthquake nearly devastated the city and killed about 60 people. One should expect extensive damage within a radius of 10 to 30 miles of the epicenter. Outside this zone of major damage, to a distance of 150 miles, the damage will be moderate. Buildings may be damaged sufficiently to collapse in the larger aftershocks. Many people will be displaced from their homes. Transportation may be interrupted by broken rail lines and bridges. Chimneys will be knocked over, windows broken, and plaster cracked. The four New Madrid earthquakes of 1811 and 1812 were the largest of their type in the world. The Mississippi River changed its course, the land surface sank to form new lakes, and the violent shaking snapped off trees. At the time settlements were sparse and limited to sturdy log cabins. A similar event today, perhaps in southeastern Tennessee, could generate extensive damage throughout Georgia.
Seismic monitoring in the United States is coordinated by the United States Geological Survey (http://gldage.cr.usgs.gov). The USGS maintains station GOGA. In Georgia, the Georgia Institute of Technology maintains a small network including station ATL just south of Atlanta. The University of Tennessee (http://tanasi.gg.utk.edu), University of Florida, University of North Carolina at Chapel Hill, and the University of South Carolina maintain seismic stations surrounding Georgia. Also, the Center for Earthquake Research and Information at Memphis State University (http://www.ceri.memphis.edu/) maintains a southern Appalachian Regional Network. Worldwide earthquakes are posted as they occur at http://www.iris.edu. There is also a growing number of amateur seismologists and high schools that maintain working seismographs. High schools sponsored by Georgia Tech that will soon be recording earthquake data in 1999 are shown in figure 3 as a "*" along with existing seismic stations.
Our current understanding of the earthquake process is limited and routine use of scientific data to predict the location, time and magnitude of an earthquake is unlikely in the near future. Earthquake hazards instead are estimated from the statistical properties of earthquakes, which are assumed to remain constant through time. Unfortunately, the length of our historical record is too short to give us confidence that every seismic zone has been identified or that the potential of each seismic zone to generate a large earthquake is known. In fact, most of the major historical earthquakes in the eastern United States occurred in areas that were not known for prior historical seismic activity. According to recent studies, significant seismic zones like that at Charleston, S.C. often show evidence of major earthquakes over periods of hundreds or thousands of years. Hence, the seismic zones that are active today have an increased probability of being the location of the next major earthquake; but the next earthquake could surprise us and occur outside of these currently active zones.
Although there is some disagreement and uncertainty among seismologists on how to estimate future seismicity, most agree that statistical estimates of historical seismicity provide the best measure of seismic hazard available today. Consequently, the historical seismicity was used as the basis for the new hazard maps being prepared by the United States Geological Survey. In these maps, earthquake hazards are expressed in terms of the level of vibration that has a given probability of being experienced during some time period. The example in Figure 4 defines the level of vibration (in percentage of the acceleration of gravity) that has a 10 percent probability of occurring in 50 years. A 10 percent probability in 50 years is equivalent to an average recurrence of about once every 475 years. These USGS ground-shaking maps will be used by the Building Seismic Safety Council in its revisions to the seismic risk maps that will be adapted for use in State and local building codes. The hazard indicated by these maps is greatest in northwest Georgia, decreases in the Piedmont, and is minimal in the Coastal Plain. Except for the 1976 Reidsville, GA, event, earthquakes in the Coastal Plain are infrequent and limited to a few questionable historical accounts. The hazard is greater toward South Carolina, showing the influence of the continuing activity near Charleston, South Carolina.
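To see the arithmetic behind that equivalence (a sketch assuming the same, independent chance of exceedance each year, the usual simplification): if T is the average return period in years, then 1 - (1 - 1/T)^50 = 0.10, which gives T = 1/(1 - 0.90^(1/50)), or approximately 475 years.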
The best insurance against earthquake damage is to eliminate those hazards in the home that could cause significant damage during an earthquake. An earthquake rider on your home insurance policy could reduce the impact of financial losses in an earthquake. To be effective in Georgia, earthquake riders should protect the homeowner against the most likely damage expected from a small or distant earthquake, such as the failure of brick facing experienced by a homeowner in a small earthquake near Lake Sinclair. These riders vary in price depending on the deductible and company pricing practices. Clearly, a high deductible would protect only against the very rare large earthquake.
When the earth shakes in an earthquake, falling objects can cause injury or start a fire. Many of the hazards associated with falling objects can be eliminated or minimized now, before an earthquake strikes. The following checklist can help earthquake-proof your home.
During an earthquake "Duck, Cover and Hold"
Steps to follow after a large earthquake
Bolt, Bruce A. (1993). Earthquakes. W.H. Freeman and Company, New York, 331 p.
Building Seismic Safety Council (1995). A Nontechnical Explanation of the 1994 NEHRP Recommended Provisions. Federal Emergency Management Agency Publication 99, 82 p.
Frankel, A. (1995). Mapping seismic hazard in the central and eastern United States. Seismological Research Letters, 66(4), 8-22.
Slemmons, D.B., Engdahl, E.R., Blackwell, D., and Schwartz, D. (1991). Neotectonics of North America, Decade Map Volume. The Geological Society of America, Boulder, Colorado, 493 p.
|
A randomized controlled trial (or randomised control trial; RCT) is a type of scientific (often medical) experiment in which the people being studied are randomly allocated to one or other of the different treatments under study.
Usually, the randomized trial is the appropriate study design to verify efficacy, because it provides greater control of possible confounding variables. Effectiveness is the capacity to reproduce and obtain the same results within the medical community, using different centers and professionals who have distinct degrees of experience. To assess effectiveness, the observational study is generally used, because it selects patients with more heterogeneous characteristics and centers with different expertise.
The RCT is the gold standard for a clinical trial. RCTs are often used to test the efficacy or effectiveness of various types of medical intervention and may provide information about adverse effects, such as drug reactions. Random assignment of intervention is done after subjects have been assessed for eligibility and recruited, but before the intervention to be studied begins.
Experimental study designs can provide the evidence needed to answer pertinent clinical questions. To study the efficacy of a treatment, there needs to be a control group, ideally in the context of a randomized controlled trial (RCT).
Random allocation in real trials is complex, but conceptually, the process is like tossing a coin. After randomization, the two (or more) groups of subjects are followed in exactly the same way, and the only differences between the care they receive, for example, in terms of procedures, tests, outpatient visits, and follow-up calls, should be those intrinsic to the treatments being compared. The most important advantage of proper randomization is that it minimizes allocation bias, balancing both known and unknown prognostic factors, in the assignment of treatments.
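As a toy illustration of that coin-toss idea (real trials use prepared randomization lists, blocking, stratification, and allocation concealment rather than ad hoc code; the subject IDs below are hypothetical):

#include <iostream>
#include <random>
#include <string>
#include <vector>

int main()
{
    // Hypothetical subject IDs; in practice these come from the recruitment log.
    std::vector<std::string> subjects = {"S1", "S2", "S3", "S4", "S5", "S6"};

    std::mt19937 rng(std::random_device{}()); // seeded pseudo-random generator
    std::bernoulli_distribution coin(0.5);    // a fair coin: 1:1 allocation

    for (const auto& id : subjects) {
        bool treatment = coin(rng);           // the conceptual coin toss
        std::cout << id << " -> " << (treatment ? "treatment" : "control") << "\n";
    }
    return 0;
}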
The terms “RCT” and randomized trial are sometimes used synonymously, but the methodologically sound practice is to reserve the “RCT” name only for trials that contain control groups, in which groups receiving the experimental treatment are compared with control groups receiving no treatment (a placebo-controlled study) or a previously tested treatment (a positive-control study). The term “randomized trials” omits mention of controls and can describe studies that compare multiple treatment groups with each other (in the absence of a control group).
Similarly, although the “RCT” name is sometimes expanded as “randomized clinical trial” or “randomized comparative trial”, the methodologically sound practice, to avoid ambiguity in the scientific literature, is to retain “control” in the definition of “RCT” and thus reserve that name only for trials that contain controls. Not all randomized clinical trials are randomized controlled trials (and some of them could never be, in cases where controls would be impractical or unethical to institute). The term randomized controlled clinical trials is a methodologically sound alternate expansion for “RCT” in RCTs that concern clinical research; however, RCTs are also employed in other research areas, including many of the social sciences.
Phase IV: Studies are done after the drug or treatment has been marketed to gather information on the drug's effect in various populations and any side effects associated with long-term use.
Trials registered from 2000 to 2012 were identified on the website clinicaltrials.gov using a range of key words related to neurosurgery. Any trials that were actively recruiting or had unknown status were excluded. Included trials were assessed for whether they were discontinued early on the clinicaltrials.gov database; this included trials identified as withdrawn, suspended, or terminated in the database. For included trials, a range of parameters was identified including the subspecialty, primary country, study start date, type of intervention, number of centers, and funding status. Subsequently, a systematic search for published peer-reviewed articles was undertaken. For trials that were discontinued early or were found to be unpublished, principal investigators were sent a querying email.
Sixty-four neurosurgical trials fulfilled our inclusion criteria. Of these 64, 26.6% were discontinued early, with slow or insufficient recruitment cited as the major reason (57%). Of the 47 completed trials, 14 (30%) remained unpublished. Discontinued trials showed a statistically significant higher chance of remaining unpublished (88%) compared with completed trials (p = 0.0002). Industry-funded trials had a higher discontinuation rate (31%) compared with non-industry-funded trials (23%), but this result did not reach significance (p = 0.57). Reporting of primary outcome measures was complete in 20 (61%) of 33 trials. For secondary outcome measures, complete reporting occurred in only 11 (33.3%) of 33.
More than a quarter (26.6%) of neurosurgical RCTs are discontinued early, and almost a third of those that are completed remain unpublished. This result highlights a significant waste of financial resources and clinical data.
|
Cyclopædia of Political Science, Political Economy, and the Political History of the United States
EMANCIPATION, Political and Religious. To emancipate a class of persons is to deliver them from the inferior condition in which they were held and give them the equal rights of citizens. Equality is a natural right. Civil society was established for the purpose of acquiring and preserving it, by putting an end to the abuses of force, the cause of inequality. It is, therefore, a violation of the principle on which society is based to establish or recognize in a state different orders of persons, some of whom enjoy the full rights of citizenship, while others are reduced to a state of subjection. So long as they bear the same burdens, and perform the same duties, all should enjoy the same rights and political advantages.
—This truth is not a new one in the world. Christianity laid down the principle that every man, by the fact alone of his being a man, had the same dignity and the same right to justice and liberty. But how many ages were needed for the ideas introduced into the world by Christianity to germinate and bear fruit! For fully nineteen centuries, difference of religion, of class, color and nationality, continued to serve as pretexts to oppress and deprive of legal protection a more or less considerable part of the population of every country. Freedom of the individual, of conscience and civil equality, are of very recent date.
—It is not a hundred years since Rousseau could justly reproach Frenchmen with assuming the title of citizens without even knowing the meaning of the term, and remind them that the name of subjects was far better suited to most of them. In England the Catholics have enjoyed civil equality only since 1829. The Jews won the right to sit in parliament only in 1859. In France the traces of hate and prejudice of which they were victims have disappeared only since 1830, and the emancipation of Protestants in that country dates only from the revolution. There were serfs in France in 1789, and it required a second or rather a third revolution (1848) to solve the question of slavery. In another order, much time was required to put the principles of liberty and social equality into practice with regard to French colonies, for France admitted them to the enjoyment of political rights only in 1870, and she still imposes restrictions on their commerce (though less than formerly), the results of which are injurious to them as they are of doubtful utility to the mother country.—The causes of civil inequality lie in the ignorance or misunderstanding of the natural rights of man. It was when it might be said that the human race had found its true title deeds that these causes lost their influence. The honor of this belongs to the philosophy of the eighteenth century. By preparing the triumph of philosophic reason over religious fanaticism and the final destruction of the feudal system, it became the most active agent of emancipation.
—But, as has been frequently remarked, the ideals of the eighteenth century have been surpassed in our time. As always happens, beyond the progress made, there were still other kinds of progress whose possibility was not at first suspected. Thus Voltaire did not even dream of putting Protestants in France, and still less the Jews, on the same footing with Catholics. He admitted that public offices and employments might be refused them. He found in this monstrous inequality merely a necessary fact, a condition inherent in the social state. In France, in his time, non-Catholics themselves did not dare to lay claim to a share in political life. When the constituent assembly declared, Aug. 21, 1789, that all citizens, being equal in its eyes, were equally eligible to all public places, employments and dignities, non-Catholics were implicitly excluded from the equality thus proclaimed, so that it was necessary, a few months later, to issue a special decree providing that non-Catholics were eligible for all civil and military employments as well as other citizens. The preamble announced, in addition, that the assembly did not decide anything relative to the Jews, the consideration of whose case it reserved. (Decree of Dec. 24, 1789.) So that in laying down the principle of absolute equality, it limited its action to freeing the non-Catholics from persecution.
—This inconsistency is explained easily enough. The chief object of the philosophical controversies of the eighteenth century had been freedom of conscience; but the question had not yet been considered from the purely political point of view. There still existed a state religion in France, and the majority of the constituent assembly wished to maintain it. But the existence of a dominant religion naturally excludes dissidents from offices and public employments.
—The French revolution, which, more than anything else, had the unity of the country in view, was not slow in comprehending that this unity, the source of national power, could not be effectually acquired unless civil equality were granted to all; and by according full and complete equality to dissidents it performed not merely an act of justice, but took a wise political measure.
—Historians have told us what the revocation of the edict of Nantes cost France; but no one, so far as we know, has calculated the material and moral gain to regenerated France, from its proclaiming the equality of religions before the law.
—English statesmen were not mistaken here. The duke of Wellington and the tories associated with him in power in 1829, were not inclined to yield exclusively to the influence of philosophical ideas. When, notwithstanding their antecedents and their personal dislikes, they decided to propose Catholic emancipation, it was because they felt that this was the price of the moral unity of Great Britain, that the sentiment of common liberty and civil equality was the only one in which Ireland could sympathize with England, and that the agitation and continual strife would cease only through one of two means: the extermination or emancipation of Catholics. Subsequent events have shown that they were right. England, freed from one cause of internal dissension, regained a liberty of action which contributed to insure her preponderance in Europe, during the years which followed 1830.
—From this experiment and many others we may deduce the principle, that a nation grows in power, in activity, in fruitfulness, in proportion as the same law is in force for all in the broadest and most liberal manner. In France national power has increased in direct ratio to the progress of civil equality; the history of its growth is identical with that of the emancipation of the third estate and the abolition of serfdom. Here, again, humanity and policy were at one. Humanity demonstrated that serfdom—that is to say, to have men attached to the soil, identified with it, looked upon as feudal property, unable to dispose of their goods, unable to leave to their own children the fruit of their labor—was unworthy of a generous nation; and policy showed "that such arrangements are only fitted to enfeeble industry and deprive society of the effects of that energy in labour which the feeling of property is alone capable of inspiring."—These motives, by which Turgot, in 1779, justified the abolition of serfdom in the domains of the king, are the same which were destined to lead to the emancipation of slaves. England had preceded France in this work of emancipation: after Aug. 1, 1838, there were no slaves in the English Antilles, whereas it was only in 1848 that the provisional government in France decreed immediate and complete emancipation. Every one, beyond a doubt, was agreed on the principle; but on the eve of the revolution of February, the idea of gradual abolition still prevailed, and unconditional abolitionists, who placed humanity and justice before all things, were in a minority.
—EMANCIPATION OF CATHOLICS. In Great Britain. The term Catholic emancipation was given to the act by which the Catholics of the United Kingdom were freed from the political disabilities which excluded them from parliament and all high offices of state; but this act itself was only the completion and consequence of a series of measures intended to restore to the Catholics of England and Ireland the rights of property and individual liberty of which they had been deprived in consequence of the reformation in Great Britain, or rather of the struggles which followed it. Henry VIII., when he separated from the Catholic church, retained its dogmas and discipline. It was only under his successor, Edward VI., that the Anglican church decided in favor of the reformation, which finally triumphed during the reign of Elizabeth after a bloody reaction under Queen Mary. From this period the persecution of Catholics became regular, and assumed a legal form; the basis of all the penal laws which followed is found in the acts of uniformity and supremacy. The act of uniformity prohibited the use of any liturgy but that of the official church, under pain of confiscation for the first offense, imprisonment for a year for the second, and imprisonment for life for the third. A fine of one shilling was imposed for absence from the state church on Sundays and holidays. By the act of supremacy every beneficed clergyman and every layman in the employment of the crown was obliged to abjure the spiritual sovereignty of the pope and recognize that of the queen, under penalty of high treason for a third offense.
—These penal laws soon became more severe. In 1593 the penalty of imprisonment was pronounced against all persons above the age of sixteen who should fail for one month to appear at church, unless they made an open act of submission and a declaration of uniformity. Catholics filled the prisons. They were ruined by fines or left the country. There were hunters of Catholics who tracked the fugitives.
—Under James I. new statutes deprived the Catholics of the control and education of their children; but while parliament imposed these penalties, the king, personally favorable to Catholics, procured them some tranquillity. This condition of relative quiet continued under Charles I. and Cromwell, and the penal laws were not enforced till the restoration of Charles II. Under his reign, and notwithstanding his sympathies for the Catholics, the test act was passed. It declared all persons incapacitated to fill any public office who refused to renounce the doctrine of transubstantiation (1673).
—In 1679 the Catholics, already excluded from the house of commons, were also excluded from the house of lords. Finally, after the revolution of 1688, though William of Orange was disposed to toleration, Anglican fanaticism ruled without control. Priests were forbidden, under pain of imprisonment for life, to celebrate mass or exercise their functions in England, unless in the house of an ambassador. A priest in countries subject to the crown of England was considered guilty of high treason unless he had taken the oaths of supremacy and uniformity. All persons furnishing him an asylum were guilty of felony, without benefit of clergy.
—Laymen professing the Catholic religion and refusing to assist at the services of the established church, incurred, besides the pains and penalties mentioned above, the loss of their right of exercising any employment, of possessing landed property after the age of eighteen years, and of having arms in their houses. They were forbidden to come within eighteen miles of London, or to go farther than five miles from home without permission. Women might be detained in prison if their husbands did not ransom them; they lost a portion of their dowry. A Catholic could not bring a case at law; and a wife could neither be the heir nor the testamentary executor of her husband. Marriages, burials and baptisms could be officiated at only by a clergyman of the official church.—The situation of the Catholics in Ireland was still more frightful. There also the acts of uniformity and supremacy had been forced on the people by the prison and the scaffold. But four-fifths of the population were and wished to remain Catholic. The struggle was prolonged into a war of extermination. Defeated at the battle of the Boyne (1690), the Catholics signed the treaty of Limerick. It was agreed that Roman Catholics should retain the exercise of their religion as under the reign of Charles II., and the king agreed to obtain the most ample guarantees for them. These were refused by parliament. The Anglican bishop of Meath justified this breach of faith by proving, in a sermon preached before the lords justices, that Protestants were not bound to observe the peace concluded with the papists.
—A new parliament, convened in 1695, undertook as its first work to ascertain the condition of the penal laws. A committee appointed for this purpose reported that the principal ones were: 1st, an act requiring the oath of supremacy for admission to all employments; 2d, an act imposing fines for absence from the services of the established church; 3d, an act authorizing the chancellor to appoint a guardian to the child of every Catholic; 4th, an act prohibiting instruction to Catholics. This legislation served as a point of departure for other acts which expelled Catholic priests and prelates, deprived parents of the right to instruct their own children except through Protestant masters, ordered the general disarmament of Catholics, excluded them from public employments, and repealed the laws which confirmed them in the enjoyment of their property. All this was done at a time when England received the Protestants driven from France, and conferred on them the rights of citizenship. At this period there were three or four millions of Irish Catholics, but, as far as the law was concerned, they existed no longer. It did not recognize that there were, in Ireland, any citizens but Protestants. Things thus continued during the first two-thirds of the eighteenth century, so that the earliest events which were the incentive to emancipation were purely political. These events were a consequence of the ideas of independence and national interests common to all the inhabitants of Ireland, propagated by the Protestant, Swift, and before him by Molyneux.
—In 1773 the Catholics esteemed as a great favor an act which, without changing the penal laws in any way, permitted them to take a new oath as a pledge of their loyalty. This act implicitly recognized their existence. About the same time a Catholic committee was formed.
—The spirit of the time had changed. George III., in his zeal for Anglicanism, upheld the penal laws, but parliament practiced toleration in spite of the king, as at a former time it had been intolerant in spite of William III. In 1778 it was provided, on motion of Sir George Saville, 1st, that Catholic priests discovered performing their functions should no longer be subject to the penalties of high treason; 2d, that a son, by accepting the Protestant religion, should no longer be able to despoil his father; 3d, that the power of acquiring property by purchase, inheritance or gift should be restored to Catholics. Nevertheless, at the end of the eighteenth century these just measures excited among English Protestants the most formidable insurrection. On May 30, 1780, 60,000 men, under the leadership of Lord George Gordon, a half-crazy fanatic, besieged the houses of parliament; repulsed by the military, they wrecked the houses of the principal members of parliament, attacked and burnt the prisons, assassinated Catholics, and were the cause of a frightful conflagration in the city. When order was re-established, parliament limited itself to furnishing some explanations intended to satisfy public opinion on the interests of the Protestant religion. Things remained as they were before the insurrection.
—The example given in England was followed in Ireland. In 1778 a bill was passed which permitted Catholics to teach and exercise guardianship over their own children. The privilege of living in Limerick or Galway was restored to them. The prohibition of owning a horse worth more than five pounds sterling was done away with. From 1790 to 1793 several bills in succession permitted Catholics to engage in the profession of the law, to receive apprentices, to occupy positions in the army as high as colonel inclusive, to have arms on condition of possessing property of a certain value, to be members of a grand jury and justices of the peace, to hold subordinate civil positions, and, which was of great value, to vote at elections. These acts did away with the obligation of attending Protestant service, even authorized Catholic priests, under certain restrictions, to celebrate mass, and removed the remnant of the restraints on acquiring and holding property. The benefit of these laws was acquired by taking an oath, renouncing allegiance to the pretender, and disavowing the doctrine that contracts with heretics may be broken, and that princes excommunicated by the see of Rome may be deposed and put to death.
—When the pact of parliamentary union was established in 1800 between Ireland and England, the latter promised, as a compensation, to abolish all remaining political disabilities. George III. refused to keep the promise of his minister, and William Pitt resigned his office. Thus deceived, Ireland had the courage to employ only legal means to assert her rights. Under the direction of John Keogh, and soon after of O'Connell, the Catholic association was able to arouse and support one of those great movements of public opinion which, in enlightened and free countries, prepare and necessitate the regular change of institutions. A continually growing minority in parliament were in favor of emancipation. It might have been believed, in 1813, that the cause was about to triumph. The bigotry of George III. had become a characteristic folly, and his successor showed more generous tendencies.—The condition of the Catholics of England was improved in the same degree as that of their co-religionists in Ireland. Instead of following, in all its details, the gradual abolition of the restrictions and penalties imposed on them, we shall describe the condition of both on the eve of Catholic emancipation. A Catholic could sit neither in the house of lords nor in the house of commons; he was excluded from every judicial office; the higher grades of service in the army and navy were opened to him by law only since 1816; he had no voice in the vestries, though these assemblies had the right of imposing heavy taxes; he could be neither governor nor director of a bank, nor occupy a number of other honorary or lucrative offices. If a Catholic in Ireland did not possess a freehold of a hundred pounds a year, or personal property to the amount of a thousand pounds, he had not the right to keep arms; he was subject to domiciliary visits, and in certain cases to imprisonment, to the pillory or to flogging; he was excluded from certain occupations, such as that of gamekeeper and gunsmith. If a Catholic died without having appointed a guardian to his children, the chancellor had the right of setting aside the nearest relatives and appointing a Protestant stranger. If a Catholic corresponded with the pope, he became guilty of high treason. Catholic endowments, either charitable or benevolent, were expressly forbidden. A Catholic priest who, even by mistake, should marry a Catholic and a Protestant, incurred capital punishment. A Catholic priest was liable to imprisonment if he refused to make known the secrets of the confessional before a court of justice. Finally, to retain their property, to practice their religion, in one word, to profit by all the favorable acts passed since 1778, Catholics were obliged to take the oath of fidelity and renounce the temporal authority of the pope. This résumé does not include certain regulations more vexatious than important, such as the prohibition of pilgrimages, the duty imposed on magistrates of destroying Catholic crosses, paintings and inscriptions.
—Such was the legal position of four or five millions of citizens. We have stated, in the introduction to this article, how the duke of Wellington's government was led to put an end to this state of affairs. On March 5, 1829, Robert Peel laid before the house of commons the emancipation bill, entitled "An act for the relief of his majesty's Roman Catholic subjects." Neither the rage of the Protestant party of 1780, nor the enthusiasm of the French revolution, were witnessed at the time; the measure was proposed and voted as a political expedient. The danger of internal dissension, the necessity of decreasing the influence of the priests, as well as of dissolving the Catholic association by granting what it asked for, and the impossibility of continuing the struggle longer, were the motives that the ministry brought to bear. The bill passed the house of lords by 212 votes against 112, in spite of the opposition of certain bishops, and was finally adopted April 13, 1829. The act or bill of emancipation (Act 10, George IV., chap. vii.) formally abolished all preceding laws, not, however, without certain reservations. Thus, every Catholic could be a member of the house of lords or commons, on condition of his taking an oath of fidelity to the king and the Protestant dynasty, instead of the oath of supremacy and abjuration; of his declaring that he did not consider it an article of faith that princes excommunicated by the pope might be deposed and put to death by their subjects; of his recognizing that the pope had neither civil power nor jurisdiction in the kingdom, and promising to maintain the established church in its privileges and property. By taking the same oath Catholics were allowed to vote at elections for the house of commons, and were eligible to civil and military employments, with the exception of the office of lord chancellor of England or of Ireland, lord lieutenant of Ireland or high commissary to the general assembly of the church of Scotland. Roman Catholics might become members of lay corporations, on condition of taking the above oath and such other oaths as should be required of the members of these corporations, but without being able, while sitting in the same corporations, to cast a vote on questions of presenting an ecclesiastical benefice. No particular oath was required to enable Roman Catholics to possess personal property or real estate, nor for their admission to the army or the navy. The bill at the same time contained a clause directed against O'Connell, elected from the county of Clare, who generously sacrificed his interest to the success of the common cause. The property qualification for voters was raised, in Ireland, from forty shillings to ten pounds, which did not, however, prevent the great agitator from entering parliament.
—The emancipation act was justly considered a great boon. The London Times remarked that hitherto the union of the three nations was merely nominal; there could be no harmony between the serf and his master, between the suspicious oppressor and his victim. Catholic emancipation was a victory whose consequences would be so many benefits for the remotest generation, for it brought peace and happiness to Ireland and was an element of strength and dignity for Great Britain. Experience has confirmed all this.—In Other Countries. We could not well think of reproaching the pope, when still in possession of the temporal power, with depriving non-Catholics of all political and even civil rights. Civil equality was not compatible with the nature of his government. But we are astonished that in liberal Holland Catholics were so long excluded systematically from the employ of the government in spite of the law of 1798 which emancipated them; that in Sweden, a country where Protestantism is dominant, that is to say, where the right of each one to account only to himself for his faith is recognized, dissidents are still excluded from public offices, and citizens professing the state religion are forbidden under penalty of perpetual banishment to embrace another religion.
—It is remarkable that the pretext for the first invasion of Poland in 1768 was the emancipation of the Ruthenians of the Greek rite, whom the Catholics held in an inferior political condition. At present, Russia is endeavoring to impose on Polish Catholics the orthodox religion in order to attach them to the throne of the czar and make them forget their own nationality; but we know that every step taken in such a direction leads away from the desired end. After similar acts of violence committed in France against the Protestants, the only result was to obtain apparent conversions and make the two nations almost irreconcilable.
—In Russia proper, atrocious persecutions were carried on from 1832 to 1855, to favor the progress of the dominant religion. According to Dupretz (Revue des Deux Mondes, 1850, vol. i.) more than five millions of United Greeks, or Greek Catholics, were obliged to join the Russian church. In giving an account of the means employed to effect this end we do not find measures tending to abolish civil equality between the dissidents and the orthodox, and this is easily understood in a country in which the whole nation was subject to the machinery and the external forms of a military government. Recourse was had, therefore, to other means; for instance, a ukase of Jan. 2, 1839, granted complete amnesty to persons condemned for robbery or murder, to the knout, to mines, or to the galleys, as a reward for conversion. Another ukase of March 21, 1840, decreed that every person who should leave the orthodox religion would lose the administration of his own estates, that he could not hold orthodox serfs in his service, etc. The measures decreed under Nicholas I. had nothing of the generous ideas of emancipation which the Russian government applied under Alexander II. to forty millions of his subjects in the question of serfdom. Neither did they in any way resemble the toleration professed by Catherine II., which Voltaire, with a complaisance for which he has been blamed, praised too highly. The illustrious philosopher was scarcely more in the right when, to satirize the morals of Europe, he delighted in lauding the followers of Confucius. Better informed in our time, he would, no doubt, have applauded article thirteen of the treaty of peace and alliance concluded at Pekin in 1860, which abolished all penalties and disqualifications affecting Christians in China. But perhaps he would have been less pleased with the clause binding the Chinese government to accord missionaries effectual protection, a protection which appears to be of another kind than that guaranteed to travelers and merchants. He would at least have observed that the conditions of just reciprocity would impose on the French government the obligation of extending a special and effectual protection to bonzes who should try to convert us to the most ancient religion of Asia. It is well to emancipate the members of Christian communities, but for them, as for all others, equality should be the rule.
—EMANCIPATION OF PROTESTANTS. In the general reaction in France which followed the death of Louis XIV., the regent thought of recalling the Huguenots. This inaccurate expression, frequently employed in the eighteenth century, was used to mean the recalling of Protestant refugees to France, and the giving of a civil status to those who had remained in France. Saint-Simon boasted of having made the duke of Orleans abandon this project; he admitted, however, that the legislation of Louis XIV., so harsh toward Protestants, was confused and contradictory, and caused the government frequent embarrassments, especially in questions of marriage and wills. The traditions of the administration had more weight with the regent than the opinion of Saint-Simon. These traditions were represented and upheld especially by a formerly Protestant family, that of Phélypeaux, which during almost two whole centuries furnished secretaries of state, under the names of Pontchartrain, Saint-Florentin, Maurepas, La Vrillière. The count of Saint-Florentin, in particular, during a ministry of fifty-two years devoted himself with a rare degree of bureaucratic stubbornness to keeping the Protestants under the yoke.
—The honor of having given the first impulse in France belongs to Voltaire. Immediately after a renewal of persecution in the city of Toulouse, noted for the tortures of the pastor Rochette, of the three brothers Grenier, accused of wishing to liberate him, and of Jean Calas, Voltaire called attention to the condition of the Protestants of France, by the success of his efforts, continued during three years, to reverse the decision in the case of Calas, and during nine years in that of Sirven. With the aid of the Duke de Choiseul he endeavored to found at Versoix a manufacturing town whose clock making should rival that of Geneva, and where Protestant workmen should not only enjoy civil rights but even freedom of worship. Voltaire encouraged with all his power writers of his own school and certain tolerant magistrates to publish mémoires on the civil condition of the Protestants, and particularly on the necessity of recognizing their marriages. Ripert de Monclar, Turgot, Target, Condorcet, Gilbert de Voisins, Robert de Saint-Vincent, and especially Malesherbes, pleaded the cause of tolerance. Several lawsuits added to the effects of the mémoires. By the law every marriage celebrated according to the reformed rite was null and void. The children born of such a marriage were illegitimate and incapable of inheriting, so that any collateral relation, no matter how distant, might lay claim to the estate of a Protestant provided the claimant was a Catholic or became one. At the end of a century this odious system had introduced inextricable confusion into the situation of 300,000 families, who were without any civil status. The government thus found itself more and more embarrassed by such a state of things. The advent to the ministry of certain tolerant men like Choiseul, and, later, Castries, Breteuil, and especially Turgot and Malesherbes, was calculated to improve the condition of things. Louis XVI. desired to put an end to the disorder by a spirit of kindness and justice. Turgot states that at the moment of his consecration the new king, instead of pronouncing the words obliging him to exterminate the heretics, muttered some confused words, which accords very well with the mixture of generous intentions and weakness which characterized this unfortunate prince.
—The end of persecution was brought about by a more resolute man whose name marks the advent of modern society in France. Lafayette, who had become intimately acquainted in America with Protestantism and the practice of religious liberty, wrote to Washington on May 11, 1785, that he was resolved to take up the cause of his Protestant countrymen, and his illustrious friend encouraged him in this design worthy of them both. Lafayette undertook to examine in person the principal centres of the Protestant population. For this purpose he went to Nimes and attended the Protestant worship in the open air, conducted by Rabaut-Saint-Etienne. After the service Lafayette embraced the pastor and engaged him to come to Paris to labor in obtaining civil liberty for his co-religionists. This was the beginning of the political career of Rabaut-Saint-Etienne. His expenses were paid by a subscription made by the Protestant churches of Nimes, Montpellier, Marseilles and Bordeaux. He came to Paris under the pretext of publishing his Lettres à Bailly sur l'histoire primitive de la Grèce. Introduced by Lafayette into Parisian society and to the ministers, the future president of the national assembly was received with curiosity and interest. It was something to get a close view of a man whose profession, which he openly acknowledged, condemned him to death, and who, according to the expression of the time, was a candidate for martyrdom. With Malesherbes, Rabaut prepared the way for emancipation. This minister succeeded in gaining over public opinion through a work which he had written by an academician, Rulhières, more celebrated then than he is now: two volumes of Eclaircissements historiques sur les causes de la révocation de l'édit de Nantes, tirés des archives du gouvernement.—The councilors Bretignière and Robert de Saint-Vincent had already laid before the parliament of Paris propositions favoring the Protestants. May 23, 1787, an assembly of notables, of which Lafayette was a member, presided over by Count d'Artois (afterward Charles X.), expressed a unanimous wish to restore their civil status to the Protestants. A petition was presented to Louis XVI. by his brother. At length the edict of reinstatement appeared (Nov. 1787). It was far from restoring to the Protestants the rights accorded them by the edict of Nantes, and to France the glory which she had had of being the first to proclaim liberty of conscience. The reformed religion continued to be prohibited; and, according to the terms of the preamble, the law accorded to the Protestants only "that which natural law forbids us to refuse them, the power to prove their births, their marriages and their deaths." The innovation consisted in this, that the officers of justice and their clerks were charged with registering the marriages, births and deaths in the absence of Catholic priests. This concession was an immense benefit; and the edict, incomplete as it was, does honor to the memory of Lafayette, Malesherbes and Louis XVI. The Protestants of France were no longer outside the pale of society. They appeared in crowds to legalize their condition, and in many places three generations of the same family were seen registering their marriages at the same time. The national assembly completed the work of Louis XVI., Aug. 23, 1789, by the following decree: "No one shall be disturbed on account of opinions, even on religion, provided their manifestation does not disturb the public order established by law."
This liberty was at once confirmed, regulated and restrained by the organic law of the first consul (germinal, year X.), which was itself modified and amended by a decree of the president of the republic, dated March 26, 1852.
ATH. COQUEREL, JR.
—Emancipation is not yet complete the world over. It may be considered complete in England and in all the countries inhabited by the Anglo-Saxon race or which are connected with Great Britain, as well as in nearly all Protestant countries. Holland, Prussia, Denmark, Sweden and Norway are almost the only exceptions. In these countries, the Lutheran being the state church, those who are separated from it, whether Catholic or Protestant dissenters, are subject to exceptional laws, do not enjoy all the rights of other citizens, and are not admitted to public offices. It is proper to acknowledge that the efforts of government tend to put an end to such an abhorrent state of things, and that the laws voted in 1860 by Sweden show a notable progress.
—In Russia the Protestant population, grouped in compact masses in the Baltic provinces, appears to enjoy as many rights as the orthodox subjects of the czar; still a pressure is exercised to induce, if not to constrain, them to accept the orthodox church.
—In Switzerland, a mixed but a free country, the political emancipation of Protestants is complete even in the Catholic cantons. The cases of mixed marriages, however, still present difficulties of more than one kind, and have caused conflicts between the cantons.—Four millions of Austrian Protestants have long been in a difficult and precarious condition, which at one time seemed on the point of becoming more serious on account of the concordat concluded in 1855 between the holy see and the Vienna government. This act assured a complete preponderance to the Catholic church, with immunities and extensive privileges, created a clerical censorship over publications of every kind, and established ecclesiastical tribunals, which in the case of mixed marriages were able to interfere in a way most contrary to the rights of Protestants. Happily this concordat was scarcely concluded when it fell into abeyance; if it has never been positively abolished, neither has it ever been completely executed; at present it is almost a dead letter. On the other hand, the imperial patent of Sept. 1, 1859, relating to the Reformed and Lutheran churches in Hungary and its dependent lands, and that of April 10, 1861, concerning Protestants of the rest of the empire, have completed both the civil and religious emancipation of the Austrian Protestants.
—In Italy the civil emancipation of Protestants is also of recent date. Before 1848 only one of the states of the peninsula contained a Protestant population. About 20,000 Waldenses inhabited a few wild valleys of the Alps of Piedmont above Pignerol. Long persecuted, they were at once put in possession of all their civil rights by the French administration when Napoleon I. united Piedmont to his empire. After 1814 they endured an exceptional régime, which closed to them every liberal career and all access to public offices. They were finally emancipated in 1848, and given the rights previously refused. At this epoch liberty of conscience existed nowhere else in Italy. The state recognized Protestants only far enough to bring them before tribunals, and there could be no question of civil rights for them. But since the revolutions which have conferred on Italy unity under the government of Victor Emmanuel, Protestant communities have been organized in several cities, Milan, Florence, Pisa, Naples, and even in Rome, whose members enjoy all the civil and political rights of citizens.
—The situation in Spain is the same. There is a small number of native Protestants in that country, in addition to the congregations composed of foreigners. The law had long condemned their religious meetings and sentenced their members to the severest penalties, but the last revolution put an end to these shameful practices. The constitution of June 1, 1869, which did away with the state religion, declared simply (article 21) that the nation undertakes to maintain the church and the ministers of the Catholic religion, and this constitution establishes, though in indirect terms, the liberty and equality of churches. It guaranteed to strangers (same article) the public or private exercise of their religion, without any limitation but the universal rules of morality and legality, and adds that if a Spaniard professes another religion than the Catholic, the preceding rules shall apply to him.—Turkey is in advance of Spain. It is known that in that country each religious community, each nation, Greeks, Armenians, Catholics, governs itself and administers its own affairs. A considerable number of Armenians (3,000) having embraced Protestantism found themselves, after 1830, in a most difficult position. Their former co-religionists rejected them; they were no longer connected with any religion recognized by the state, and were thus without a legal existence, without any rights, without even that of carrying on their occupations. In 1850 an imperial firman put an end to this state of things, and conferred on the Protestant church a legal existence. Since that time its members have enjoyed all the rights belonging to the other Christian communities of the empire. (See