As he often did, Fermat left very few details on how to apply the method. He did provide one example of this method in his proof that the area of a right triangle cannot be equal to a square number. An elegant application of this proof is found in the case of FLT: n=4 where the proof rests on the method of infinite descent and the solution to Pythagorean Triples.
The basic idea is straightforward: one shows that if an assumption holds for some positive integer, then it must also hold for a strictly smaller one. In other words, the assumption forces an infinite sequence of ever-smaller cases for which it is true.
This technique is especially useful in the domain of positive integers. There, infinite descent is impossible because it contradicts the Well-Ordering Principle: every non-empty set of positive integers has a smallest element. But if there is always a smallest element, then there cannot be an infinite descent. In this way, the method can be used as a way to prove certain negative assertions by contradiction. It can also be used to prove a positive conclusion, as I will show below.
Fermat is known to have used this technique to prove that the area of a right triangle with integer sides cannot be a square. He left a proof using this technique for the case of n = 4, which I will go over in a future blog. He also wrote about its use in the proof for the case of n = 3 in a letter to Christiaan Huygens.
If Fermat had really found a proof for his theorem, it was without doubt based on this method.
To show the technique in action, I will use it to prove the following theorem:
Theorem: Relatively prime divisors of an n-power are themselves n-powers.
This theorem says that if gcd(v,w) = 1 and vw = z^n,
then there exist x, y such that v = x^n and w = y^n.
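As a quick concrete check (an illustrative instance, not part of the proof below): with n = 3, take v = 8 and w = 27. Then gcd(8,27) = 1 and vw = 216 = 6^3, and indeed v = 2^3 and w = 3^3, exactly as the theorem asserts.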
(1) So, we start with gcd(v,w) = 1, vw = z^n
(2) Assume that v is not equal to any number x^n
(3) v ≠ 1 since 1 = 1^n is an n-power
(4) Now, v is divisible by a prime number p. [Fundamental Theorem of Arithmetic]
(5) So, there exists k such that v = pk
(6) p divides z since z^n = vw = pkw [By applying Euclid's Lemma]
(7) So, there exists m such that z=pm
(8) So, z^n = vw = pkw = (pm)^n = p^n * m^n
(9) Dividing both sides by p gives us:
kw = p^(n-1) * m^n
(10) From Euclid's Lemma, p divides k or w.
(11) It can't divide w since it already divides v and gcd(v,w)=1. Therefore, it divides k
(12) We can apply this same argument to each of the remaining factors of p in p^(n-1)
(13) So, we can conclude that p^(n-1) divides k.
(14) So, there exists V such that k = p^(n-1)*V
(15) So, kw = p^(n-1)*m^n = p^(n-1)*V*w
(16) Dividing both sides by p^(n-1) gives us:
Vw = m^n
(17) Now, gcd(V,w)=1 since V is a divisor of v and gcd(v,w) = 1
(18) Likewise, V cannot be an n-power. If it were, say V = u^n, then v = p^n * V = (pu)^n would make v an n-power, which goes against our assumption.
(19) Finally, V is less than v since p^(n-1) > 1.
(20) Thus, we have a contradiction by infinite descent.
We proved that assuming a relatively prime divisor of an n-power is not itself an n-power forces the existence of a smaller relatively prime divisor of an n-power that is also not an n-power, and so on without end, which is impossible for positive integers.
Here is what Fermat wrote about Infinite Descent in a letter to another mathematician:
"As ordinary methods, such as are found in books, are inadequate
to proving such difficult propositions, I discovered at last
a most singular method...which I called the infinite descent.
At first I used it to prove only negative assertions, such as
'There is no right angled triangle in numbers whose area is a
square'... To apply it to affirmative questions is much harder,
so when I had to prove 'Every prime of the form 4n+1 is a sum
of two squares' I found myself in a sorry plight (en belle
peine). But at last such questions proved amenable to my methods."
-Quoted from Andre Weil's Number Theory |
by Christina Nagy-McKenna, Enerdynamics Instructor
Gas-to-liquids (GTL) is a technology that chemically converts natural gas to a liquid synthetic fuel that can be used in place of diesel and jet fuels. The process is based on work done by scientists in the 1920s and the 1970s. While GTL can also produce other chemical feedstock, the primary commercial interest in this technology is the creation of transportation fuels.
For many years natural gas and crude oil prices moved in tandem with one another. If one went up or down, so did the other. Since 2009, when oil prices rose again after their spectacular rise and dramatic fall in 2008, natural gas prices have remained low. Analysts agree that the two fuels are now decoupled — they now move independently of one another. U.S. gas producers, flush with production from shale gas fields, have spent two years watching natural gas prices drop to levels not seen in a decade.
Although the U.S. imports liquefied natural gas (LNG), producers are now considering exporting LNG to Europe and Asia as gas prices there are much higher. GTL offers another way to monetize natural gas and get a piece of the more lucrative transportation fuels market instead of just a larger piece of the traditional end-user energy market.
How does it work?
GTL takes hydrocarbons such as methane-rich natural gas and converts them into longer-chain hydrocarbons such as diesel fuel. There are two primary processes: Fischer-Tropsch and Mobil. Fischer-Tropsch was developed by two German scientists in the 1920s, while the Mobil process was developed in the mid-1970s. Fischer-Tropsch begins with the partial oxidation of natural gas (methane) into carbon dioxide, carbon monoxide, hydrogen, and water. The ratio of hydrogen to carbon monoxide is adjusted, and the excess carbon dioxide and excess water are later removed. This leaves a synthesis gas (syngas) that is then reacted over an iron or cobalt catalyst. The result is liquid hydrocarbons and other byproducts.
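In simplified form (an illustrative sketch of the overall chemistry, not the full industrial reaction set), the two key Fischer-Tropsch steps can be written as:

    Partial oxidation to syngas:   2 CH4 + O2 -> 2 CO + 4 H2
    Fischer-Tropsch synthesis:     n CO + (2n+1) H2 -> CnH(2n+2) + n H2O

The chain length n of the resulting alkanes depends on the catalyst and operating conditions, which is how the process is steered toward diesel- and jet-range fuels.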
The Mobil process converts the natural gas to a syngas and then converts the syngas to methanol. The methanol is then polymerized into alkanes over a zeolite catalyst.
Now that we’ve laid out what GTL is and how it works, next week’s blog post will look at where GTL is becoming a major market player as well as if and how GTL could carve a niche in the U.S. gas market. Stay posted for Part II! |
He pioneered a revolution so strong that it lasts to this day. While Galileo is often considered the first real scientist, Nicolaus Copernicus, yes, the man behind the heliocentric model of the universe, was the one who started it all off!
Copernicus was the first great proponent of the theory that the Earth goes round the Sun and that the Sun is the center of the Universe. This hypothesis served to explain the apparent retrograde motion of the planets (meaning that planets like Venus sometimes seem to move backward and then resume their forward motion, an effect the Ptolemaic system had explained with epicycles). His hypothesis contained little mathematical formalism and mostly comprised logical deductions from a set of seven assumptions about the ‘firmament’, Earth and Sun.
Google celebrates the 540th birthday of the pioneer with a brilliant doodle. The doodle sketches the Copernican universe with five bodies and the Earth revolving around a central Sun on circular paths. Earth's moon is also indicated. I hope the speeds are in proper ratio, but I haven't checked that. As usual, clicking the doodle leads you to search results for Copernicus.
Ideas and acceptance
One might have expected a wildfire-like spread of Copernicus' idea, and also a severe backlash from the Church, since the basic tenet of the theory violates the geocentric view that was almost universally accepted by the religious majority. Neither happened. The idea was slow to catch on, and the fact that Copernicus was a Catholic cleric, and an active one at that, helped with the latter. Copernicus himself suspected a backlash, and thus delayed publishing his ideas in a book.
It strikes me as odd that these ideas, put forward about a quarter of a century before Galileo, were accepted largely because they came from a Catholic cleric, while Galileo, with no such direct links, was ostracized and threatened with torture.
Science has since made giant strides. Happy Birthday Nicolaus. |
Perimeter and Area
This tutorial provides comprehensive coverage of introduction to perimeter and area based on Common Core (CCSS) and State Standards and its prerequisites. Students can navigate learning paths based on their level of readiness. Institutional users may customize the scope and sequence to meet curricular needs. This simple tutorial uses appropriate examples to help you understand introduction to perimeter and area in a general and quick way.
This tutorial has been prepared for beginners to help them understand the basics of introduction to perimeter and area. After completing this tutorial, you will find yourself at a moderate level of expertise in introduction to perimeter and area, from where you can advance further.
Before proceeding with this tutorial, you need a basic knowledge of elementary math concepts such as number sense, addition, subtraction, multiplication, division, perimeter and area definitions, basic geometric concepts and shapes, writing and evaluating expressions, writing and solving equations, and so on. |
The North American colonists may have been living an ocean away, but they were subjects of France, England, or Spain. Any disagreements involving the French, English, Dutch and Spanish in Europe also had an impact on their colonial counterparts. The Native Americans were frequently drawn in as well, taking sides when they felt it was to their advantage.
Review the military engagements experienced by the English colonists up to this point. Which one do you feel was the most important in shaping English colonization and why?
Students often ask why the Native Americans didn't just band together to drive out the colonists. With two other students, discuss why you think this didn't happen and the ways in which the Native Americans used the clashes between the Europeans to their advantage.
Compose your work using a word processor and save it, as a Plain Text or an .rtf, to your computer. When you're ready to make your initial posting, please click on the “Create Thread” button and copy/paste the text from your document into the message field. Be sure to check your work and correct any spelling or grammatical errors before you post it. |
What, exactly, is Inclusive Listening?
Language is a two-way street. It takes both a speaker and a listener to make communication work. Think of it as two wheels on a bike. If one wheel isn’t up to speed, there’s little forward movement.
Inclusive listening is the act of listening, with intentionality, to all the voices of a diverse workforce. It means listening to others in ways that invite participation and collaboration. In a global workforce, where English isn’t everyone’s first language, inclusive listening includes developing the skills to understand the accents of clients and colleagues who speak English as an additional language (aka accent comprehension). Without this ability, a business’s bottom line is compromised. Let’s take a deep dive into what happens when communication is compromised by an inability to understand accents that are different from our own.
The research is clear. Regardless of language, accents make speech more difficult for the listener’s brain to process. This processing difficulty can cause the listener to remember less accurately what the speaker says, make snap judgements, and even doubt the credibility of the speaker. There exists a greater chance of errors and people’s ideas are discounted because of their accents. Companies lose money, time, and opportunities when their employees (or partners) can’t understand one another. Inclusive listening is a business imperative.
Business professionals can learn a methodology to systematically “tune their ear” to unfamiliar accents and acquire a step-by-step approach that minimizes the processing demands required of the brain to comprehend unfamiliar accents. They may still hear an accent, but the brain immediately interprets it so the meaning of the words is clear. In doing so, fewer miscommunications arise, and business initiatives stay on track.
In addition to learning how to understand global accents, inclusive listening requires mitigating unconscious accent bias. It means removing the blame for difficult communication from the non-native English speaker and rightly placing it on the complexities of the English language.
English pronunciation is hard. It’s not the person, it’s the language. One reason is that English is not a phonetic language. In other words, one letter can have several different pronunciations. In most languages, one letter equals one sound. Not so in English. For example, the letter ‘o’ can be pronounced seven different ways. Try saying the following words aloud: copy, cost, consider, cool, code, could, and coward. One letter, seven pronunciations. Now let’s take a look at the two words “woman” and “women”. The letter ‘o’ is pronounced in two different ways. It’s the two letters, “a” and “e”, that are pronounced the same. Understanding the complexities of English pronunciation creates an appreciation for the challenges people who speak English as an additional language must overcome.
It goes without saying that inclusive listening mandates listening with compassion. There are several techniques for listening, and speaking, in ways that invite people into the conversation rather than push them away. One of these is to eliminate the phrase, “What? What did you say? Can you repeat that?” These questions unintentionally send the message that the person with the nonstandard accent is to blame for the communication disconnect. There’s no cause for blame. This is simply a situation where two people with different accent patterns are trying to communicate with ease. To ask for information without inadvertently casting blame, try replacing the phrases “What?” or “What did you say?” with “I’m sorry. Can you repeat that for me?” This conveys a sense of both parties being in it together.
Inclusive listening skills reduce how often we have to ask people to repeat themselves and remove the fear of asking when it is necessary. In a global economy, inclusive listening is a necessary skill set for leveraging the innovative ideas that come with a diverse talent base. It allows all voices to contribute with impact. Like a bike with both wheels spinning, inclusive listening is a business imperative that allows companies to upskill their talent base and cross the finish line.
Learn more in our inclusive listening workshop, brought to you in partnership with Judy Ravin.
Judy Ravin is the President and co-founder of Accents International. She is an inclusive listening expert enabling teams to overcome language barriers while maintaining each person’s unique cultural identity. Ravin is best known for her two learning and development programs: Powerful Pronunciation® and Inclusive Listening: Tuning Your Ear to Accents®; collectively known as the Ravin Method®. |
We’re all drawn to specific colors; the colors we surround ourselves with reflect our personalities. Let’s explore the different meanings and associations of colors, a field known as color psychology, so you can better understand yourself and those around you.
What is Color Psychology?
Color psychology studies how color affects emotions and behavior. This field is constantly evolving as we know more about the human brain and how it responds to stimuli.
Color has many psychological effects. Chromotherapy, considered an alternative medicine, is a process that uses color to treat physical and mental illnesses; practitioners also claim that light of different colors can balance the energy in your body and provide relief from stress. Other reported benefits include relieving pain, reducing swelling, healing open wounds more quickly, and promoting quicker overall recoveries. One example of chromotherapy that is still used very frequently is placing babies under blue lights to treat jaundice.
Color association is another technique. Colors may drastically influence mood and behavior.
How Colors Affect Moods and Emotions
Colors impact moods and emotions differently. Some colors calm us, whereas others excite us. Colors may improve moods and emotions, according to many studies. For example, blue is often connected with calm and relaxation, while red is frequently associated with energy and excitement.
Other research has shown that colors can also alter our emotions. For example, green is frequently associated with peace and tranquility, while yellow is typically associated with happiness and joy.
What do colors say about our personalities?
Red: Red is a powerful color often associated with passion, energy, and excitement, and it can also be associated with anger, aggression, and danger. People attracted to red are usually outgoing, passionate, and confident, and they can also be impulsive and hot-tempered.
Orange: Orange is a warm and vibrant color often associated with happiness, positivity, and creativity, and it can also be associated with risk-taking and recklessness. People attracted to orange are frequently outgoing, optimistic, and creative.
Yellow: Yellow is a cheerful color often associated with sunshine, optimism, and intelligence. People attracted to yellow tend to be optimistic, intelligent, and happy.
Green: Green is a refreshing and calming color that is frequently associated with nature, growth, and life. People attracted to green tend to be down-to-earth, compassionate, and peaceful.
Blue: Blue is a serene and calming color often associated with water, sky, and intelligence. People attracted to blue tend to be intelligent, introspective, and calm. Blue is also a trustworthy color and typically a great color to use when marketing.
Purple: Purple is a luxurious and romantic color often associated with royalty, wisdom, and magic. People attracted to purple tend to be creative, wise, and magical.
The Psychology of Color in Marketing and Advertising
Advertisers and marketers use color psychology to influence how consumers think about a product or service. They can use color to make a product or service appear more appealing, exciting, or trustworthy.
There is a lot of research on color psychology, and the findings can help businesses create more effective marketing and advertising campaigns. Studies have shown that specific colors can increase brand recognition or make people more optimistic about a product.
We hope you enjoyed learning about what color psychology says about different personality types. Next time you’re making marketing material, try speaking with color.
The Complete Guide to Color Psychology |
We’ve learnt a lot of things by looking out into space: how stars work, how black holes are formed, how light travels through the Universe. But we can also learn more about our world by looking at Earth from space. Satellite images beamed down from outer space have given scientists unprecedented insights into our natural world and the ability to help protect it.
In the offshore coral reefs that surround many island nations, patterns have been forming. Within the reefs, scores of tiny fish call these coral paradises home. Using the coral structures as protection from larger predators, they often venture to just beyond the reef in search of food - knowing they can quickly swim back to safety. As they feast on the food sources around the coral, they leave behind the sand beneath. These sandy rings around the coral are visible from space - known to scientists as a 'grazing halo'. More than just a patch of empty sand, these halos keep scientists informed about the health of our marine ecosystems. If the rings of sand get too big around the coral it shows that the number of predators has declined, allowing more and more of the smaller fish to safely venture out for food at greater distances. If the grazing halos are too small, or non-existent, it means that there are too many predators and the smaller fish cannot leave the coral to feed. A good-sized halo means the marine environment is in its natural balance.
Far from the coral reefs, in the depths of the Congo jungle, scientists have been monitoring a patch of land. With no trees to be seen in this 500m stretch, it is a stark contrast to the leafy, green jungle that surrounds it. Watching from space, scientists have been following the elephant families that journey to this cleared patch seeking something they cannot get in the heart of the jungle: minerals. Pools of water in this open area are full of the necessary calcium, potassium and sodium that elephants can’t get from their leafy diet. Coming to these watering holes to drink is essential for elephant development. However, the exposed drinking holes leave the elephants vulnerable. Watching from space, scientists can monitor the activities of the elephants at these watering holes and keep a very keen, protective eye out for any poachers.
Away from the leafy green Congo jungle and the clear blue coral reefs, in the hot desert of South Australia, wombats have been leaving a mark that’s visible from space. Keeping an accurate record of animal populations is a difficult task, especially in a vast country like Australia. Due to competition for resources from introduced non-native animals, the hairy-nosed wombat population came under threat. To escape the scorching Australian sun, wombats dig underground burrows where they can cool down in temperatures that are 15 degrees lower than on the surface. Because their underground burrows leave white spots across the South Australian desert, scientists have been able to use satellite images to watch the wombat population bounce back, with comparative pictures over time showing more and more burrows appearing.
Our view from space can provide detailed new insights into how our natural world works, how patterns form in our environment, and what we can do to protect it. It’s an exciting way to explore our home, as we’ve never seen it before. |
Convective Available Potential Energy.
The CAPE (Convective Available Potential Energy) index is a meteorological indicator used to assess the potential for convective development and thunderstorm activity in the atmosphere. It measures the amount of energy available for vertical air ascent and the formation of convective clouds.
The CAPE index quantifies the vertical instability of the atmosphere, based on the temperature difference between a rising air parcel and the surrounding air in higher atmospheric layers. Higher CAPE values typically indicate a greater potential for convective updrafts and the development of thunderstorms.
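For reference, CAPE is commonly defined (this is the standard textbook formulation, not a definition specific to any one forecast product) as the integral of parcel buoyancy between the level of free convection (LFC) and the equilibrium level (EL):

    CAPE = g * ∫[LFC to EL] (Tv,parcel − Tv,env) / Tv,env dz    [J/kg]

where g is the gravitational acceleration and Tv,parcel and Tv,env are the virtual temperatures of the rising parcel and its environment. As a common rule of thumb, values of a few hundred J/kg indicate weak instability, while values above roughly 2,500 J/kg are associated with strong thunderstorm potential.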
In the context of aviation, the CAPE index is an important factor that pilots and paragliders consider when planning their flights. Higher CAPE values suggest favorable thermal conditions for aircraft and paragliders, indicating a higher likelihood of thermal updrafts and the possibility of longer and higher flights. Conversely, lower CAPE values may indicate reduced potential for thermal activities and limited flying opportunities. |
"Children learn as they play. Most importantly, in play, children learn how to learn.”
Early childhood is the stage where children are curious about everything they see, open-minded, and brimming with new thoughts and ideas. This stage is perfect for triggering their quest for learning and engaging them in intriguing activities that stimulate their senses.
The curriculum for Pre-Primary is designed with relevant levels of thinking and inquiry, to create an ardour for learning. It lays emphasis on a child's natural, psychological, physical and social development. Students learn concepts while working with various materials (sand, paper, colours, etc.), instead of through the traditional way of instruction.
The focus is on self-expression and learning is imparted through playing, singing, show and tell, drawing, writing, social interaction etc.
English, Mathematics, Environmental Science and Hindi/Telugu as Second Language (only in PP2) are the subjects taught in Pre-Primary. Worksheets are prepared for various concepts, and learning is evaluated in a continuous and comprehensive manner. Assessment at this level is based on the child’s regular interaction in the class, the quality of work produced, levels of general awareness and learning outcomes throughout the year. |
Butterflies are insects in the macrolepidopteran clade Rhopalocera from the order Lepidoptera, which also includes moths. Adult butterflies have large, often brightly coloured wings, and conspicuous, fluttering flight. The group comprises the large superfamily Papilionoidea, which contains at least one former group, the skippers (formerly the superfamily "Hesperioidea"), and the most recent analyses suggest it also contains the moth-butterflies (formerly the superfamily "Hedyloidea"). Butterfly fossils date to the Paleocene, which was about 56 million years ago.
Butterflies have the typical four-stage insect life cycle. Winged adults lay eggs on the food plant on which their larvae, known as caterpillars, will feed. The caterpillars grow, sometimes very rapidly, and when fully developed, pupate in a chrysalis. When metamorphosis is complete, the pupal skin splits, the adult insect climbs out, and after its wings have expanded and dried, it flies off. Some butterflies, especially in the tropics, have several generations in a year, while others have a single generation, and a few in cold locations may take several years to pass through their entire life cycle.
Butterflies are often polymorphic, and many species make use of camouflage, mimicry and aposematism to evade their predators. |
In total, Brazil today registers 1,173 endangered fauna species and another 10 that are considered extinct; 159 of these are marine animal species. The data are in the Red Book of the Brazilian Fauna Threatened by Extinction, which was launched in December by the Chico Mendes Institute for Biodiversity Conservation (ICMBio) during the 13th United Nations Biodiversity Summit (COP 13) held in Cancún, Mexico.
Among the endangered marine animals are:
- Sperm Whale (Physeter macrocephalus), classified as vulnerable;
- Southern Right Whale (Eubalaena australis),
- Sei whale (Balaenoptera borealis),
- Fin whale (Balaenoptera physalus) and
- Manatee (Trichechus manatus), classified as Endangered.
Still on the list of endangered aquatic mammals:
- Guiana dolphin (Sotalia guianensis), classified as Vulnerable;
- Amazon River Dolphin (Inia geoffrensis), classified as Endangered;
- Franciscana (Pontoporia blainvillei), classified as Critically Endangered.
- The giant of the seas, the blue whale (Balaenoptera musculus), is Critically Endangered. |
“This is an industry that in addition to producing pulp and paper also produces large quantities of wastewater and sludge with a high content of organic substances. It requires a lot of money and effort from the mill to get rid of the wastewater sludges, while it is a large potential source of methane. My research has looked at how we can use that material for biogas production, since this is an important part of the puzzle in the transition to a fossil-free society”, says Eva-Maria Ekstrand.
Several mills in Europe treat their wastewaters for use in biogas production.
“But they deal with the easiest water to treat, mainly at mechanical mills and recycled paper mills. The wastewater streams are more concentrated at these mills, and have low levels of substances that can cause problems in biogas production”, she says.
Kraft method
[Photo: Wood chips at Billerud Korsnäs. Photo credit: Charlotte Perhammar]
“We have also been able to demonstrate that the flows from mills that manufacture mechanical pulp have the highest potential for biogas production.”
However, most mills, both in Sweden and around the world, manufacture chemical pulp using the kraft method. Not only does the water in such mills contain chemicals added during cooking and bleaching, it also contains organic substances from the wood, such as resin acids and other substances dissolved during the kraft process.
Eva-Maria Ekstrand decided to concentrate her research on the contents of the fibre sludge. She collected large amounts of fibre sludge from many mills that used different pulping processes. During the initial work, she discovered that the fibre sludge from kraft mills had a high potential for methane production.
The fibre sludge is formed during a primary clarification stage in which the fibres are separated in sedimentation ponds. Eva-Maria Ekstrand’s results show that fibre sludge is easy to digest, but that it lacks many of the nutrients that are required for biogas production.
Active sludge and fibre sludge
After the first step, the remaining water is passed to huge aeration ponds, where oxygen and nutrients are added. Over a considerable period, the sludge degrades: larger microorganisms consume the smaller ones, and the volume of sludge falls.
“Somewhat simplified, you could say that the sludge consumes itself. This gives an activated sludge that is difficult to digest, but that contains important nutrients”, she explains.
The next step in the research, therefore, was to mix the activated sludge with fibre sludge. It turned out that the mixture gave a more stable process in the biogas digester, but it was still difficult to break down the organic material in the activated sludge. Thus, a further step was added in which the age of the activated sludge was lowered by adding more wastewater.
“We tested this in a pilot plant, and it became clear that the mill could treat water four times as rapidly, which makes it possible either to reduce the active volume of the aeration pond or increase the production of pulp or paper.”
The results presented in her thesis make it perfectly clear that anaerobic digestion, which proceeds in the absence of oxygen, of the waste streams from the mill is not only possible but also gives huge advantages such as high methane production, higher wastewater treatment capacity and savings in both energy and nutrients.
“The mills no longer need to worry about the increase in the volume of sludge that follows from reducing its age, since we can convert the organic material to biogas. We have shown that this works well in kraft mills, which previously were considered to be a serious challenge. But the method works in a similar manner in mills that work with mechanical pulping”, she says.
Large amounts of biogas
By reducing the age of the activated sludge and mixing it with fibre sludge in a biogas digester, large amounts of biogas can be produced, while the mill at the same time has the opportunity to use its production capacity more efficiently.
“This means in practice that the mill can significantly increase the production of pulp and paper, without having to build more aeration ponds, which require a lot of space and cost a lot of money.”
Part of Eva-Maria Ekstrand’s research has been carried out at the Biogas Research Center, BRC, a national competence centre at Linköping University.
The thesis: Anaerobic digestion in the kraft pulp and paper industry - Challenges and possibilities for implementation, Eva-Maria Ekstrand, Department of Thematic Studies – Environmental Change, Linköping University 2019. Principal supervisors have been Professor Emeritus Bo Svensson and Senior Lecturer Annika Björn. |
It was customary for Ancient Greek mourners to catch their tears in vials and bury them with their loved ones to show the extent of the sorrow for their deaths. Charles Darwin called crying “a special expression of man”, as it is a phenomenon unique to human beings. Scientists, philosophers and psychologists throughout history have speculated extensively on the origins and function of tears, but there is still much that science cannot explain. Are tears a form of vital emotional expression or just the body’s way of turning distress into a bodily function? In order to understand the science of crying we must start at the genesis of tears in the body, the lacrimal glands, commonly known as “tear glands”.
THE SCIENCE OF TEARS
Tears flow through tear ducts in glands located in the upper eyelids. There are three types of tears. All higher animals produce basal tears to regularly lubricate the eyeballs to guard against dust, and protect against infection. The tears we experience when we cut an onion are reflex tears, which are released at the detection of smoke, foreign objects or irritants entering the eye, such as the propenyl sulphuric acid contained in onions. The third type, emotional tears, are a little harder to explain. What is physiologically different about these tears is that they have a distinctive chemical make-up, containing more hormones and protein.
Crying occurs when the tear ducts produce too many tears to be taken back into the nose (where they usually end up) and they brim over and run down our cheeks. We also produce tears when we laugh and yawn, which is due to unnatural facial contortions squeezing our lacrimal glands.
Like many things, we take the ability to cry for granted. Owing to a disorder of the tear glands, people with dry eye, usually caused by Sjögren's syndrome, are unable to cry and have to use artificial tears regularly to keep the eyes lubricated. Brain damage to the frontal lobe can cause the opposite problem, pathological crying. It is a disorder of emotional expression in which the part of the brain that regulates the execution of emotional tears in response to a situational stimulus is damaged. The patient is no longer able to associate their feelings of happiness or sadness with the act of crying and will weep uncontrollably for hours at a time. In a bizarre case of crying disorder, a patient researchers call “Eloise” developed a rare condition called alternating unilateral lachrymation, in which each eye produces tears separately. Reportedly, Eloise cried from one eye when she thought about her mother, and from the other when she thought about her father. If she was prompted to cry whilst thinking about anything other than her parents, she would produce tears from both eyes.
BOYS DON'T CRY
In 18th century Europe men were revered for their sensitivity and would cry openly in public places, especially at the opera. Two centuries later a study by biochemist William Frey reveals unsurprisingly that women cry four times more often than men and for generally longer periods. Results of a survey by psychologist De Fruyt found that women see crying more as a coping mechanism than men do. Women are more likely to cry for both negative and positive reasons while men are more prone to weeping over negative reasons only. According to evolutionary biologists, these differences between the genders in expressions of grief may have an evolutionary basis. This theory is also supported by physiology; men and women’s tear glands are actually structurally different. Boys and girls actually cry equal amounts, but this changes significantly after puberty. Females develop substantial amounts of the hormone prolactin (which controls fluid balance in the body), which is thought to cause weepiness.
WHY WE CRY?
In Leonardo Da Vinci’s drawings of the inner workings of the body he sketched a link between the heart and the tear ducts in the eyes. We cry emotional tears when we feel depressed, angry, happy, frustrated, sentimental and stressed. The reasons underlying this behaviour are uncertain and contentious.
There may be an evolutionary basis for why we cry. Salt-water crocodiles and seals produce tears in order to remove salt from their bodies. Scientist Elaine Morgan believes that the act of crying, along with other biological oddities such as hairlessness, supports her contention that at some time during human evolution we were sea dwellers, and that at that time our tear ducts were used for that purpose. However, this theory does not explain why we respond to certain emotional stimuli by crying, as animal crying is purely physiological.
Frey believes that the origins of crying lie in hormones. He studied the brains of chronically depressed patients and discovered a build-up of manganese, a substance that can also be found in emotional tears. Crying may therefore be just the body’s way of removing stress-causing substances from the body.
Physiologist Darlene Dartt builds on Frey’s findings. Her studies reveal that the nervous system may be responsible for crying. Neurotransmitters send out chemical messages to be received by the tear glands. Working in conjunction with pituitary hormones, Dartt believes that nerves provide the “biological pathway to emotional tears”.
Others have put forward that tears may be a by-product of increased autonomic activity of the brain in stressed individuals. The feeling of relief that 85% of females and 73% of males experience after crying supports this contention. Although these theories explain the physiology of crying, they do little to explain the psychology.
Evolutionary biologist Professor Paul Verrell speculates that the rationale behind crying can be traced back to our infancy. Babies cry for reasons like hunger or pain and later learn to cry for attention. Verrell believes that the principal use of crying when we are infants is for communication, and this can be applied to our adult life. Crying is a universal language. We still cry when we request help, or when we offer help and understanding to someone else.
Why crying then, why not another bodily function like yawning or burping? Through the evolutionary process of ritualisation, an association emerged between the process of weeping and communication. Tears have been naturally associated with the eye watering caused by pain and ocular trauma. Perhaps we cry when we are adults when we do not have the words to express our distress, just like when we were infants. This explanation seems to come close to a resolution, but is fallible. Monkey infants are able to alert the attention of their parents by screaming, without the need to actually shed tears. Also, if tears are purely for communication, what then is behind the practice of crying when we are alone?
THE "GREAT MINDS" ON CRYING
The ancient philosopher Aristotle believed that weeping is a way of cleaning yourself out emotionally. Is one “use” of crying the catharsis of emotions? Freud discredited this assumption, seeing children’s tears as manipulative and adult tears as simply regressive. Descartes believed that emotions are purely reactions of the body and completely apart from the workings of the mind. Are we able to stop crying and show our emotions in a different way? Humanist, scientist and self-pronounced tear expert Tom Lutz believes that ceasing to cry is only a matter of learning to feel something else. Through conditioning, the Tiv people of northern Nigeria find other forms of communication than tears. Parents discourage infants from crying by punishing them at the first sight of a tear. On the other hand, in some cultures, like the Colombian Kogi, adults are particularly emotional, as they allow their babies to cry freely for long stretches of time. There is a further scientific reason not to link catharsis with crying: people with a tendency towards depression did not cry more often than a control group in a study by Frey.
What science and psychology have yet to explain is why we cry at a moment of pure happiness or profundity. Why do we cry when we watch a sunset or engage with a beautiful work of art? Medieval monks considered these so-called “tears of enlightenment” to be sacred. Tears have intrigued and mystified the great minds all throughout history. Still, crying remains one of those mysteries unquantifiable by scientific or psychological study. It is impossible, for instance, to find a reason why - like every snowflake and fingerprint - the makeup of every teardrop is utterly unique.
From A Crying Shame: The Science of Tears by Heather Corkhill (me!) 2001. |
Suddenly on September 25, 2002, a broad accumulation of ozone briefly overpowered that year’s ozone hole. The press called it “a double ozone hole”, focusing upon the holes and not on the croissant. Observe the location of the ozone croissant and that of the South Magnetic Pole marked by an arrow on this daily NASA survey using the Total Ozone Mapping Spectrometer, TOMS (Fig 3):
Tibetan Plateau Ground-based Observations of Mid-latitude Tropopause Folds Provide Detailed Evidence of Jet Stream Acceleration by Exothermic Oxygen/Ozone Conversion
These detailed ground-based cross sections (Fig 23) provide excellent evidence of ozone converting locally from paramagnetic oxygen at mid-latitudes within tropopause folds. Paramagnetic oxygen in the warm Ferrel Cell on the southern, right side of a cross section meets the cold Polar Cell on the northern, left side, converting to stratospheric ozone (blue color). The tropopause is at the base of the solid blue on the cross sections. The high-angle tropopause boundary within the fold is the locus of an exothermic oxygen/ozone conversion reaction accelerating a jet stream which flows away perpendicular to the cross section (cyan contours).
Relating Extreme Weather to Wandering Magnetic Poles
Responding to wandering magnetic poles, these stratospheric events (Figs 26 & 27) affect the troposphere in which human lives encounter extreme weather. Compare these satellite maps to human activity on one of those extreme days, February 15, 2015, illustrating how intimately related are humans and the stratosphere (Figs 28, 29, 30 & 31). |
Cretaceous rice reported in an article in Nature Communications, DOI: 10.1038/ncomms1482, 20 September 2011. An international team of scientists studying coprolites (fossil dung) from the Lameta Formation in India have found they contained phytoliths – microscopic deposits of silica found in the leaves of grasses. Each kind of grass has its own distinctively shaped phytoliths, so scientists can identify a plant from its phytoliths. In this case, the phytoliths, along with fossilised fragments of epidermis and cuticles (surface layers), enabled them to identify the fossil plant remains as belonging “to the rice tribe, Oryzeae of grass subfamily Ehrhartoideae”. The Lameta Formation is dated as Late Cretaceous – 65-67 million years ago – but grasses were believed to have not evolved until millions of years after this. The researchers concluded: “The new Oryzeae fossils suggest substantial diversification within Ehrhartoideae by the Late Cretaceous, pushing back the time of origin as a whole. These results, therefore, necessitate a re-evaluation of current models for grass evolution and palaeobiogeography”. (Poaceae are grasses.)
Editorial Comment: This study follows the report of phytoliths in dinosaur coprolites from the same rock formation in 2005. At that time the scientists said they would have to revise their understanding of the evolution of grasses and maybe add grass to dinosaur dioramas in museums. (See Nature news 17 November 2005.) Furthermore, this new discovery will not show how rice or any grasses “evolved from simpler plants”, because the fossils found, i.e. phytoliths and other plant fragments, were able to be identified simply because they looked like phytoliths and epidermis from modern rice plants. This is exactly what you would expect to find if the oldest rice plants were fully formed rice plants that have multiplied after their kind, just as Genesis says. However, the 2005 phytoliths and the scientists’ comments have proved totally ignorable, and sceptics continue to criticise Creation Research for suggesting dinosaurs ate grass and that grasses and grazing animals existed from the beginning, as the Genesis account says (Genesis 1:11). Therefore, we reiterate what we said before: evolutionists may have to rethink their ideas, but this is another instance where Biblical Creation is a better science predictor than evolution.
Furthermore, we predict that when the evidence is all in, even the fossil record will show that all varieties of plants have existed together from the very beginning. (Ref. Angiosperms. Botany, grains, prediction)
Evidence News 26 October 2011 |
Drought: Unit Overview
Why study drought?
Compared with fast and fascinating weather disasters such as hurricanes and tornadoes, drought doesn't get much attention. It's a quieter disaster, one that creeps in so inconspicuously that it's not always clear that it has arrived. Despite the fact that it is less obvious than other disasters, its pervasiveness and persistence make it every bit as deadly.
Most regions of the United States experience drought at least occasionally. Depending upon how severe the conditions get and how long they last, drought can devastate crops and forests as well as businesses. When drought occurs, water supplies for agriculture, industry, and personal use decrease, and people in the affected areas need to find ways to cope with the shortage or leave the area.
Completing the labs
All of the labs require browser software and Internet access. The Web pages for each lab contain links to external sites where you'll access data, graphs, or articles. Some labs require laboratory equipment. Several labs also require additional software programs: a spreadsheet program (e.g., Microsoft Excel or OpenOffice) and Google Earth must be installed and available on the computer you're using in order to complete all of the assigned learning tasks.
Key Questions
Key questions addressed by this unit include:
- What is drought?
- What are its causes, symptoms, and impacts?
- Where and when does drought occur?
- How can humans reduce the impacts of drought?
- Can new technologies beat drought? |
A particle is a minute or very small part, usually of matter or a substance (as in 'Forensic examination revealed particles of explosive on the suspect's clothing'), but sometimes of what is immaterial (as in 'I cannot see a particle of difference between the two views'). Particles small enough to be breathed into lungs, contained in smoke, dust, diesel emissions and various industrial processes, contribute seriously to atmospheric pollution: they are also known as particulates.
In Physics a subatomic particle is one of the constituent parts of an atom, while an elementary or fundamental particle is a constituent part of an atom which cannot (yet) be broken down any further. Particle physics is that branch of the subject which studies the nature and behaviour of these particles. (See also Boson.)
In Grammar a particle is a short word or affix which is without meaning in itself and has a function only when used with other words in a sentence. Grammarians differ about the (types of) words which fall into this category, but in the study of English grammar the following types of particle are generally recognised:
- adverbial particles, i.e., the words needed to complete phrasal or multi-word verbs, such as 'to look after' and 'to bring about'. In the sentences 'He looked after his elderly mother' and 'This campaign will bring about a change in public attitudes' 'after' and 'about' may be said to be adverbial particles, though in other contexts, of course, 'after' and 'about' (like other adverbial particles) may function straightforwardly as prepositions.
- the infinitival particle, i.e., in English, the word 'to' which precedes the infinitive in certain constructions - as in 'I wish to make a complaint', 'To travel hopefully is better than to arrive', 'He seems to be asleep'. (In other contexts 'to' may function straightforwardly as a preposition.)
- the negative particle, in English the word 'not', the use of which is the most common way of negating a sentence. ('Not' is always an adverb.) In French, most negatives are expressed by two words: the negative particle ne before the verb, and another particle, pas, or a more specific word such as jamais 'never' or personne 'no-one', after it.
- pragmatic particles or fillers, i.e., words like 'oh' and 'well' and (perhaps) sounds like 'er' and 'hem', which a speaker who hesitates or pauses may use to fill what would otherwise be a gap in the flow of their speech.
- affixes, i.e., prefixes, such as 're-' (indicating repetition, as in 'to retake an examination' or 'to resubmit an application') and 'de-' (indicating down, as in 'to devalue the currency' or 'to depose the king') and suffixes, such as '-ess' (indicating a female, as in 'princess', 'governess', 'heiress', 'tigress', or 'seamstress').
Note that none of the above types of particle can be inflected: some grammarians regard this (i.e., being uninflected) as a further distinguishing characteristic of a particle.
In languages other than English particles may have a variety of other functions.
- In Ancient Greek particles such as γάρ (gar, for) and οὖν (oun, therefore) function as conjunctions, relating a sentence or clause to what has gone before, while others such as γε (ge, at least, at any rate, certainly, indeed) and τοι (toi, let me tell you) emphasise the preceding word, and the pair μέν ... δέ (men ... de) serve to contrast two objects or statements. (Particles in Ancient Greek can never begin a sentence and are almost always the second word in their sentence or clause.)
- In Modern Greek verbal particles (e.g., θά (tha), νά (na), ἄς (as), γιά (yia)), which come immediately before a verb, may signify, e.g., time or mood. Thus θά κλείσω (tha kleiso) is 'I shall close' and ἄς γράφει (as graphei) is 'Let him write'.
- Many languages have interrogative particles. In Japanese, e.g., questions are asked by placing the particle ka at the end of a sentence, while in Arabic one may ask a question by prefixing the particle a to the first word of a sentence. |
Nightjars are a largely nocturnal family. They look like owls, with large heads and eyes and cryptic plumage. The family name Caprimulgidae was given to them after the superstitious belief that, because of their wide mouths, the birds suckled goats.
In Kenya we have 13 different species of nightjars, widespread in different habitats across the country. The photo appearing above was taken in the rocky countryside around Lake Baringo. Most species are nocturnal or active at dusk, and are solitary and retiring. They concentrate their foraging bouts during the twilight hours.
By day, they roost on exposed ground or rocks, in leaf litter, or on branches. When roosting, they adopt a horizontal posture, in contrast to owls.
Nightjars have very large eyes, adapted to low-light conditions. The eye has a tapetum, a reflective membrane that increases the amount of light entering the eyeball. Its presence causes a reflective “eye-shine” when the eyes are illuminated by artificial light. |
Java is a widely used programming language expressly designed for use in the distributed environment of the internet. It is the most popular programming language for Android smartphone applications and is also among the most favored for the development of edge devices and the internet of things.
Java was designed to have the look and feel of the C++ programming language, but is simpler to use and enforces an object-oriented programming model. Java can be used to create complete applications that may run on a single computer or be distributed among servers and clients in a network. It can also be used to build a small application module or applet for use as part of a webpage.
Why Java is popular
It is difficult to provide a single reason as to why the Java programming language has become so ubiquitous. However, the language's major characteristics have all played a part in its success, including the following:
- Programs created in Java offer portability in a network.
Source code is compiled into what Java calls bytecode, which can run anywhere in a network, on a server or on a client that has a Java virtual machine (JVM). The JVM interprets the bytecode into code that will run on the computer hardware. In contrast, most programming languages, such as COBOL or C++, compile code into a binary file. Binary files are platform-specific, so a program written for an Intel-based Windows machine cannot run on a Mac, a Linux-based device or an IBM mainframe. As an alternative to interpreting one bytecode instruction at a time, the JVM includes an optional just-in-time (JIT) compiler which dynamically compiles bytecode into executable code. In many cases, the dynamic JIT compilation is faster than the virtual machine interpretation. (A minimal compile-and-run sketch follows this list.)
- Java is object-oriented. An object is made up of data, as fields or attributes, and code, as procedures or methods. An object can be part of a class of objects so as to inherit code common to the class. Objects can be thought of as "nouns" that a user can relate to "verbs." A method is the object's capabilities or behaviors. Because Java’s design was influenced by C++, Java was built mainly as an object-oriented language. Java also uses an automatic garbage collector to manage object lifecycles. A programmer creates objects, but the automatic garbage collector recovers memory once the object is no longer in use. However, memory leaks may occur when an object that is no longer being used is stored in a container.
- The code is robust. Unlike programs written in C++, Java objects contain no references to data external to themselves or other known objects. This ensures that an instruction cannot include the address of data stored in another application or in the operating system itself, either of which would cause the program and perhaps the operating system to terminate or crash. The JVM makes a number of checks on each object to ensure integrity.
- Data is secure. Unlike C++, Java does not use pointers, which can be unsecured. Data converted to bytecode by Java is also not readable to humans. Additionally, Java will run programs inside a sandbox to prevent changes from unknown sources.
- Applets offer flexibility. In addition to being executed on the client rather than the server, a Java applet has other characteristics designed to make it run fast.
- Developers can learn Java quickly. With syntax similar to C++, Java is relatively easy to learn, especially for those with a background in C.
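To make the compile-and-run model and the object model above concrete, here is a minimal sketch (the class, field and method names are arbitrary examples, not part of any standard API). Compiling it with javac produces bytecode that any JVM can then execute with the java launcher:

    // Greeter.java - compile to bytecode with: javac Greeter.java
    // then run on any JVM with:                java Greeter
    public class Greeter {
        // Field: the data held by each Greeter object.
        private final String name;

        // Constructor: initializes the object's state.
        public Greeter(String name) {
            this.name = name;
        }

        // Method: the behavior the object exposes.
        public String greet() {
            return "Hello, " + name + "!";
        }

        public static void main(String[] args) {
            Greeter greeter = new Greeter("world"); // create an object
            System.out.println(greeter.greet());    // prints: Hello, world!
            // No manual cleanup: the garbage collector reclaims the object
            // once it is no longer reachable.
        }
    }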
The three key platforms upon which programmers can develop Java applications are:
- Java SE - Simple, stand-alone applications are developed using Java Standard Edition. Formerly known as J2SE, Java SE provides all of the APIs needed to develop traditional desktop applications.
- Java EE - The Java Enterprise Edition, formerly known as J2EE, provides the ability to create server-side components that can respond to a web-based request-response cycle. This arrangement allows the creation of Java programs that can interact with Internet-based clients, including web browsers, CORBA-based clients and even REST- and SOAP-based web services.
- Java ME - Java also provides a lightweight platform for mobile development known as Java Micro Edition, formerly known as J2ME. Java ME has proved a prevalent platform for embedded device development, but it struggled to gain traction in the smartphone development arena.
Learn how to install Java and the JDK
Main uses of Java
It is easy for developers to write programs which employ popular software design patterns and best practices using the various components found in Java EE. For example, frameworks such as Struts and JavaServer Faces all use a Java servlet to implement the front controller design pattern for centralizing requests.
A big part of the Java ecosystem is the large variety of open source and community-built projects and software platforms.
Java EE environments can be used in the cloud as well. Developers can build, deploy, debug and monitor Java applications on Google Cloud at a scalable level.
In terms of mobile development, Java is commonly used as the programming language for Android applications. Java tends to be preferred by Android developers because of Java’s security, object-oriented paradigms, regularly updated and maintained feature sets, use of JVM and frameworks for networking, IO and threading.
Although Java is widely used, it still draws fair criticism. Java syntax is often criticized for being too verbose. In response, several peripheral languages have emerged to address these issues, including Groovy. Due to the way Java references objects internally, complex and concurrent list-based operations can slow the JVM. The Scala language addresses many of the shortcomings of the Java language that reduce its ability to scale.
History of Java
The internet and the World Wide Web were starting to emerge in 1996 and Java was not originally designed with the internet in mind. Instead, Sun Microsystems engineers envisioned small, appliance-sized, interconnected devices that could communicate with each other.
As a result, the Java programming language paid more attention to the task of network programming than other competing languages. Through the java.net APIs, the Java programming language took large strides in simplifying the traditionally difficult task of programming across a network.
The first official release of Java, JDK 1.0, arrived on Jan. 23, 1996. The well-known JavaBeans interface was introduced in Java 1.1 in February 1997.
Later versions of Java releases have received nicknames, such as JDK 1.2 being referred to as Java 2. Java 2 saw considerable improvements to API collections, while Java 5 included significant changes to Java syntax through a new feature called Generics.
In October 2009, Google released the Android software development kit (SDK), which made it possible for mobile device developers to write applications for Android-based devices using Java APIs.
Oracle Corp. took over the Java platform when it acquired Sun Microsystems in January 2010. The acquisition delayed the release of Java 7, and Oracle scaled back some of the more ambitious plans for it.
Java 8 was released in March 2014. It included Lambda expressions, which are common features in many competing languages but had been absent in Java. With Lambda expressions, developers can write applications using a functional approach, as opposed to an object-oriented one.
March of 2018 saw the release of Java 10 followed by Java 11 in September 2018. Java 12 was released in March of 2019.
Oracle vs. Google lawsuit: Java and Android
On Aug. 10, 2010, Oracle launched the first of two lawsuits against Google, the second of which sought $8.8 billion in damages over the use of the Java programming language in the Android SDK.
Oracle alleged copyright infringement and that Google's implementation of various Java APIs used code copied directly from Oracle's implementation. The litigation ended in May 2016 as both trials found in favor of Google. Jurors decided that Android's use of the Java APIs constituted fair use and awarded no damages to Oracle.
As of 2016, more than half of all handheld phones in the world run on Android, giving Java an incredibly strong foothold in the mobile market.
The Big Picture: We can answer much more interesting questions about variables when we compare distributions for different groups. Below is a histogram of the Average Wind Speed for every day in 1989.
The Big Picture (cont.): The distribution is unimodal and skewed to the right. The high value may be an outlier. Comparing distributions can be much more interesting than just describing a single distribution.
The Five-Number Summary: The five-number summary of a distribution reports its median, quartiles, and extremes (maximum and minimum). Example: the five-number summary for the daily wind speed is Max = 8.67, Q3 = 2.93, Median = 1.90, Q1 = 1.15, Min = 0.20.
The Five-Number Summary: Consists of the minimum value, Q1, the median, Q3, and the maximum value, listed in that order. It offers a reasonably complete description of the center and spread, can be calculated on the TI-83/84 using 1-Var Stats, and is used to construct the boxplot. Example five-number summaries: (1) 20, 27, 34, 50, 86; (2) 5, 10, 18.5, 29, 33.
Daily Wind Speed: Making Boxplots. A boxplot is a graphical display of the five-number summary. Boxplots are useful when comparing groups. Boxplots are particularly good at pointing out outliers.
Boxplot: A graph of the five-number summary that can be drawn either horizontally or vertically. The box represents the IQR (middle 50%) of the data. Boxplots show less detail than histograms or stemplots, so they are best used for side-by-side comparison of more than one distribution.
Constructing a Boxplot: Draw a scale below the boxplot and label it. Draw a vertical line above the value of Q1; this forms the left end of the box. Draw a vertical line above the value of Q3; this forms the right end of the box. Draw a vertical line above the value of the median and complete the box. Extend the "left whisker" to the minimum value. Extend the "right whisker" to the maximum value. Give the graph a descriptive title.
What About Outliers? Recall that an outlier is an extremely small or extremely large data value when compared with the rest of the data values. What should we do about outliers? Try to understand them in the context of the data: they may be a data error, or there may be something special about them.
Outliers: If there are any clear outliers and you are reporting the mean and standard deviation, report them with the outliers present and with the outliers removed. The differences may be quite revealing. Note: the median and IQR are not likely to be affected by the outliers. The following procedure allows us to check whether a data value can be considered an outlier.
Testing for Outliers: The IQR is used to determine whether extreme values are actually outliers. An observation is an outlier if it falls more than 1.5 times the IQR below Q1 or above Q3. To test for outliers, construct an upper and a lower fence: Upper Fence = Q3 + (1.5)IQR and Lower Fence = Q1 – (1.5)IQR. If an observation falls outside the fences (i.e., greater than the upper fence or less than the lower fence), then it is an outlier.
More Outliers: A far outlier is a data value farther than 3 IQRs from the quartiles.
Example 1 (odd number data set): Data: 20, 25, 25, 27, 28, 31, 33, 34, 36, 37, 44, 50, 59, 85, 86. Find Q1, M, Q3, IQR and any outliers. Sort the data; the median is 34, Q1 (the median of the lower half) is 27, and Q3 (the median of the upper half) is 50. IQR = 50 – 27 = 23. Upper Fence = Q3 + (1.5)IQR = 50 + 34.5 = 84.5. Lower Fence = Q1 – (1.5)IQR = 27 – 34.5 = -7.5. Outliers: 85 and 86 (greater than the upper fence).
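As a cross-check on the arithmetic in Example 1, here is a minimal Java sketch (not part of the original slides) that computes the five-number summary, IQR, fences, and outliers using the median-exclusive quartile convention the slides follow (the same one the TI-83/84 uses):

```java
import java.util.Arrays;

public class OutlierCheck {
    // Median of the sorted slice data[from..to)
    static double median(double[] d, int from, int to) {
        int n = to - from;
        int mid = from + n / 2;
        return (n % 2 == 1) ? d[mid] : (d[mid - 1] + d[mid]) / 2.0;
    }

    public static void main(String[] args) {
        double[] data = {20, 25, 25, 27, 28, 31, 33, 34, 36, 37, 44, 50, 59, 85, 86};
        Arrays.sort(data);
        int n = data.length;

        double med = median(data, 0, n);
        // Median-exclusive halves: for odd n the middle value is dropped.
        double q1 = median(data, 0, n / 2);
        double q3 = median(data, (n + 1) / 2, n);
        double iqr = q3 - q1;
        double lowerFence = q1 - 1.5 * iqr;
        double upperFence = q3 + 1.5 * iqr;

        System.out.printf("Min=%.1f Q1=%.1f Median=%.1f Q3=%.1f Max=%.1f%n",
                data[0], q1, med, q3, data[n - 1]);
        System.out.printf("IQR=%.1f  Lower fence=%.1f  Upper fence=%.1f%n",
                iqr, lowerFence, upperFence);
        for (double x : data) {
            if (x < lowerFence || x > upperFence) {
                System.out.println("Outlier: " + x);   // prints 85.0 and 86.0
            }
        }
    }
}
```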
Example 2 (even number data set): Find Q1, M, Q3, IQR and outliers for a data set with Q1 = 10, median = 18.5, and Q3 = 29 (the second five-number summary listed earlier). IQR = 29 – 10 = 19. Upper Fence = Q3 + (1.5)IQR = 29 + 28.5 = 57.5. Lower Fence = Q1 – (1.5)IQR = 10 – 28.5 = -18.5. No outliers.
Your Turn: Calculate Outliers. The data below represent the 20 countries with the largest number of total Olympic medals, including the United States, which had 101 medals for the 1996 Atlanta games. Determine whether the number of medals won by the United States is an outlier relative to the numbers for the other countries. Data values: 63, 65, 50, 37, 35, 41, 25, 23, 27, 21, 17, 17, 20, 19, 22, 15, 15, 15, 15, 101.
Solution: The IQR = 39 – 17 = 22. Lower Fence = Q1 – 1.5(IQR) = 17 – (1.5)(22) = -16, and Upper Fence = Q3 + 1.5(IQR) = 39 + (1.5)(22) = 72. Since 101 > 72, the value 101 is an outlier relative to the rest of the values in the data set. That is, the number of medals won by the United States is an outlier relative to the numbers won by the other 19 countries for the 1996 Atlanta Olympic Games.
Solution (cont.): Pictorial representation of the outlier in the number of Olympic medals won by the United States in the 1996 Atlanta Games, showing the outlier at 101 relative to the lower fence at -16 and the upper fence at +72.
Modified Boxplot: Plots outliers as isolated points, whereas regular boxplots conceal outliers. From now on when we say "boxplot," we mean "modified boxplot"; the modified boxplot is more useful than the plain boxplot. Constructing a modified boxplot is the same as constructing a boxplot, with the exception of the "whiskers": extend the left whisker to the minimum value if there are no outliers, or to the smallest data value at or above the lower fence if there are; extend the right whisker to the maximum value if there are no outliers, or to the largest data value at or below the upper fence. Outliers (either low or high) are then represented by a dot or an asterisk.
Example: Constructing Boxplots. Draw a single vertical axis spanning the range of the data. Draw short horizontal lines at the lower and upper quartiles and at the median. Then connect them with vertical lines to form a box.
Example: Constructing Boxplots (cont.). Erect "fences" around the main part of the data. The upper fence is 1.5 IQRs above the upper quartile. The lower fence is 1.5 IQRs below the lower quartile. Note: the fences only help with constructing the boxplot and should not appear in the final display.
Constructing Boxplots (cont.): Use the fences to grow "whiskers." Draw lines from the ends of the box up and down to the most extreme data values found within the fences. If a data value falls outside one of the fences, we do not connect it with a whisker.
Constructing Boxplots (cont.): Add the outliers by displaying any data values beyond the fences with special symbols. We often use a different symbol for "far outliers" that are farther than 3 IQRs from the quartiles.
Information That Can Be Obtained From a Box Plot: (figures showing boxplots of distributions that are skewed left and skewed right)
Information That Can Be Obtained From a Box Plot – Looking at the Median: If the median is close to the center of the box, the distribution of the data values will be approximately symmetrical. If the median is to the left of the center of the box, the distribution will be skewed right. If the median is to the right of the center of the box, the distribution will be skewed left.
Information That Can Be Obtained From a Box Plot – Looking at the Length of the Whiskers: If the whiskers are approximately the same length, the distribution of the data values will be approximately symmetrical. If the right whisker is longer than the left whisker, the distribution will be skewed right. If the left whisker is longer than the right whisker, the distribution will be skewed left.
Comparing Distributions: Compare the histogram and boxplot for daily wind speeds. How does each display represent the distribution? The shape of a distribution is not always evident in a boxplot. Boxplots are particularly good at pointing out outliers.
Comparing Groups: It is almost always more interesting to compare groups. With histograms, note the shapes, centers, and spreads of the two distributions. When using histograms to compare data sets, make sure to use the same scale for both sets of data. What does this graphical display tell you?
Comparing Groups (cont.): The shapes, centers, and spreads of these two distributions are strikingly different. During spring and summer (histogram on the left), the distribution is skewed to the right. A typical day has an average wind speed of only 1 to 2 mph. In the colder months (histogram on the right), the shape is less strongly skewed and more spread out. The typical wind speed is higher, and days with average wind speeds above 3 mph are not unusual.
Comparing Groups (cont.): Boxplots offer an ideal balance of information and simplicity, hiding the details while displaying the overall summary information. We often plot them side by side for groups or categories we wish to compare. What do these boxplots tell you?
Comparing Groups (cont.): By placing the boxplots side by side, we can easily see which groups have higher medians, which have the greater IQRs, where the middle 50% of the data is located in each group, and which have the greater overall range. When the boxes are placed in order, we can get a general idea of patterns in both the centers and the spreads. Equally important, we can see past any outliers in making these comparisons because they've been displayed separately.
Re-expressing Skewed Data to Improve Symmetry: When the data are skewed, it can be hard to summarize them simply with a center and spread, and hard to decide whether the most extreme values are outliers or just part of a stretched-out tail. How can we say anything useful about such data? The secret is to re-express the data by applying a simple function (logarithms, square roots, and reciprocals) to each value.
Re-expressing Skewed Data to Improve Symmetry (cont.): One way to make a skewed distribution more symmetric is to re-express or transform the data by applying a simple function (e.g., a logarithmic function). Note the change in skewness from the raw data to the transformed data.
What Can Go Wrong? Beware of outliers. Be careful when comparing groups that have very different spreads. Consider these side-by-side boxplots of cotinine levels: re-expressing the data may help.
What have we learned? We've learned the value of comparing data groups and looking for patterns among groups and over time. We've seen that boxplots are very effective for comparing groups graphically. We've experienced the value of identifying and investigating outliers.
By Kevin Matyi
The motion of the moon is what causes eclipses, but the dramatic change in sunlight is what makes them so impressive to observers. But what exactly is happening when the moon passes in front of the sun?
The moon is blocking the sun’s light from reaching Earth, but there is more to the situation than just that. Their relative distance to Earth is one of the most important factors.
The sun is about 400 times farther from Earth than the moon and has a diameter about 400 times larger than the moon's. As a result, the sun and the moon (when the moon is near perigee) appear to be about the same size in the sky, allowing the moon to perfectly block out the sun and cast a shadow on Earth during a total eclipse.
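A rough back-of-the-envelope check of that "400 times" coincidence can be sketched in Java. The figures below are approximate averages (diameters and distances are rounded, and the moon's distance varies between perigee and apogee), so treat the output as illustrative only:

```java
public class AngularSize {
    public static void main(String[] args) {
        // Approximate values in kilometres (rounded averages).
        double sunDiameter  = 1_391_000;
        double sunDistance  = 149_600_000;
        double moonDiameter = 3_474;
        double moonDistance = 384_400;

        // Small-angle approximation: apparent (angular) size ~ diameter / distance.
        double sunAngular  = sunDiameter / sunDistance;
        double moonAngular = moonDiameter / moonDistance;

        System.out.printf("Sun  ~ %.4f rad%n", sunAngular);   // ~0.0093 rad (~0.53 degrees)
        System.out.printf("Moon ~ %.4f rad%n", moonAngular);  // ~0.0090 rad (~0.52 degrees)
        System.out.printf("Distance ratio ~ %.0f, size ratio ~ %.0f%n",
                sunDistance / moonDistance, sunDiameter / moonDiameter);
    }
}
```

Because the two ratios are nearly equal, the two disks appear almost exactly the same size from Earth.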
The shadow we see while in the path of totality is called the umbra, and the shadow of the surrounding partial eclipse is the penumbra. The shadow from an annular eclipse (when the moon appears smaller than the sun during an eclipse, and so a ring of light is visible around it) is called the antumbra.
The physics of how each type of shadow is formed is difficult to explain but easy to visualize, so before I tell you about them, here is a picture (technically a ray diagram) of what happens during an eclipse:
For a total eclipse, the moon has to block out all of the sun’s light. To put the moon in the best position, imagine that a person on Earth is standing under the exact middle of the moon, the centerline of a total solar eclipse.
In this case, light coming from the middle of the sun is clearly going to be blocked by the moon, since it is directly in the way and visible light cannot penetrate rock. The most difficult light to block will be coming from the top and bottom of the sun.
To figure out whether the light will be blocked, a bit of drawing can help. If the light is coming from the exact bottom of the sun and you are wondering if a person can see the light while under the exact center of the moon, draw a line between where the light starts and the person’s eyes.
Does the moon get in the way of the line? If yes, then the person is experiencing a total solar eclipse. None of the sun’s light can get past the moon, so the sun is fully blocked.
If the answer is no, but the person is still standing under the center of the moon, then they are in an annular eclipse. The moon is in the perfect position to block all of the sun’s light, but it still fails to do so. In this case, it will appear to be a large black circle with a ring of sunlight called an annulus around it.
A partial eclipse is the most difficult to explain, since it has the most variability. All but a sliver of the sun may be blocked, or the moon can barely cover any of the sun. In general though, a partial solar eclipse happens when the moon is not quite directly between the observer and sun, but is still in the way of some sunlight.
You can use the same process for determining whether a person is experiencing a total solar eclipse to figure out if they are in the penumbral shadow of the moon. A slight complication is that the moon is off center, so it matters more where the origin point of the light is.
If the person is standing a little north of the moon’s center, then the line from origin to person should start from the sun’s southernmost point, the bottom, since the northern light is less likely to be blocked due to the moon being a bit more to the south from the person’s perspective.
If any of the sun’s light is blocked by the moon, then the person is experiencing a partial solar eclipse. The limit of this blockage, where only the slightest amount of sunlight is blocked, is the edge of the penumbra shadow.
If the moon is not blocking any light, then the moon may be close to the sun but there is no eclipse happening on that spot of Earth.
Innate immunology focuses on the immune system’s nonspecific defenses. These defenses include chemicals that occur in the blood, physical barriers and cells that are created to attack foreign substances. The study of this area helps researchers determine how quickly the body responds to antigen exposures.
The innate and adaptive immune systems work toward the same goal of protecting the body from foreign substances, but how they accomplish this goal differs. The innate immune system is ready to defend from the moment that exposure occurs, but the adaptive immune system takes more time to respond — it has to adapt. Innate immunology has shown that it is the adaptive immune system that defends against the specific antigen that triggered the response; the innate immune system is nonspecific because it reacts to all antigens equally. The innate immune system does not remember antigens to which it has been previously exposed.
Although innate immunology studies the innate immune system, it must also study how the innate and adaptive systems work together. There are many components of one that influence the components of the other. For example, if the adaptive immune system discovers an organism that it previously encountered, it will work more rapidly to destroy it without taking as much time to adapt, but if there is an additional organism or infection that is not recognized, the innate immune system will already be defending when the adaptive system kicks in. At the same time, when the innate immune system creates a cellular defense, the adaptive system responds accordingly. The innate system reacts first, however, because the adaptive system needs more time to recognize and respond.
Through innate immunology, research has shown that the innate system has several elements. Anatomical barriers, such as the skin, are the first, physical line of defense. If the anatomical barriers are damaged and infection starts, further defenses such as inflammation respond. Macrophages and natural killer cells provide part of the cellular barriers.
By learning how the immune system works to fight infection, researchers contribute important biomedical information. The details learned about the innate system also reveal how the body will respond to specific organisms. This helps determine how effective medications can be and when they are needed to boost the body's natural responses.
Biomass energy is a growing source of energy in the United States and other countries around the world. It can be produced from many types of organic matter and the product can be used to provide a cleaner alternative to traditional electricity and transportation fuel sources. However, there are also a range of disadvantages associated with biomass energy.
What is Biomass Energy?
Biomass energy is a relatively clean, renewable energy source involving the use of organic matter which collected energy from the Sun and converted it into chemical energy when it was alive. It is a renewable source as this matter is continually growing and absorbing the Sun’s energy, particularly where biomass crops are farmed. Most biomass energy is sourced from plants which have gathered energy from the Sun through the process of photosynthesis. This form of energy has been used by humans for thousands of years, since humans began to burn wood for heat. Advancements in technology have allowed biomass energy to be used in a wide variety of applications, including liquids and gases used for biofuels to power transport.
One of the major advantages of biomass energy is that it produces a smaller amount of harmful greenhouse gases than fossil fuel alternatives produce. Biomass energy produces less carbon than fossil fuel energy. Levels of the greenhouse gases methane and carbon dioxide could also be reduced through the use of biomass energy sources as these gases are produced by organic matter if left to decay without being used for a purpose such as this.
Another environmental benefit of biomass energy is that it produces lower levels of sulfur dioxide which is a major component of acid rain. Biomass energy is easily sustainable if crops are farmed and managed effectively and is available wherever plants can be grown. One further advantage of biomass energy is that it can be used for a range of different purposes, including heat production, fuel for cars and the production of electricity.
One of the disadvantages of biomass energy is the amount of space that it requires. A great deal of land and water are needed for some biomass crops to be produced and, when they have grown, the product requires a large amount of storage room before being converted into energy. Another disadvantage is that biomass energy is not entirely clean: some greenhouse gases are still produced, although the levels of these gases are far less than those produced by fossil fuels.
One other disadvantage of biomass fuel production is that it is quite expensive, with costs including paying for the large amount of labor involved and transportation costs as this type of energy must be produced close to where the source is obtained.
The main uses of biomass energy today are for producing electricity through driving turbines and providing biofuel for transportation, such as biodiesel and ethanol. Although there are some disadvantages to using biomass energy, the benefits outweigh them when compared to other energy sources such as fossil fuels. It is for this reason that countries around the world are developing programs to increase the production of biomass energy.
New method of determining geographic origin of humans
Leiden researchers have developed a new method of determining the geographic origin of humans. Archaeologist Jason Laffoon and his team used the technique to discover where precolonial pioneers in the Caribbean region came from.
When we research human migration in the past, it is a challenge to pinpoint where exactly different individuals or groups came from. DNA studies are widely used methods for research on human remains, but in terms of geographic origins, these genetic studies provide more information about the ancestors of the individual found, according to Dr Laffoon. ‘The DNA cannot tell us about an individual's personal origin, where he or she spent their childhood,' he explains.
Isotope research is increasingly being used to determine the origin of archaeological (or forensic) remains. Based on isotope ratios preserved in the human remains, it is possible to distinguish between locals and immigrants. 'Although we can trace which individuals are immigrants in the area, we still don't know where they came from.'
Laffoon and his team combined isotope research with GIS (Geographic Information Systems) and statistical analyses to develop a new method that makes it possible to determine where the individual originally came from. Laffoon: ‘We can combine this with biochemical analysis of human remains, such as teeth, so that we are able to determine the geographic origin of individuals.'
The team tested this new method on the tooth of a modern person of known origin - from Caracas, Venezuela - and the results showed a very good match with this location. The team then tested the model on two archaeological teeth from different sites in the Caribbean, whose origins were suspected but not confirmed. While less precise than with the modern tooth, the results of these tests also indicated specific regions as the places of origin.
Archaeology and forensic research
The new method earned Laffoon's team a publication in the scientific journal PLOS ONE. The team is now working on further validating the method. 'The technique has the potential to strongly improve the precision and accuracy with which the origins can be determined,' says Laffoon. 'This could have very important implications not only for archaeological studies of migrations but also for forensic research into, for example, the identification of human remains in crime investigations.'
Today's lesson will revolve around the cross product of two vectors and its geometric implications. Like yesterday's dot product lesson, we are learning tools that will allow us to find the equation of a plane.
It can be difficult to make a lesson like this meaningful to students since we are learning a complicated formula and not giving an application until tomorrow. In my Selling a Lesson reflection, I talk about how I combat this concept that may seem unnecessary to students.
I begin the lesson by explaining that we are going to be learning a new operation called the cross product. I say that it is an operation just like yesterday's dot product, but that the answer has a geometric meaning. Here are some introductory points that I will make:
Once we get some of the introductory information taken care of, we can dive right in to the cross product formula. The YouTube video below is a nice resource to show your students to build upon yesterday's work with the dot product. The derivation of the cross product formula is lengthy, and I would never expect my students to be able to replicate it, but exposing them to some of the pertinent ideas will make it more meaningful to them. Thus, you may choose to show all or some of this video, but at least students will loosely understand where it comes from.
After the video we will go through an example together of finding the cross product of two vectors. The method I use involves matrices and cofactors. I have students set up a matrix like this and then make new 2x2 matrices and find the determinant of each to find the i, j, and k coefficients of the cross product vector. I instruct students that to find the 2x2 matrix for the i coefficient, you delete the i row and column of the 3x3 matrix and then take the determinant of this new 2x2 matrix. The process is repeated for the j and k coefficients. Furthermore, the coefficients of the i, j, and k terms will alternate positive and negative signs.
After going through one example together, students are usually in a good place to try finding the cross product on their own. I will put two vectors u and v on the board and have one half of the class find u x v and the other half find v x u. After working, students will share their answers and we will see that something has happened - the coefficients of the cross product vectors are opposites. Thus the cross product operation is not commutative. I will ask students to think about this and usually a student will realize that the new vectors are both orthogonal to u and v, but they are going in different directions.
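Here is a small Java sketch of my own (the vectors are made up for illustration) that computes the cross product with the cofactor formula from the lesson and confirms the two observations above: u x v is the opposite of v x u, and both results are orthogonal to u and v (dot products of zero).

```java
public class CrossProductDemo {
    // Cofactor expansion of the 3x3 determinant: the i, j, k coefficients,
    // with the alternating sign on the j term.
    static double[] cross(double[] u, double[] v) {
        return new double[] {
            u[1] * v[2] - u[2] * v[1],     // i coefficient
            -(u[0] * v[2] - u[2] * v[0]),  // j coefficient
            u[0] * v[1] - u[1] * v[0]      // k coefficient
        };
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    public static void main(String[] args) {
        double[] u = {2, -1, 3};   // made-up example vectors
        double[] v = {1, 4, -2};

        double[] uxv = cross(u, v);
        double[] vxu = cross(v, u);

        System.out.printf("u x v = <%.0f, %.0f, %.0f>%n", uxv[0], uxv[1], uxv[2]); // <-10, 7, 9>
        System.out.printf("v x u = <%.0f, %.0f, %.0f>%n", vxu[0], vxu[1], vxu[2]); // <10, -7, -9>
        // Orthogonality check: both dot products are 0.
        System.out.println("(u x v) . u = " + dot(uxv, u));
        System.out.println("(u x v) . v = " + dot(uxv, v));
    }
}
```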
To end this lesson, I will ask the following questions to recap the big ideas of what we learned today.
After these concluding questions, I will assign some problems from our textbook as homework.
Assessing student learning, including varied, valid, and reliable metrics, instruments, and practices.
Assessment is important because it can demonstrate whether districts are meeting their educational goals, and its results can affect decisions about student advancement, instructional needs, and funding. While standardized testing has dominated the assessment conversation in the recent past, districts are now looking beyond tests towards measures aligned with their goals of boosting student engagement and 21st century skill development. At the classroom level, teachers are rethinking grading, using formative assessment and portfolios to invite students to set and monitor their own learning goals. However, it can be difficult to ensure that these new instruments and practices are valid and reliable. Explore the challenges related to Assessment -- 21st Century Skills Assessment, Measuring Student Engagement, Formative Assessment, and Grading -- below.
21st Century Skills Assessment
Many districts are looking beyond statewide assessments to broader definitions and measures of student achievement. They are focusing on boosting “21st century skills” such as critical thinking, problem solving, communication, collaboration, and creativity. But these skills do not yet have widely adopted tools for assessment. In this context, how can districts support students and recognize progress as they develop 21st century skills?
Many teachers are working to implement formative assessments to gauge student learning in real-time, and to help students set and monitor their own learning goals using data. While data collected during everyday learning — through student-driven projects, performance tasks, and digital learning tools — can provide valuable insights, these strategies and tools are in various stages of implementation. How can districts support educators to use formative assessment to improve student learning using evidence?
Teachers, students, and parents agree that grading can be subjective, and that grades may not always reflect student mastery of skills and content. Many schools are also thinking about how grades can provide more purposeful feedback to help students improve. How can schools shift from letter grades toward a more objective system of grading?
Morphological structure of the English word
If we describe a word as an autonomous unit of language in which a particular meaning is associated with a particular sound complex, and which is capable of a particular grammatical employment and able to form a sentence by itself, we can distinguish it from the other fundamental language unit, namely the morpheme.
A morpheme is also an association of a given meaning with a given sound pattern. But unlike a word it is not autonomous. Morphemes occur in speech only as constituent parts of words, not independently, although a word may consist of a single morpheme. Nor are they divisible into smaller meaningful units. That is why the morpheme may be defined as the minimum meaningful language unit.
According to the role they play in constructing words, morphemes are subdivided into roots and affixes. The latter are further subdivided, according to their position, into prefixes, suffixes and infixes, and according to their function and meaning, into derivational and functional affixes, the latter also called endings or outer formatives.
When a derivational or functional affix is stripped from the word, what remains is a stem (or a stem base). The stem expresses the lexical and the part-of-speech meaning. For the word hearty and for the paradigm heart (sing.) – hearts (pl.) the stem may be represented as heart-. This stem is a single morpheme, it contains nothing but the root, so it is a simple stem. It is also a free stem because it is homonymous to the word heart.
A stem may also be defined as the part of the word that remains unchanged throughout its paradigm. The stem of the paradigm hearty – heartier – (the) heartiest is hearty-. It is a free stem, but as it consists of a root morpheme and an affix, it is not simple but derived. Thus, a stem containing one or more affixes is a derived stem. If after deducting the affix the remaining stem is not homonymous to a separate word of the same root, we call it a bound stem. Thus, in the word cordial 'proceeding as if from the heart', the adjective-forming suffix can be separated on the analogy with such words as bronchial, radial, social. The remaining stem, however, cannot form a separate word by itself; it is bound. In cordially and cordiality, on the other hand, the derived stems are free.
Bound stems are especially characteristic of loan words. The point may be illustrated by the following French borrowings: arrogance, charity, courage, coward, distort, involve, notion, legible and tolerable, to give but a few. After the affixes of these words are taken away the remaining elements are: arrog-, char-, cour-, cow-, -tort, -volve, not-, leg-, toler-, which do not coincide with any semantically related independent words.
Roots are the main morphemic vehicles of a given idea in a given language at a given stage of its development. A root may also be regarded as the ultimate constituent element which remains after the removal of all functional and derivational affixes and does not admit any further analysis. It is the common element of words within a word-family. Thus, -heart- is the common root of the following series of words: heart, hearten, dishearten, heartily, heartless, hearty, heartiness, sweetheart, heart-broken, kind-hearted, whole-heartedly, etc. In some of these, as, for example, in hearten, there is only one root; in others the root -heart- is combined with some other root, thus forming a compound like sweetheart.
We shall now present the different types of morphemes starting with the root. It will at once be noticed that the root in English is very often homonymous with the word.
A suffix is a derivational morpheme following the stem and forming a new derivative in a different part of speech or a different word class, cf. -en, -y, -less in hearten, hearty, heartless. When both the underlying and the resultant forms belong to the same part of speech, the suffix serves to differentiate between lexico-grammatical classes by rendering some very general lexico-grammatical meaning. For instance, both -ify and -er are verb suffixes, but the first characterizes causative verbs, such as horrify, purify, rarefy, simplify, whereas the second is mostly typical of frequentative verbs: flicker, shimmer, twitter and the like.
A prefix is a derivational morpheme standing before the root and modifying meaning, cf. hearten – dishearten. It is only with verbs and statives that a prefix may serve to distinguish one part of speech from another, as in earth n – unearth v, sleep n – asleep (stative). It is interesting that as a prefix en- may carry the same meaning of being or bringing into a certain state as the suffix -en, cf. enable, encamp, endanger, endear, enslave and fasten, darken, deepen, lengthen, strengthen.
A planter, Reconstruction-era politician, Republican civil servant, and important historian, John Roy Lynch was born on 10 September 1847 on Tacony plantation, near the town of Vidalia, Louisiana, in Concordia Parish. The biracial son of plantation manager Patrick Lynch, an Irish immigrant, and the enslaved Catherine White, Lynch followed his mother's status into slavery. His father had been saving to buy the family's freedom but died before he could do so, leaving them enslaved. Later sold across the Mississippi River to Natchez, Lynch finally gained freedom after Union troops occupied the city in 1863. Lynch remained in Natchez, working as a photographer during the day and attending school at night.
In 1869 Gov. Adelbert Ames appointed Lynch to serve as a justice of the peace. Later that year he was elected to the Mississippi House of Representatives, where his intellect and oratorical skill apparently impressed both black and white colleagues. His legislative record led not only to his reelection but also to his 1872 selection as Speaker of the House.
In 1872 Lynch won a seat in the US House of Representatives, and he was reelected two years later. He lost the seat in 1876 but he returned to Congress for almost a year after contesting Gen. James R. Chalmers’s election in 1882. Lynch again failed to win reelection in 1884 and retired to his plantation in Adams County. On 18 December 1884 he married Mobile native Ella Wickham Somerville.
Although he considered himself a planter, Lynch continued to study law and engage in politics. From 1883 to 1889 he served the Republicans in several key state and national positions, ultimately receiving a federal appointment from Pres. Benjamin Harrison to serve as an auditor in the Navy Department, a post Lynch held from 1889 to 1893. He briefly returned to Mississippi and gained admittance to the state bar in 1896. He practiced law in Washington, D.C., from 1897 to 1898, when Pres. William McKinley appointed him to serve as a US Army paymaster during the Spanish-American War. Lynch divorced his wife in 1900 and remained in the army, attaining the rank of major and spending three years in Cuba before moving on to postings in San Francisco, Hawaii, and the Philippines.
He retired from the army in 1911, married Cora Williamson, and moved to Chicago, where he reestablished his legal practice and launched his writing career. Having experienced Reconstruction firsthand, Lynch was offended by the scholarship written under the direction of William Archibald Dunning, which was sympathetic to white southerners and portrayed Reconstruction as an era of Republican corruption, former slaves’ barbarity, and federal vindictiveness. In 1913 he published Facts of Reconstruction, an alternative to the Dunning School and an inspiration to later revisionist historians. Lynch further challenged scholarly consensus in Reminiscences of an Active Life: The Autobiography of John Roy Lynch and Some Historical Errors of James Ford Rhodes. Lynch died in Chicago on 2 November 1939 and was interred at Arlington National Cemetery.
- Biographical Directory of the United States Congress (1950)
- W. E. B. Du Bois, Black Reconstruction in America, 1860–1880 (1935)
- John Roy Lynch, The Facts of Reconstruction (1913)
- John Roy Lynch, Reminiscences of an Active Life: The Autobiography of John Roy Lynch, ed. John Hope Franklin (1969)
- John Roy Lynch, Some Historical Errors of James Ford Rhodes (1922)
- US House of Representatives, History, Art, and Archives website, history.house.gov
- Vernon Lane Wharton, The Negro in Mississippi, 1865–1890 (1947)
Even more revolutionary than America’s Declaration of Independence was its declaration of governance. It is easily overlooked that while Americans proclaimed independence on July 4, 1776, they had to wait for the Constitution to govern it. Although lacking the former document’s lightning effect, it was the latter that has assured America’s lasting impact.
It is always thus: The first, regardless of how notable its successors, holds an unassailable advantage. It is no less true with America’s great documents. “When in the course of human events…” “We hold these truths to be self-evident…” In a less relativistic age, these phrases were hallowed — rather than deconstructed — into every school child’s heart.
Sadly, if the words are not reverenced as formerly, the event they announced still holds pride of patriotic place in America. Independence Day remains the American holiday. The document and event are indelible. As undoubtedly important as this document was, and remains, to America, it is important to remember its less celebrated, but even more important, successor.
Why We Preference the Declaration
The Constitution does not get its own day for several reasons. For one, it has no single day of origin. It had many. Its convention crafted it over the course of the summer of 1787 and finally adopted it on September 17. To take effect, it had to be ratified. The 13 states did so over three years — Delaware first on December 7, 1787; New Hampshire putting it into effect on June 21, 1788; and Rhode Island finally doing so on May 29, 1790.
The Declaration of Independence is also advantaged by being simpler and more accessible. Despite its list of specific grievances, its message is general: “That these United Colonies are, and of Right, ought to be free…” Literally revolutionary and divisive then, its message is simple and unifying now: Freedom.
If the Declaration of Independence provides all the “w’s” — America’s who, what, when, where, and why — the Constitution was stuck with the unenviable task of supplying the “how.” In contrast to the Declaration of Independence’s general call for freedom, the Constitution was left with the far more difficult task of defining how freedom would govern. Where the Declaration of Independence offers freedom’s limitless promise, the Constitution is stuck with its limiting reality.
How to Limit Freedom as Little as Possible
During the Constitutional Convention, Alexander Hamilton underscored the job’s difficulty: “I believe the British government forms the best model the world ever produced.” Of course, the colonies had severed themselves from that very government. They could not then simply replicate it here, even had they possessed the means to do so. Instead, they had to forge into the uncharted.
In contrast to the Declaration of Independence’s more enviable task of proclaiming freedom, the Constitution had to limit it. Inevitably that meant giving up some to a national government. Conscious of this, its designers sought to strictly limit the national government’s authority — something hardly matching any extant government models of their day, or today’s, for that matter.
For this reason, the Constitution’s largest limitations on freedom are all on the national government it implements. Its original text narrowly defines the national government’s role. Its first ten amendments, the Bill of Rights, explicitly limit it further. The last two of these, the Ninth and Tenth, reserve the rights not named in the Constitution to the people and the states.
Since then, only two amendments have increased government power over the individual and one of those — the Eighteenth, authorizing Prohibition — was later repealed by the Twenty-First.
The Constitution Makes It Possible For Us to Celebrate
Because of its more difficult task, the Constitution is far more divisive than the Declaration of Independence. We still fight over it, just as its crafters did, with the fundamental divide still being over its original purpose to create, but strictly limit, government.
While America declared its freedom on this day more than two centuries ago, it lived just 11 years under the freedom it declared, ineffectually waging a war and ineffectively governing under it. On the brink of dissolution, the 13 original states seized upon the Constitution as their solution. For 230 years, it has stood up to its more difficult job of implementing freedom.
To appreciate that job’s difficulty, consider how many peoples have declared their freedom since then, and how few have effectively retained it.
Recognizing the Constitution’s irreplaceable role in no way disparages the Declaration of Independence’s promise and courage. It is rightly honored on this, its and America’s day.
However, it is worth pausing to remember that the reason we recall it so fondly is the document that followed it and embodied it so effectively. It designed a government from whole cloth and then strictly limited its scope — both unheard of at the time. The Constitution’s quiet effectiveness is the reason we loudly celebrate the Declaration of Independence today.
What are Digital Signatures?
As an inevitable consequence of the Internet, electronic communication has become acceptable in many contexts, altogether replacing traditional written communication. However, traditional documents were usually validated with the presence of a signature, which is more difficult to attest on an electronic document. A digital signature conceptually mimics a person’s unique signature, in that it validates an electronic document.
Digital signatures are composed of two elements – a message hash and a private key. A message hash is a uniquely generated sequence of numbers that cannot be reverse-engineered to obtain the original message. The hash is then encrypted using the sender’s private key. The recipient decrypts the hash using the sender’s public key. The electronic document is also run through the hashing algorithm to check whether both hashes are the same – thereby confirming the sender did indeed author the document in its current form and that it was not altered in any way before reaching its intended recipient.
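As an illustration of this hash-then-encrypt idea, here is a minimal sketch using Java's standard java.security API. The key size and algorithm names are reasonable choices rather than a prescription, and real systems wrap this in certificate and key-distribution infrastructure:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        byte[] document = "An electronic document".getBytes(StandardCharsets.UTF_8);

        // The sender's key pair: the private key signs, the public key verifies.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair keys = kpg.generateKeyPair();

        // Sign: hash the document (SHA-256) and encrypt the hash with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(document);
        byte[] signature = signer.sign();

        // Verify: the recipient recomputes the hash and checks it against the
        // signature using the sender's public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(document);
        System.out.println("Signature valid? " + verifier.verify(signature)); // true

        // Any alteration of the document makes verification fail.
        document[0] ^= 1;
        verifier.initVerify(keys.getPublic());
        verifier.update(document);
        System.out.println("After tampering:  " + verifier.verify(signature)); // false
    }
}
```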
Understanding the Email Encryption Process
There are a number of methods by which emails can be encrypted for secure transfer, and public-key encryption is one of them. The premise is similar to that used when digitally signing electronic documents, in that there is a pair of keys generated- one private and the other public.
In the case of email encryption, the idea is to keep the contents secure from all unauthorized viewing. Therefore a sender identifies a recipient’s public key, and encrypts the email with that key. The private key is retained with the recipient, and is used to decrypt all communication encrypted using the corresponding public key.
Using this method, any communication encrypted using the recipient’s public key can only be read by those people with access to the correct private key. Therefore all senders can be assured of complete privacy of their emails, given that the private key is secure.
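A minimal Java sketch of the same idea appears below. It is illustration only: RSA on its own can encrypt only short messages, so real email encryption schemes such as PGP or S/MIME encrypt the message with a symmetric key and then encrypt that key with the recipient's public key.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class EncryptDemo {
    public static void main(String[] args) throws Exception {
        // The recipient's key pair: the public key is shared with senders,
        // the private key never leaves the recipient.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair recipient = kpg.generateKeyPair();

        // The sender encrypts with the recipient's PUBLIC key.
        Cipher encrypt = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        encrypt.init(Cipher.ENCRYPT_MODE, recipient.getPublic());
        byte[] ciphertext = encrypt.doFinal(
                "A short private note".getBytes(StandardCharsets.UTF_8));

        // Only the recipient's PRIVATE key can decrypt it.
        Cipher decrypt = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        decrypt.init(Cipher.DECRYPT_MODE, recipient.getPrivate());
        String plaintext = new String(decrypt.doFinal(ciphertext), StandardCharsets.UTF_8);

        System.out.println(plaintext); // "A short private note"
    }
}
```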
Comparison of Digital Signatures and Email Encryption
Although both methods of encryption use asymmetrical keys, the goal of a digital signature and that of email encryption are entirely different. A digital signature is used to verify that a particular electronic document was created by a particular individual and has not been altered in the transmission process. The process is used to authenticate the author and the contents of the document beyond a shadow of doubt. Email encryption, on the other hand, is used to maintain the privacy of the contents of an email. Generally, information that should not be privy to everyone is subject to email encryption.
When implementing digital signatures, the public key is used for decryption while its corresponding private key is used for encryption. The process for email encryption is exactly the opposite.
Email encryption and digital signatures are certainly not mutually exclusive, even though they have differences. There are occasions where establishing the identity of an email’s author is just as important as maintaining the security of its contents. This is a scenario where both technologies would be used in conjunction with each other.
Head Start programs must ensure teachers and relevant staff provide responsive care, effective teaching, and an organized learning environment. They also must promote children's healthy growth and development in ways that align with school readiness goals.
Early childhood education includes the following categories:
- Dual Language Learners
- Learning Environments and Engaging Interactions
- Highly Individualized Teaching and Learning
Teachers and caregivers use responsive strategies to enhance the development of culturally and linguistically diverse children, including those who are dual language learners (DLLs). They also use a variety of assessment practices to determine and address the individual needs of children with disabilities, or those who may require special accommodations and curricular adaptations.
Professional development providers, education managers, and supervisors can use the resource sets below to support staff engaged in early education and care for all children ages birth to 5. Explore resources for successfully conducting workshops and sessions targeted toward improving teaching practices. Materials include presentations and handouts.
Dual Language Learners
See how professional development providers can work with teachers to help young children who are DLLs maintain their home languages and learn English.
- Dialogic Reading that Supports Children Who Are Dual Language Learners and Their Families
- 60 Minutes from Catalogue to Classroom (C2C)—Using Journal Articles for Professional Development:
Learning Environments and Engaging Interactions
Learn how to design environments that are responsive to children's active learning and promote relationship-building. Effective instructional strategies can build positive relationships with adults and peers.
- Designing Environments
- Classroom Transitions
- Zoning to Maximize Learning
- Environmental Support
- Schedules and Routines
- Creating a Caring Community
- Peer Support
- Teacher-to-Teacher Talk
- Adult Support
- The Teaching Loop
- Digging Deeper: Looking Beyond Behavior to Discover Meaning
- Making Learning Meaningful
- Activity Matrix: Organizing Learning Throughout the Day
Highly Individualized Teaching and Learning
Observe children to determine their individual strengths and abilities. Learn to effectively address these differences to promote learning and development for all children.
- SpecialQuest Multimedia Training Library
- Activity Simplification
- Invisible Support
- Materials to Support Learning
- Materials Adaptation
- Break It Down: Turning Goals into Everyday Teaching Opportunities
- Child Preferences
- Putting It Into Action
- Special Equipment
- Following Children's Lead
- Using Checklists
- Using Data to Inform Teaching
- Planning for Assessment
- Collecting and Using Anecdotal Records
- Collecting and Using Work Samples
- Collecting and Using Video
Create a Network Diagram of Your Work Plan
Planning and organizing is time well spent when managing a project. A network diagram is a flowchart that illustrates the order in which you plan to perform project activities. No matter how complex your project is, its network diagram has the following three elements: milestones, activities, and durations.
Milestone: Milestones take no time and consume no resources; they occur instantaneously. Think of them as signposts that signify a point in your trip to project completion. Milestones mark the start or end of one or more activities.
Activity: An activity is a component of work performed during the course of a project. Activities take time and consume resources; you describe them using action verbs. Examples of activities are design report and conduct survey.
Make sure you define activities and milestones clearly. The more clearly you define them, the more accurately you can estimate the time and resources needed to perform them, the more easily you can assign them to someone else, and the more meaningful your reporting of schedule progress becomes.
Duration: This represents the total number of work periods it takes to complete an activity. The amount of work effort required to complete the activity, people’s availability, and whether people can work on the activity at the same time all affect the activity’s duration.
Understanding the basis of a duration estimate helps you figure out ways to reduce it. For example, suppose you estimate that testing a software package requires that it run for 24 hours on a computer. If you can use the computer only 6 hours in any one day, the duration for your software test is four days. Doubling the number of people working on the test won’t reduce the duration to two days, but getting approval to use the computer for 12 hours a day will.
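To illustrate the arithmetic behind that duration estimate, here is a small hypothetical Java sketch; the numbers mirror the software-test example above:

```java
public class DurationEstimate {
    // Duration in work days, given the total hours an activity needs on a resource
    // and how many hours per day that resource is actually available.
    static int durationInDays(double requiredHours, double availableHoursPerDay) {
        return (int) Math.ceil(requiredHours / availableHoursPerDay);
    }

    public static void main(String[] args) {
        double testHours = 24;  // the test must run 24 hours on the computer

        System.out.println(durationInDays(testHours, 6));   // 4 days at 6 hours/day
        System.out.println(durationInDays(testHours, 12));  // 2 days at 12 hours/day
        // Adding more people does not change these numbers, because the constraint
        // is the computer's availability, not the amount of human effort.
    }
}
```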
Determining your project’s end date requires you to choose the dates that each project activity starts and ends and the dates that each milestone is reached. You can determine these dates with the help of a network diagram.
The activity-on-node technique for drawing a network diagram uses the following three symbols to describe the diagram’s three elements:
Boxes: Boxes represent activities and milestones. If the duration is 0, it’s a milestone; if it’s greater than 0, it’s an activity. Note that milestone boxes are sometimes highlighted with lines that are bold, double, or otherwise more noticeable.
Letter t: The letter t represents duration.
Arrows: Arrows represent the direction work flows from one activity or milestone to the next. Upon completion of an activity or reaching of a milestone, you can proceed either to a milestone or directly to another activity as indicated by the arrow(s) leaving that activity or milestone.
In this simple example of an activity-on-node network diagram, when you reach Milestone A (the box on the left), you can perform Activity 1 (the box in the middle), which you estimated will take two weeks to complete. Upon completing Activity 1, you reach Milestone B (the box on the right). The arrows indicate the direction of workflow.
21 May Early Brain Development
Many years of research has demonstrated that high academic, high quality early childhood education produces short and long term effects on cognitive and social development of children. All children should have access to a high quality education program.
Brain Development in Ages 0-5
Children are born with the maximum amount of brain capacity (number of neurons) and must develop those neurons by the end of age five when 90 percent of the brain has completed development. During these years, the brain creates a “wiring path” and connects to the areas in the brain responsible for life, language, thinking, hearing, etc. The strength of the wiring path is dependent on repetition and exposure to new experiences in education.
Think of the first year of life: Infants are able to recognize their mother’s voice, learn to cry and receive food, sit up, roll over, ultimately learning how to walk. During our teacher training we stress the importance of how incredible the first year of life is.
Year two includes the most dramatic changes in the brain. This is a time of “Vocabulary Explosion.” Some say children’s vocabulary will quadruple during this year. As well as vocabulary and language development, this age group is also learning about themselves and developing “self-awareness.” They start using the “I” reference and learning their name. It is pretty fascinating if you stop and think about all of the things children can learn in such a short amount of time.
Because the brain develops so rapidly and is so dependent on experiences, this is a crucial window of opportunity to provide positive educational experiences that will result in wiring that leads to lifelong learning, achievement, and success later in life!
Menapian Glacial Stage
Menapian Glacial Stage, division of Pleistocene time and deposits in northern Europe (the Pleistocene Epoch began about 2,600,000 years ago and ended about 11,700 years ago). The Menapian Glacial Stage followed the Waal Interglacial Stage and preceded the Cromerian Interglacial Stage, both periods of relatively moderate climatic conditions. Menapian sediments contain the remains of fossil animals and plants (preserved as spores and pollen) adapted to cold climes. The Menapian is correlated with the Baventian Stage of Great Britain, a unit represented by deposits formed in a marine environment.
Genes are chemical instructions that tell a cell what job to perform. A gene is like a recipe for making a particular protein. Proteins do important jobs in every cell of the body.
Genes are written in code, with four chemicals (represented by the letters A, T, C, and G) that spell out the instructions to make a protein. People generally have two copies of most genes, one copy from each parent.
Invasive species are plants and animals not native to B.C. or are outside their natural distribution area. They can spread rapidly, outcompete and predate on native species, dominate natural and managed areas, and alter biological communities. Invasive species can negatively impact B.C.'s environment, people and economy.
Free from their natural enemies and other constraints that keep them in check in their native ranges, invasive species are recognized globally as the second greatest threat to biodiversity after direct habitat loss due to humans.
- Read the indicator reports on the status of invasive species in B.C.
- Learn about species and ecosystems at risk
By monitoring and taking action we can eliminate new invasive species introductions while populations are still localized and relatively small.
Learn more about invasive:
- Amphibians & Reptiles
- Insects & Spiders
- Invertebrates Other than Insects & Spiders
Invasive Species & Industry in B.C.
Highway maintenance and construction contractors must manage invasive species at the roadside.
Invasive species affect agriculture by competing for available space and food, or by directly attacking native species, crops or landscape plants.
Invasive forest pests include insects and diseases that threaten the health of forest ecosystems.
Invasive Species in B.C. Parks
BC Parks staff, volunteers and contractors can get best management practice advice for various activities in parks and protected areas.
In English this term, we have been reading ‘The London Eye Mystery’ by Siobhan Dowd. We have loved getting to know this book and the author. During our time with ‘The London Eye Mystery’ we have summarised each chapter, created character descriptions using imagery and figurative language and learned about the features of mystery and suspense stories… (yes, ellipsis is one of them)! After all this learning we put it all together and planned, wrote, edited and published our own mystery stories – they were fab! We also researched information about the London Eye and created an information leaflet. Have a look at some of the work we produced. We also studied Siobhan Dowd in detail and learned about her life and how she got into writing children’s books. She is fascinating! We collated all of our research and put it into a biography all about her.
During the Spring Term, Year 5 children have been learning about calculating with the four operations. Children assess whether a calculation can be performed in their heads, with jottings or with formal written methods. For example, children have learned how to use either chunking or the short division method for calculations of four-digit numbers divided by one- or two-digit divisors. Using diagrams, models and numbers, children have learned how to convert between mixed numbers and improper fractions. Children have learned about finding solutions to missing-value additions and subtractions of fractions.
As well as this, children have learned how to distinguish between regular and irregular polygons based upon reasoning about their sides and angles and how to calculate the area of composite rectilinear shapes. Our children have worked really hard at problem solving, finding a starting point to a challenging problem and to work systematically.
In Science this half term, we have looked at properties and changes of materials. We learned a variety of key scientific words which we were able to explain with confidence by using examples! We looked at the properties of different materials, such as whether they were magnetic, transparent or permeable.
We can explain that conductors, such as metals, allow electricity to flow through them, and that insulators, such as wood, plastic and rubber, do not.
Furthermore, we carried out an investigation to identify the most suitable material needed to be included inside a packed lunch to keep the food at a cool temperature.
For the rest of the term, we studied and researched a range of important scientists and inventors such as David Attenborough and highlighted his influence and theories linked to evolution. We also studied Leonardo Da Vinci and his “Vitruvian Man” model. We tested his theories on body proportions and were amazed to see that he was somewhat accurate!
Take a look at some of his estimations; have a go and see if it works for you!
We have been learning about Crime and Punishment from the Roman era up to present-day Britain. We have compared the judicial systems throughout these important periods of history. We have learned about shocking punishments, which included cutting off limbs for stealing, hanging for being homeless in Tudor times, and drowning for being a witch in the Anglo-Saxon era. We have also engaged in role play where we took part in the “pointless” activities that criminals had to carry out in prison. These included separating individual pieces of string from a rope and rotating a handle 10,000 times before being rewarded with dinner, which was a bowl of gruel. Was it all worth it? We think not. We also held a trial to decide which punishment system would be the most effective: Anglo-Saxon or Modern British? Although we agreed the Anglo-Saxon punishment of chopping off a child’s hand for stealing a sheep would probably put him off stealing another sheep for life, we decided that the Modern British punishment of receiving a warning was more appropriate!
During Art this term Year 5 have been learning about the artist LS Lowry. Through this topic we have researched and learned about his life and the works he created. We then took inspiration from his work, and from how he used perspective in his pieces, to create our own. LS Lowry was known for drawing buildings and people, and particularly for the perspective he created when drawing buildings – what they looked like in relation to each other and how he created a 3D effect. We tried to achieve this in our own work by drawing a beautiful building as well… we drew Conway Primary School.
We then followed this topic on and completed observational drawings of buildings in London. Miss Barham was super proud of the skills we showed, have a look for yourselves.
In the first half of the Spring term Year 5 looked at Christianity, thinking about key questions such as: What values do Christians believe in? What did Jesus teach? And how do Christians believe Jesus taught them these values? This helped us understand what Christians believe and how this helps them live their daily lives.
We learnt about the Christian church and its importance to individuals and to the community, and explained how the teachings of Jesus influence Christians today.
In Spring 2 we looked at Islam, thinking about key questions such as: What is a pilgrimage? Why do Muslims go on Hajj? Which stories are associated with the places on Hajj? How does the Hajj make Muslims feel they are all part of one family?
We were able to explain the key features in a Mosque, and their uses. We then concentrated on Hajj, a pilgrimage that every Muslim must try to go on. We looked at the history around it, the steps needed to take to go to Hajj and the celebrations and rituals surrounding it.
Children looked at a simple maze game and thought about how the game works on a computer. Children decomposed the game into smaller parts so that they could work out an algorithm for how the computer sequences instructions, uses selection when there is a choice, and uses a condition for repeated procedures. When the code didn’t work, children analysed their code using a highlighting stepper. Then, once the bug was identified, children separated their code so that they could test that one part.
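To picture what that looks like in code, here is a tiny illustrative sketch (not the actual classroom software) showing the same three ideas – a sequence of instructions, selection when there is a choice, and repetition controlled by a condition – for stepping through a one-row maze:

    # A one-row "maze": keep moving right until the goal square is reached.
    maze = ["start", "open", "wall", "open", "goal"]

    position = 0
    while maze[position] != "goal":          # repetition with a condition
        if maze[position + 1] == "wall":     # selection when there is a choice
            print("Square", position + 1, "is a wall - jumping over it")
            position += 2
        else:
            position += 1                    # sequence: one step at a time
    print("Reached the goal at square", position)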
During the spring term Year 5 developed their coordination with a ball. They learnt how to maintain control of a ball using both their left and right hands. They also worked on trying to manipulate the ball in different directions. They took all these new skills, put them into game practice and played basketball.
Next term (Summer)
Next term we will be focussing on writing adventure stories after reading ‘Way Home’ by Libby Hathorn. The book is about a boy who discovers a lonely cat and decides to keep him. They have a mini adventure through a dark city on their way home. Except the boy’s home is under a bridge. We will also research homelessness and write a persuasive letter to Shane (the boy from our story) to convince him to seek some help.
In Maths, we will be consolidating our skills in place value and written calculations (including those involving fractions), investigating measures and converting units of measure, and discovering more about the area and volume of shapes.
In English, our focus this term has been ‘The Highwayman’, ‘Robin Hood’ and ‘Rose Blanche’. ‘The Highwayman’ and ‘Robin Hood’ linked to our theme of legends in writing. We read these stories, got to know the characters and understood what made them legends. We learned different skills such as writing descriptions using fronted adverbials, expanded noun phrases and adventurous vocabulary, and writing speech to help picture the character, their mood and personality. We used all of this learning to write our own stories! You could say we became legends!
We also studied ‘Rose Blanche’ by Roberto Innocenti. We loved reading this story as it was about a little girl who lived in Germany during the time of the Second World War. It told a story of her experiences of life during the war, which ended very sadly. The author used lots of imagery and figurative language, which we have also learned about. We have learned about personification, similes and metaphors… ask us to show you one! We wrote an adaptation of this story and changed the ending using all the skills we have learned so far this term. We are also working really hard to be able to use spelling strategies to help us read and spell and to join our writing consistently.
During the Autumn term, children have been building upon their year 4 knowledge. In year 5 children have been learning their times tables, including division facts up to 12 x 12. Children work with both larger and smaller numbers in Year 5 and so have been identifying the value of digits in numbers with 3 decimal places and numbers up to one million. Addition and subtraction written methods have been taught and children have applied them to solve practical problems. In geometry, children have learned about calculating the perimeter of different kinds of shapes, both regular and irregular. However, children found using a protractor to measure and construct angles particularly challenging!
In Science this term, we have been learning all about Space.
We can now identify all eight planets by their positions in relation to the Sun and their physical properties. An interesting fact we learned was that Mercury's surface contains wrinkles, which are called lobate scarps. This suggests that Mercury's crust could possibly have contracted.
Additionally, we engaged in discussions arguing which theory of planetary movement is correct. In the end, we all agreed that the heliocentric theory could be proven with more evidence in comparison to the geocentric theory.
As well as this, we took part in a very exciting investigation to help us understand how day and night occur. We can now explain this using examples from around the world!
We have really enjoyed our Space learning this term and are looking forward to learning about forces next half term!
In topic this term, year 5 have been studying World War 2. As part of this, the children have located countries around the world that were involved in the war on a world map. We have also looked at evacuation, rationing, the Blitz and life during the war. For the remainder of the term, we will be studying the Battle of Britain and key events that occurred during the war.
During Art this term Year 5 have created artwork based on ‘The Blitz’. This has linked with our topic ‘The Second World War’. To achieve this we have explored a range of materials and how to manipulate them. We used masking tape to create lines and shapes on card, then used chalk and pastels to colour over the whole page. Once we removed the tape it left us with an interesting pattern. We used this to create the floodlights shining up from the ground during the Blitz. We also consolidated our learning of how to make a silhouette; some of us needed some support with this as it is really tricky!
This Autumn term, Year 5 children have been learning about Computer Aided Design. We considered how computers have made the design process much more efficient from the time in the twentieth century when designers used technical drawing skills and school children attended technical drawing lessons.
Children selected a web-based CAD app called Tinkercad, with the design brief to make a virtual clubhouse, with a door, windows, a piece of art on the wall and a bench to sit on. Children started with only basic shapes, such as cuboids or prisms and had to create the right sized holes to make new shapes.
This term in PE, Year 5 have been learning the importance of teamwork. They completed a number of teambuilding games with a range of focuses including communication, leadership, co-operation and turn taking. One of the games they played was move the hoop. In this game children had to get into small groups and hold hands, only once a hoop was placed over someone’s arm. The aim of the game was to then move the hoop around the circle of children without breaking the circle. For this they needed to co-operate and communicate to ensure they were successful. The next week they played this exact same game except they had to choose a leader. This child then had to give specific instructions to the rest of the group to follow. This is where they really saw the importance of communication.
This term we have looked at ‘Faiths in Greenwich’ in Autumn 1 and ‘Sikhism’ in Autumn 2.
For ‘Faiths in Greenwich’ we looked at all the different faiths and religions that are in our borough, and the similarities and differences between them. We have learned about their beliefs and how they relate to the local community. We researched how the local community, by working together, can make a big difference. We created a Fairtrade poster and decided that by helping others we also help ourselves.
In Autumn 2 we have started to look at Sikhism. We have researched and noted down the contributions that the 10 Gurus have made. We will be looking in more depth at the symbols and mantras in Sikhism and how they are important. We will also be focussing on their holy place the Gurudwara discussing what they have to wear, what rules they have to follow, and researching what they look like inside and out. We will then focus on their holy book the Guru Granth Sahib and discuss its importance.
This term we had the opportunity to visit ‘Chislehurst Caves’. This was a super exciting trip and lots of children came back to school saying “it’s the best trip we’ve ever been on”! They got to experience a real-life air raid shelter. The guides took us down, class by class, to explore parts of the cave (we couldn’t explore the whole thing as it’s 22 miles long)! We learned about the people who used the cave when there was danger during World War One and Two. Some people even lived in the caves as they sadly lost their homes. The caves were like a village with everything you could want and more! They had bunks, toilets, shops, cafes, a hospital, a stage, a barber, a post office and even a citizen’s advice bureau! The children enjoyed walking around and experiencing cave life. The adults were given lanterns to light the way, although this did not help greatly as it was incredibly dark. They experienced what it would have been like when bombs were dropping. All the lights went out, we were left in complete darkness (we couldn’t even see our hands in front of our faces) and the guides created a sound that echoed throughout the area we were in. You can imagine the screams from the girls… and boys!
Next term (Spring)
Next term we will be focussing on writing mysteries and suspense novels in English. We will link this to our book ‘The London Eye Mystery’. We will also be writing reports and instructions as our non-fiction learning. In addition to these text types we will be looking into poetry during the Spring term and focusing on rap.
We will consolidate our learning of place value, the number system, and multiplication and division. We will be continuing our learning of the chunking method for division – ask us to practise at home. We will learn more about measurements, including length, mass and weight, and link this to multiplication and division. We will explore money in greater depth, as well as fractions, time, and position and direction.
Our topic for Spring 1 is ‘Crime and Punishment’ and for Spring 2 it will be ‘Leisure and Entertainment in the 20th Century’. In Science we will be learning all about ‘Properties and Changes of Materials’.
If you would like to have a look at the national curriculum website please click here. |
Five years ago I bought this little book to teach the expression:
How long does it take to do something?
The book is definitely focused on children's everyday life and encourages them to consider the amount of time they need to do things like zipping up a jacket, going to school by bike, filling a bucket with sand, taking their shoes off, washing the dog etc.
My students find it quite entertaining, so I wanted to keep thinking about time. I came up with these actions that anybody can time inside any classroom:
- How long does it take to jump 20 times?
- How long does it take to say the English alphabet?
- How long does it take to say "I can speak English" 10 times?
- How long does it take to pile up all your books?
- How long does it take to take everything out of your schoolbag and then put it back in?
I'm sure you can think about many other enjoyable things to do and time.
Let the children write down the questions and the answers; it'll help consolidate them in their minds.
Finally encourage them to think of their own. |
Common Core State Standards - Grade 5 Mathematics
These worksheets contain math problems aligned with the 5th grade Common Core State Standards (CCSS) listed below.
[5.MD Measurement and Data ]
5. Relate volume to the operations of multiplication and addition and solve real world and mathematical problems involving volume.
a. Find the volume of a right rectangular prism with whole-number side lengths by packing it with unit cubes, and show that the volume is the same as would be found by multiplying the edge lengths, equivalently by multiplying the height by the area of the base. Represent threefold whole-number products as volumes, e.g., to represent the associative property of multiplication.
b. Apply the formulas V = l × w × h and V = b × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real world and mathematical problems.
c. Recognize volume as additive. Find volumes of solid figures composed of two non-overlapping right rectangular prisms by adding the volumes of the non-overlapping parts, applying this technique to solve real world problems.
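As an informal illustration of standards 5b and 5c above (not part of the standards text itself), the short Python snippet below applies V = l × w × h and then adds the volumes of two non-overlapping right rectangular prisms:

    def prism_volume(length, width, height):
        """Volume of a right rectangular prism: V = l * w * h."""
        return length * width * height

    # Volume is additive: a composite figure made of two non-overlapping prisms.
    box_a = prism_volume(4, 3, 2)   # 24 cubic units
    box_b = prism_volume(5, 3, 2)   # 30 cubic units
    print(box_a + box_b)            # 54 cubic units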
Find the full list of standards here: http://www.corestandards.org/the-standards/mathematics/grade-5/introduction/
Use them for:
- standardized test prep
- assessment of common core standards
- homework assignments
- review sheets
Teacher answer key is included.
If you're buying these worksheets for a class, consider purchasing the full 5th Grade Common Core Mathematics Bundle: You'll save over 33% compared to buying the 42 packets individually! |
Duck-billed platypus, Boondaburra, Mallangong, Tambreet, Tohunbuck
The platypus is a most unusual animal. They have thick fur that keeps them warm underwater. Most of their fur is dark brown, with a lighter patch near their eyes, and a lighter color on the underside. On their front feet is extra skin that serves as a paddle when they swim. They walk clumsily on their knuckles in order to protect this webbed skin. Their bill is smooth, flexible and rubbery, and feels like suede. The male has a venomous spike on its back foot which carries enough poison to cause severe pain in a human.
Platypuses are found on the eastern and southeastern coasts of Australia as well as Tasmania, Flinders and King Islands. There is also a small introduced population on Kangaroo Island. Platypuses are restricted to streams and suitable freshwater bodies, including some shallow water storage lakes and ponds.
Platypuses are solitary, particularly males. If their territories overlap, they will feed at different times to avoid each other. They are nocturnal and sleep during the day. They spend much time in water and are rarely seen moving on land. They waddle onto the banks of the river to dig burrows, which are tunnels with rooms. They also live under roots, debris or rock ledges. They spend a lot of time hunting for food, up to 10 to 12 hours a day, and remain in their burrows when not hunting.
The platypus is carnivorous, feeding on annelid worms, freshwater shrimp, insect larvae, and freshwater yabbies dug out of the riverbed with its snout or caught while swimming. It carries prey to the surface in its cheek-pouches. It needs to eat as much as 20% of its own weight each day, so it must spend about 12 hours a day hunting for food.
Platypuses are polygynandrous, and males and females both have several partners. Females can first mate at the age of 2, but some don't until they are 5. The breeding season is between the Australian winter months of June and October. When a female is ready to lay her eggs, she burrows into the ground and seals herself off in one of the rooms. She lays 1 or 2 eggs and keeps them warm between her rump and tail. The eggs hatch after about 10 days. The little bean-sized young remain nursing for 4 to 5 months. They stay in their burrow until they reach about 80 percent of their adult weight, at around 6 months.
The largest threat is loss of habitat due to land clearance and water pollution. Predators are snakes, goannas, water rats, and foxes.
The IUCN lists the platypus as "Least Concern" with a decreasing population trend, but there is actually very little data about population numbers.
The platypus, being a carnivore, controls the populations of the species it eats. |
A MAC (media access control) address is a unique numeric code that is permanently assigned to each unit of most types of networking hardware, such as network interface cards (NICs), by the manufacturer at the factory.
An NIC, also referred to as a network adapter, is a circuit board that is plugged into a slot on a motherboard (the main circuit board on a computer) to enable a computer to physically connect to a network cable and thereby communicate over a network (i.e., to one or more other computers). Some computers use network interface adapter circuitry that is built directly into the motherboard instead of a separate card.
The purpose of MAC addresses is to provide a unique hardware address or physical address for every node on a local area network (LAN) or other network. A node is a point at which a computer or other device (e.g., a printer or router) is connected to the network.
The code is most commonly a 48-bit hexadecimal (i.e., base 16) number, which consists of 12 characters. They are arranged in six pairs, each separated by a colon. A typical MAC will look something like 00:10:B5:C4:99:6A. The first 24 bits (three bytes) identify the manufacturer, and the remaining bits uniquely identify the type of device and provide a specific serial number for the unit.
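To make that layout concrete, here is a short illustrative Python sketch (my own example, not from the original article) that splits a MAC address into its manufacturer portion and its device-specific portion, using the sample address quoted above:

    def split_mac(mac):
        """Split a colon-separated MAC address into its manufacturer (OUI)
        half and its device-specific half."""
        octets = mac.split(":")
        if len(octets) != 6 or any(len(octet) != 2 for octet in octets):
            raise ValueError("expected six two-character hex pairs")
        oui = ":".join(octets[:3])      # first 24 bits identify the manufacturer
        device = ":".join(octets[3:])   # remaining bits identify the individual unit
        return oui, device

    print(split_mac("00:10:B5:C4:99:6A"))   # ('00:10:B5', 'C4:99:6A')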
When a computer is connected to a network, a correspondence table relates the computer's IP address to its physical address on the network. The MAC addresses of the sending and receiving computers are contained in the header of each packet, thus allowing packets to arrive at their intended destination.
An IP address is an identifier for a computer or other device on most networks, including the Internet. Every message sent over such networks is divided into packets prior to transmission and then reassembled into the original message at the destination. The header also contains the IP addresses of the sender and receiver, along with other information needed to move the packets from the source to the destination and reassemble them.
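As a toy illustration of that correspondence table (not how an operating system actually stores it), one can picture a simple mapping from IP addresses to MAC addresses; the addresses below are made up:

    # A toy ARP-style table: IP address -> MAC (physical) address.
    arp_table = {
        "192.168.1.10": "00:10:B5:C4:99:6A",
        "192.168.1.1": "3C:52:82:11:22:33",
    }

    def physical_address(ip):
        """Look up the MAC address frames should be delivered to on the local network."""
        return arp_table.get(ip, "unknown - would trigger a lookup request")

    print(physical_address("192.168.1.10"))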
Although MAC addresses are generally described as being permanent, it is possible for users to change them.
Created September 15, 2005. |
A Mini Locomotive
A train has a locomotive that pulls the train with its many passenger coaches. If the locomotive breaks down, there is no way to pull the train. Therefore, the office of railroads decided to distribute three mini locomotives to each station. A mini locomotive can pull only a few passenger coaches. If a locomotive breaks down, three mini locomotives cannot pull all passenger coaches. So, the office of railroads made a decision as follows:
1. Set the number of maximum passenger coaches a mini locomotive can pull, and a mini locomotive will not pull over the number. The number is same for all three locomotives.
2. With three mini locomotives, let them transport the maximum number of passengers to destination. The office already knew the number of passengers in each passenger coach, and no passengers are allowed to move between coaches.
3. Each mini locomotive pulls consecutive passenger coaches. Right after the locomotive, passenger coaches have numbers starting from 1.
For example, assume there are 7 passenger coaches, and one mini locomotive can pull a maximum of 2 passenger coaches. The number of passengers in the passenger coaches, in order from 1 to 7, is 35, 40, 50, 10, 30, 45, and 60.
If three mini locomotives pull passenger coaches 1-2, 3-4, and 6-7, they can transport 240 passengers. In this example, three mini locomotives cannot transport more than 240 passengers.
Given the number of passenger coaches, the number of passengers in each passenger coach, and the maximum number of passenger coaches which can be pulled by a mini locomotive, write a program to find the maximum number of passengers which can be transported by the three mini locomotives.
The first line of the input contains a single integer t (1 <= t <= 11), the number of test cases, followed by the input data for each test case. The input for each test case will be as follows:
The first line of the input file contains the number of passenger coaches, which will not exceed 50,000. The second line contains a list of space separated integers giving the number of passengers in each coach, such that the ith number in this line is the number of passengers in coach i. No coach holds more than 100 passengers. The third line contains the maximum number of passenger coaches which can be pulled by a single mini locomotive. This number will not exceed 1/3 of the number of passenger coaches.
There should be one line per test case, containing the maximum number of passengers which can be transported by the three mini locomotives.
Sample input:
1
7
35 40 50 10 30 45 60
2
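This is a classic dynamic-programming exercise. The sketch below is my own illustration rather than the judge's reference solution, and it omits the input parsing described above; dp[j][i] holds the most passengers that j mini locomotives can carry using only the first i coaches. On the sample data it prints 240, matching the example in the statement.

    def max_passengers(passengers, max_pull):
        """Best total for three mini locomotives, each pulling at most
        max_pull consecutive coaches, with no coach pulled twice."""
        n = len(passengers)
        prefix = [0] * (n + 1)
        for i, p in enumerate(passengers, 1):
            prefix[i] = prefix[i - 1] + p           # passengers in coaches 1..i

        # dp[j][i]: max passengers using j locomotives among the first i coaches
        dp = [[0] * (n + 1) for _ in range(4)]
        for j in range(1, 4):
            for i in range(1, n + 1):
                start = max(0, i - max_pull)        # a segment ending at coach i
                take = dp[j - 1][start] + prefix[i] - prefix[start]
                dp[j][i] = max(dp[j][i - 1], take)  # skip coach i, or end a segment there
        return dp[3][n]

    print(max_passengers([35, 40, 50, 10, 30, 45, 60], 2))   # 240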
|
Cheetah Conservation Botswana
To conserve cheetahs in Botswana by means of research, community outreach and environmental education
The Cheetah, Acinonyx jubatus, is Africa's most endangered big cat. Cheetah populations are dramatically declining. The species is now threatened with extinction due to loss of habitat and prey, a diminishing gene pool and human persecution.
Botswana contains one of the largest remaining populations of free-ranging cheetahs in the world. In 1998 the population was estimated at 1,768 individuals (Funston et al 2000), which represents 12% of the world population and identifies Botswana as one of the last strongholds of the species. However, populations are not safe within protected areas, as cheetahs there are outcompeted by stronger predators. The cheetahs then move out onto marginal land where they come into conflict with rural communities. Neither protected reserves nor captive management can be relied upon to support viable populations of the species (Marker et al 1996). Long-term survival is dependent on conservation management of agricultural zones (Winterbach 2001, National Predator Strategy).
Despite the cheetah being listed as a species threatened with extinction under Appendix I of the Convention on International Trade in Endangered Species (CITES) and as vulnerable by the IUCN, no formalised studies have ever been done and little is known about the status of the cheetah in Botswana. CCB was set up in 2003 to address the need for a conservation program focused on this endangered cat. The project works to ensure the survival of the cheetah through scientific research, rural community education and participation, and promotion of alternative, non-lethal predator control methods and appropriate livestock management.
Cheetah Conservation Botswana (CCB) is a long-term multidisciplinary project incorporating practical conservation, scientific research, community participation and education. We are carrying out a nationwide survey to assess the status of the cheetah (Acinonyx jubatus), focusing on their role in livestock/predator conflicts. We identify priority areas, on which we focus education and information programmes covering predator ecology, non-lethal methods of predator control and appropriate livestock management, encouraging rural communities to view their wildlife as a valuable national resource to be protected. The cheetah acts as a flagship species for the biodiversity of these areas.
CCB are carrying out research into cheetah behaviour, home ranges, density, disease and genetics at their field camp in the southern Kalahari. Botswana's cheetahs have never been studied formally before, so this is the first study of its kind. Cheetahs are collared for telemetry studies. Camera traps are utilized and tracking transects made to assess density. Samples are taken from captured cheetahs to gather information on disease and genetics status.
CCB conduct an educational outreach programme to inform rural communities about the importance of predators. They make visits to affected communities to assess their problems and offer solutions, providing them with information on appropriate farm management and non-lethal methods of predator control. CCB encourage active participation from the communities, provide educational booklets to schools so that pupils can learn about these issues, and conduct awareness-raising talks and predator workshops throughout Botswana.
CCB network with other cheetah groups internationally to provide the Botswana perspective to the global understanding of these delicate predators and are part of the Global Cheetah Forum under the IUCN.
WAZA Conservation Project 05023 is supported by the Banham Zoo, Suffolk Wildlife Park (UK), Columbus Zoo, Cincinnati Zoo, Toledo Zoo , Tulsa Zoo (US), New South Wales Zoological Association, as well as by Wildlife Conservation Network, the Howard Buffet Foundation, IdeaWild; WILD Foundation, Rufford Small Grants, Project Survival, and Flora and Fauna International.
|
CATCH a fallen star. The Cassini spacecraft orbiting Saturn has picked up three dozen specks of interstellar stardust, a find that will help astronomers understand how bits of exploded stars are reborn in new star systems.
Though mostly cold and empty, interstellar space contains wisps of gas and fine particles of dust that were released in the fiery deaths of giant stars. Similar in size and composition to grains of sand, the dust particles are a record of the heavy elements that determine the chemical make-up of new stars and galaxies.
“They are of fundamental importance to understanding what primordial ‘bricks’ we come from,” says Nicolas Altobelli of the European Space Agency in Madrid, Spain. He and colleagues have used the dust-analysing instrument on Cassini to find a record 36 particles, mostly made of magnesium, calcium, iron, silicon and oxygen.
The particles were heavily weathered, which suggests they underwent major changes while in interstellar space, Altobelli says. Current wisdom says that pristine grains are battered by the process of star and planet-forming, but the new results suggest part of this process happens earlier, he says (Science, doi.org/bd8g).
This article appeared in print under the headline “Saturn probe hoovers up stardust” |
Apraxia: Symptoms, Causes, Tests, Treatments
What Are the Symptoms of Apraxia of Speech?
There are a variety of speech-related symptoms that can be associated with apraxia, including:
- Difficulty stringing syllables together in the appropriate order to make words, or inability to do so
- Minimal babbling during infancy
- Difficulty saying long or complex words
- Repeated attempts at pronunciation of words
- Speech inconsistencies, such as being able to say a sound or word properly at certain times but not others
- Incorrect inflections or stresses on certain sounds or words
- Excessive use of nonverbal forms of communication
- Distorting of vowel sounds
- Omitting consonants at the beginnings and ends of words
- Seeming to grope or struggle to make words
Childhood apraxia of speech rarely occurs alone. It is often accompanied by other language or cognitive deficits, which may cause:
- Limited vocabulary
- Grammatical problems
- Problems with coordination and fine motor skills
- Difficulties chewing and swallowing
What Causes Apraxia of Speech?
Acquired apraxia results from brain damage to those areas of the brain that control the ability to speak. Conditions that may produce acquired apraxia include head trauma, stroke, or a brain tumor.
Experts do not yet understand what causes childhood apraxia of speech. Some scientists believe that it results from signaling problems between the brain and the muscles used for speaking.
Ongoing research is focusing on whether brain abnormalities that cause apraxia of speech can be identified. Other research is looking for genetic causes of apraxia. Some studies are trying to determine exactly which parts of the brain are linked to the condition.
Are There Tests to Diagnose Apraxia of Speech?
There is not a single test or procedure that is used to diagnose apraxia of speech. Diagnosis is complicated by the fact that speech-language pathologists have different opinions about which symptoms indicate developmental apraxia.
Most experts, though, look for the presence of multiple, common apraxia symptoms. They may assess a patient's ability to repeat a word multiple times. Or they may assess whether a person can recite a list of words that are increasingly difficult, such as "play, playful, playfully."
A speech-language pathologist may interact with a child to assess which sounds, syllables, and words the child is able to make and understand. The pathologist will also examine the child's mouth, tongue, and face for any structural problems that might be causing apraxia symptoms. |
Vireo belli pusillus
- Federal Endangered Species, State Endangered Species
The least Bell’s vireo is a small, secretive songbird (4-5 inches long) with short, round wings, a straight bill, and feathers that are gray above and pale below. Vireos were historically widespread in California’s riparian woodlands and low-elevation riverine valleys, but over 95% of riparian habitat has been lost, accounting for 60 to 80% of the population loss. Breeding populations are now restricted to San Diego, Riverside, and Santa Barbara counties – including the Los Padres National Forest.
The least Bell’s vireo inhabits low-elevation, riparian habitats with a dense shrub understory that is near water. The ideal habitat contains both canopy and shrub layers. They prefer to nest in willows but will also use shrubs, trees, and vines. Most least Bell’s vireos are found below 2,000 feet elevation.
Males begin to establish territories by late March and egg laying usually starts in April. Clutch size (the number of eggs laid) ranges from three to five, and most Californian pairs produce one to two broods per season. The birds are neotropical migrants who leave their breeding range in California to winter in Baja California.
Least Bell’s Vireo in the Los Padres National Forest
While the least Bell’s vireo occurs on all four Southern California national forests, the largest population is found in the Los Padres National Forest, where Mono and Indian creeks meet the Santa Ynez River near Gibraltar Reservoir. This is the only formally-designated critical habitat on National Forest System lands, so it receives special protection under the Endangered Species Act. In 1980, there were 55 breeding pairs, which declined to less than 30 by 1994; in the mid-2000s, the Forest Service reported “less than 12 pairs,” and a recent survey in 2013 did not detect any pairs, suggesting that this population might now be extirpated. Additional surveys are needed to determine whether least Bell’s vireos still breed in this area.
The least Bell’s vireo is also found in the Santa Clara River watershed in Ventura County. Suitable nesting habitat occurs in Piru Creek and Sespe Creek. There may be suitable habitat in the Santa Maria/Cuyama River watershed as well.
The least Bell’s vireo’s decline is mainly due to loss and degradation of breeding habitat as well as brood parasitism by brown-headed cowbirds. The shrub cover that the birds require is threatened by roads, overgrazing, concentrated recreation use, fire, and invasive species, according to the U.S. Forest Service. Arundo and tamarisk invasions displace the bird’s native habitat.
The least Bell’s vireo was formally classified as an endangered species in 1986, and its critical habitat was formally designated in 1994. The U.S. Fish & Wildlife Service prepared a draft recovery plan for the vireo in 1998, but that plan was never finalized. Since the bird was listed as endangered, the U.S. population has increased ten-fold from 291 to 2,968 known territories. However, the population in the Los Padres National Forest along the Santa Ynez River has declined 54% since listing.
In the Los Padres National Forest, the easternmost 2.6 miles of Camuesa Road was permanently closed to vehicles and seasonal closures were instigated on Mono Campground in hopes that these efforts will help conserve the least Bell’s vireo.
ForestWatch is working to ensure that riparian areas used by the least Bell’s vireo are protected from development and other activities, hoping to reduce the downward trend of this important northern-most breeding population. |
So I understand that a matrix A = sym(A) + skew(A) = 1/2(A + Atranspose) + 1/2(A - Atranspose)
I can't seem to understand why multiplying any matrix A by a symmetric or skew matrix B is equal to B.sym(A) or B.skew(A), respectively. In other words:
B.A = B.sym(A) when B is symmetric and
B.A = B.skew(A) when B is a skew matrix
any help is appreciated. (B.A is the inner product of B and A in case there is any confusion)
Actually, I wasn't confused until you said "B.A is the dot product of B and A"! B and A are matrices- what do you mean by the "dot product" of two matrices?
Originally Posted by khughes
If you simply mean the matrix product, then what you have written is NOT true.
BA is NOT equal to B(symm(A)) when B is a symmetric matrix.
edited, sorry for the confusion.
I don't see anything new!
And, if A.B means simply matrix multiplication, again, the statement is NOT true.
For example, suppose and B is the symmetric matrix
Not at all the same thing!
Gah! I'm sorry, I reread my book and A.B is the scalar product, so A.B = tr(Atranspose*B).
That made life easier – I managed to figure it out.
B.sym(A) = tr(Btranspose(0.5(A + Atranspose))) = 0.5[tr(BtransposeA) + tr(BtransposeAtranspose)]. The second trace is tr((AB)transpose) = tr(AB) = tr(BA), and since B is symmetric tr(BA) = tr(BtransposeA), so B.sym(A) = 0.5(2tr(BtransposeA)) = tr(BtransposeA) = B.A.
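For anyone who wants a quick numerical sanity check of both identities, here is a small NumPy sketch (my own, not from the book), using the scalar product X.Y = tr(Xtranspose*Y):

    import numpy as np

    def inner(X, Y):
        """The scalar product X.Y = tr(X^T Y)."""
        return np.trace(X.T @ Y)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    S = rng.standard_normal((4, 4))

    B_sym = S + S.T       # symmetric: B^T = B
    B_skew = S - S.T      # skew-symmetric: B^T = -B

    print(np.isclose(inner(B_sym, A), inner(B_sym, 0.5 * (A + A.T))))    # True
    print(np.isclose(inner(B_skew, A), inner(B_skew, 0.5 * (A - A.T))))  # True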
The skew case follows the same argument, with Btranspose = -B and skew(A) in place of sym(A). |
Researchers at CSU have teamed up with NASA to test water-saving technology on California crops
By Vinnee Tong
Near the Central Valley town of Los Banos, Anthony Pereira opens a tap to send water into the fields at his family’s farm. Pereira grows cotton, alfalfa and tomatoes. And he is constantly deciding how much water is the right amount to use.
“Water savings is always an issue,” he says. “That’s why we’re going drip here on this ranch. We gotta try to save what we can now for the years to come.”
Thanks to some new technology, that might get a little easier. To help farmers like Pereira, engineers at NASA and CSU Monterey Bay are developing an online tool that can estimate how much water a field might need. Here’s how it works: satellites orbiting the earth take high-resolution pictures — so detailed that you can zoom in to a quarter of an acre.
“The satellite data is allowing us to get a measurement of how the crop is developing,” says CSUMB scientist Forrest Melton, the lead researcher on the project. “We’re actually measuring the fraction of the field that’s covered by green, growing vegetation.”
Those images are combined with data they’re collecting right now at a dozen California farms from Redding to Bakersfield and from Salinas to Visalia.
In Pereira’s fields a tractor carrying tomato seedlings leads the way as farm workers nestle the plants into the dirt. Alongside them the researchers drill holes in the ground to put sensors underneath and around the crops. The sensors measure wind temperature, radiant energy from the sun and how thirsty the soil may be on a given day.
Walking through the field, researcher Chris Lund is carrying equipment that will collect all that data.
“Once a minute it’ll take a measurement of all the sensors that are attached to this,” he explains, “the soil moisture sensors, the soil water potential sensors, and in this case the capillary lysimeter, which measures how much water is going out the bottom of the system.”
Using this information with the satellite images that are updated about once a week, the researchers have come up with a formula that can estimate how much water a field might need. Farmers will soon be able to access estimates for their fields online and eventually they’ll be able to use their cell phones.
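The article doesn't spell out the researchers' formula, but a common approach of this general kind scales a weather-based reference water demand by a crop coefficient derived from the satellite-observed fraction of green cover. The sketch below is purely illustrative, with made-up parameter values, and is not the team's actual model:

    def crop_water_need_mm(reference_et_mm, fractional_cover, kc_max=1.1):
        """Rough daily crop water demand in millimetres.

        reference_et_mm  - reference evapotranspiration from weather data (mm/day)
        fractional_cover - fraction of the field covered by green vegetation (0-1)
        kc_max           - assumed crop coefficient at full cover (illustrative)
        """
        kc = kc_max * max(0.0, min(1.0, fractional_cover))  # simple linear scaling
        return kc * reference_et_mm

    # Example: a hot day (7 mm reference ET) with 60% green cover
    print(round(crop_water_need_mm(7.0, 0.60), 2))   # about 4.62 mm/day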
That means Pereira will no longer have to rely on the old-school way of deciding how much water to use.
“Before, everything was furrow-irrigated or flood-irrigated, and we’d just schedule depending on what the weather is,” he told me. “If it’s warm, we say, ‘OK we’re going to try to irrigate every two weeks.’ If it’s cooler, then let’s try to stretch it out another week, 10 days or so to make the water stretch out more.”
The California Department of Water Resources estimates water savings could amount to hundreds of dollars per acre, and the crop yield could be better, too. The joint research team sees its water-saving tool as something that could be used by any farmer. At the Ames Research Center in Mountain View, NASA’s Rama Nemani studies a map of the world mounted on a wall.
“If you look at the map like this, there are a lot of areas that are like California that are starved for water but need to still produce food,” he says. “So we have to figure out how to use whatever limited water each place has to the best possible extent.”
This online water saving tool could be available at no cost to farmers around the state as soon as next year, and eventually to farmers around the world.
Hear the radio version of this story from KQED’s The California Report. |
Last month, planetary scientists suggested a new explanation for the dramatic asymmetry of our Moon’s two sides: one, “our” side, is flat and low, while the other (the “dark side”) is mountainous terrain. The dichotomy could arise, researchers say, from a collision between our moon and a smaller companion. Because it was a smaller body, the companion moonlet would have cooled more quickly; had it crashed into the moon at a low enough speed, it would have avoided vaporization and simply smeared itself across the impact crater it created. Visually, it would go something like this:
The Two Moons theory would explain the vast differences in both composition and geography, but can it be proven?
Perhaps so. Early this morning, high-level winds delayed the launch of NASA’s GRAIL mission (Gravity Recovery and Interior Laboratory), a twin spacecraft designed to determine the structure of the lunar interior.
The twin spacecraft are now scheduled to begin their mission to the moon on Sept. 9, lifting off from Cape Canaveral Air Force Station’s Launch Complex 17B aboard a United Launch Alliance Delta II heavy rocket. There are once again two instantaneous (one-second) launch windows. Friday’s launch times are 8:33:25 a.m. and 9:12:31 a.m. EDT. The launch period extends through Oct. 19, with liftoff occurring approximately 4 minutes earlier each day.
Friday, NASA will attempt another launch, sending the GRAIL on its four-month journey to the moon by way of a new route that takes the spacecraft pair first on a one-lap tour of the planet, then after separating, toward the Sun. At the point when the Earth’s gravity balances the pair in orbit, they’ll hang out for a couple months before heading to the moon. The timing is important–the spacecraft have to avoid two lunar eclipses, which would block the sunlight needed to power the GRAIL probes.
Once there, the twin probes will utilize the same technology as GRACE, the mission which mapped the Earth’s gravity. Information from a full lunar gravity scan will give clues to the Moon’s composition, and, just maybe, tell us whether or not we did in fact once have two moons.
Live launch coverage will begin tomorrow morning (Friday, Sep 9) at 5:45 a.m. on NASA TV and on the web at www.nasa.gov/ntv and www.nasa.gov/mission_pages/grail/launch/grail_blog.html. |
The shingle urchin or helmet urchin (Colobocentrotus atratus) is a species of sea urchin in the family Echinometridae. In Hawaii it is known as "kaupali" which translates as "cliff-clinging". It is found on wave-swept intertidal shores in the Indo-West Pacific, particularly on the shores of Hawaii.
This urchin is a deep maroon colour and shaped like a domed limpet. It can grow as big as a soft ball but is usually much smaller. The upper surface is a mosaic of tiny polygonal plates formed from modified spines to form a smooth mosaic. This is fringed by a ring of large, flattened modified spines. On the underside there is another ring of smaller flattened spines and a large number of tube feet.
In a test comparing shingle urchins to other species of urchin, they proved exceptionally good at withstanding being washed away by moving water. A combination of their shape, their flattened spines and particularly the strong adhesion of their tube feet made them three times as resistant as other species such as Echinometra. This enables them to live on inhospitable wave-battered shorelines. |
Lesson Title: On Target Challenge
Students will modify a paper cup so it can zip down a line and drop a marble onto a target.
• Apply the engineering design process.
• Modify a cup to carry a marble down a zip line.
• Test their cup by sliding it down the zip line, releasing the marble and trying to hit a target on the floor.
• Improve their system based on testing results.
Lesson Activities and Sequence
Access the On Target
Keywords: engineering challenge, scientific method, rockets, moon, Newton's laws, engineering design process, moon, Mars, acceleration, vector, trajectory, potential energy, kinetic energy, LCROSS
- Introduce the challenge: Tell kids how NASA will use the LCROSS spacecraft to search for water on the moon (scripted in the Leader Notes).
- Brainstorm and design: Students should be working in cooperative groups to develop a group design and using individual journals to record their decisions, design sketches, test results, etc.
- Build, test, evaluate and redesign: Test data, solutions, modifications, etc., should all be recorded in their journals individually.
- Discuss what happened: Ask the students to show each other their modified cups and talk about how they solved any problems that came up.
- Evaluation: Using the students' journal entries, assess their mastery of content, skills and the engineering design process.
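The keyword list above mentions trajectory, potential energy and kinetic energy. As an optional extension that is not part of the published challenge materials, older students could estimate how far before the target to release the marble, treating the cup's motion as roughly horizontal at the moment of release and ignoring air resistance:

    import math

    def release_distance(speed_m_s, drop_height_m, g=9.81):
        """Horizontal distance before the target at which to release the marble."""
        fall_time = math.sqrt(2 * drop_height_m / g)   # time for the marble to fall
        return speed_m_s * fall_time                   # horizontal drift while falling

    # Example: cup moving at 1.5 m/s on a zip line 1.2 m above the floor
    print(round(release_distance(1.5, 1.2), 2), "metres before the target")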
National Science Education Standards, NSTA
Science as Inquiry
• Understanding of scientific concepts.
• Understanding of the nature of science.
• Skills necessary to become independent inquirers about the natural world.
• The dispositions to use the skills, abilities and attitudes associated with science.
• Position and motion of objects.
Common Core State Standards for Mathematics, NCTM
Expressions and Equations
• Apply and extend previous understandings of arithmetic to algebraic expressions.
• Solve real-life and mathematical problems using numerical and algebraic expressions and equations.
• Understand the connections between proportional relationships, lines and linear equations.
ISTE NETS and Performance Indicators for Students, ISTE
Creativity and Innovation
• Apply existing knowledge to generate new ideas, products or processes.
• Create original works as a means of personal or group expression.
• Use models and simulations to explore complex systems and issues.
• Identify trends and forecast possibilities.
Critical Thinking, Problem Solving and Decision Making
• Identify and define authentic problems and significant questions for investigation.
• Plan and manage activities to develop a solution or complete a project.
• Collect and analyze data to identify solutions and/or make informed decisions.
• Use multiple processes and diverse perspectives to explore alternative solutions. |
What Flora and Fauna would we find in the Amazon
The plants, including their leaves, stems, roots, fruit and seeds, become food for the animals and are called producers. Herbivores, or primary consumers, are animals that eat plants. In the Amazon these include the common iguana, the three-toed sloth, toucans and thousands of birds, insects and mammals. Carnivores are animals that eat the eggs or meat of other animals, and those that eat insects are insectivores. These animals are known as secondary consumers. Animals that represent secondary consumers in the Amazon include harpy eagles, jaguars, giant anteaters, birds, snakes, monkeys and insects.
In the tropics the climate allows for rapid and diverse plant growth. One hectare of the Amazon could contain 750 types of trees and 1,500 species of other plants. Types of plants that grow there include trees such as ebony and mahogany, vines, lianas, palms and ferns.
Rafflesia, the world's largest single flower, is found in the rainforests of Southeast Asia. This flower was discovered in the Indonesian rainforest by a guide in 1818. It belongs to a genus of parasitic flowering plants with no visible leaves, roots or stem, because it doesn't have any; the only part of the plant that can be seen outside its host is the five-petalled flower. The rafflesia may be over 100 centimetres in diameter and weigh up to 10 kilograms. Rafflesia flowers only survive for a few days before decomposing. It is also the smelliest flower, producing its odour to attract insects and flies. It is one of the rarest flowers because near-perfect conditions must exist for a rafflesia to bloom, and only one vine in the world is hardy enough to host the rafflesia parasite. |
Tocharian, also spelled Tokharian, is an extinct branch of the Indo-European language family. It is known from manuscripts dating from the 6th to the 8th century AD, which were found in oasis cities on the northern edge of the Tarim Basin (now part of Xinjiang in northwest China). The documents record two closely related languages, called Tocharian A (“East Tocharian”, Agnean or Turfanian) and Tocharian B (“West Tocharian” or Kuchean). The subject matter of the texts suggests that Tocharian A was more archaic and used as a Buddhist liturgical language, while Tocharian B was more actively spoken in the entire area from Turfan in the east to Tumshuq in the west. Tocharian A is found only in the eastern part of the Tocharian-speaking area, and all extant texts are of a religious nature. Tocharian B, however, is found throughout the range and in both religious and secular texts. (Source: Wikipedia)
The corpus module has a class for generating a Swadesh list for Tocharian B.
In : from cltk.corpus.swadesh import Swadesh
In : swadesh = Swadesh('txb')
In : swadesh.words()[:10]
Out: ['ñäś', 'tuwe', 'su', 'wes', 'yes', 'cey', 'se', 'su, samp', 'tane', 'tane, omp'] |
This precursor of modern chess originated in the northern Indian subcontinent during the Gupta empire. 'Chaturanga' translates as 'the four divisions', meaning infantry, cavalry, elephantry, and chariotry. They were the four divisions of an Indian army at the time, and no doubt the game was thought of as a war-game. At that time the game had no female piece: what we now call the queen was called Mantri, meaning a minister or advisor.
From the beginning until the end of the 15th century the pieces we call bishops and queens had limited movement, and it took longer to get the pieces into action. In modern chess, the original pieces evolved into the modern pawn, knight, bishop, and rook, respectively. |
If you're a fan of habañero salsa or like to order Thai food spiced to five stars, you owe a lot to bugs, both the crawling kind and ones you can see only with a microscope. New research shows they are the ones responsible for the heat in chili peppers.
The spiciness is a defense mechanism that some peppers develop to suppress a microbial fungus that invades through punctures made in the outer skin by insects. The fungus, from a large genus called Fusarium, destroys the plant's seeds before they can be eaten by birds and widely distributed.
"For these wild chilies the biggest danger to the seed comes before dispersal, when a large number are killed by this fungus," said Joshua Tewksbury, a University of Washington assistant professor of biology. "Both the fungus and the birds eat chilies, but the fungus never disperses seeds – it just kills them."
Fruits use sugars and lipids to attract consumers such as birds that will scatter the seeds. But insects and fungi enjoy sugars and lipids too, and in tandem they can be fatal to a pepper's progeny.
However, the researchers found that the pungency, or heat, in hot chilies acts as a unique defense mechanism. The pungency comes from capsaicinoids, the same chemicals that protect them from fungal attack by dramatically slowing microbial growth.
"Capsaicin doesn't stop the dispersal of seeds because birds don't sense the pain and so they continue to eat peppers, but the fungus that kills pepper seeds is quite sensitive to this chemical," said Tewksbury, lead author of a paper documenting the research.
"Having such a specific defense, one that doesn't harm reproduction or dispersal, is what makes chemistry so valuable to the plant, and I think it is a great example of the power of natural selection."
The paper is published the week of Aug. 11 in the online Proceedings of the National Academy of Sciences. Co-authors are Karen Reagan, Noelle Machnicki, Tomás Carlo, and David Haak of the University of Washington; Alejandra Lorena Calderón Peñaloza of Universidad Autonoma Gabriel Rene Moreno in Bolivia; and Douglas Levey of the University of Florida. The work was funded by the National Science Foundation and the National Geographic Society.
The scientists collected chilies from seven different populations of the same pepper species spread across 1,000 square miles in Bolivia. In each population, they randomly selected peppers and counted scars on the outer skin from insect foraging. The damage was caused by hemipteran insects – insects such as seed bugs (similar to aphids and leaf hoppers) that have sucking mouth parts arranged into a beaklike structure that can pierce the skin of a fruit.
The researchers found that not all of the plants produce capsaicinoids, so that in the same population fruit on one plant could be hotter than a jalapeño while fruit from other plants might be as mild as a bell pepper. But there was a much-higher frequency of pungent plants in areas with larger populations of hemipteran insects that attack the chilies and leave them more vulnerable to fungus.
The scientists also found that hot plants got even hotter, with higher levels of capsaicinoids, in areas where fungal attacks were common. But in areas with few insects and less danger of fungal attack, most of the plants lacked heat entirely. In those areas, chilies from the plants that did produce capsaicinoids had a lot less kick because they only produced about half the capsaicinoids as the plants did in areas where fungal attack was common.
Using chemical substances as a defense is not unique to peppers. Tomatoes, for example, are loaded with substances that give their unripened fruit a decidedly unpleasant taste, allowing the seeds a chance to mature and be dispersed. But unlike peppers, tomatoes and most other fruits lose their chemical defenses when the fruit ripens. That is a necessary step, scientists believe, because otherwise the fruit would not be consumed by birds and other animals that disperse the seed. The problem with that strategy, Tewksbury said, is that it leaves the fruit exposed to fungal attack.
"By contrast, peppers increase their chemical defense levels, or their heat, as they ripen. This is a very different model and peppers can get away with it because birds don't sense pain when they eat capsaicin," Tewksbury said. "I think a lot of plants would love to come up with this way of stopping fungal growth without inhibiting dispersers. It's just very hard to do."
The fact that chilies have capsaicin could be the reason humans started eating the peppers in the first place, he said. Chili peppers and corn are among the earliest domesticated crops in the New World.
"Before there was refrigeration, it was probably adaptive to eat chilies, particularly in the tropics," Tewksbury said. "Back then, if you lived in a warm and humid climate, eating could be downright dangerous because virtually everything was packed with microbes, many of them harmful. People probably added chilies to their stews because spicy stews were less likely to kill them."
All chilies originated in South America, and wild chilies now grow from central South America to the southwestern United States. Explorers carried the plants back to Europe, but they were not widely used there. From Europe, chilies made their way to Asia and Africa, where they have become a common ingredient in nearly every tropical cuisine.
"In the north, any adaptive benefit to using eating chilies would be much smaller than at the equator because microbial infection of food is less common and it's easier to keep food cold. Maybe that's why food in the north can be so boring," Tewksbury said.
"Along the equator, without access to refrigeration, you could be dead pretty quickly unless you can find a way to protect yourself against the microbes you ingest every day."
|
Addison's disease: Causes, symptoms, diagnosis and treatment
What Is Addison's disease?
Addison’s disease, primary adrenal insufficiency or hypoadrenalism, is a condition affecting the adrenal glands.
A person with Addison’s disease doesn't produce enough of two important hormones: cortisol and aldosterone.
Symptoms of Addison’s disease include fatigue, muscle weakness, low mood, loss of appetite, unexpected weight loss and being thirstier than usual.
The condition is rare, affecting around 8,400 people in the UK.
Cortisol and aldosterone
Cortisol's most important function is to help the body respond to stress. It also helps regulate your body's use of protein, carbohydrates, and fat; helps maintain blood pressure and cardiovascular function; and helps control inflammation. Aldosterone helps your kidneys regulate the amount of salt and water in your body - one of the main ways your body keeps blood pressure under control. When aldosterone levels drop too low, your kidneys cannot keep your salt and water levels in balance. This makes your blood pressure drop.
There are two forms of Addison's disease. If the problem is with the adrenal glands themselves, it's called primary adrenal insufficiency. If the adrenal glands are affected by a problem starting somewhere else - such as the pituitary gland - it's called secondary adrenal insufficiency.
What causes Addison's disease?
When Addison's disease is the result of a problem with the adrenal glands themselves, it is called primary adrenal insufficiency. About 70% of the time, this happens because the body's self-defence mechanism - the immune system - mistakenly attacks the adrenal glands. This so-called "autoimmune" attack destroys the outer layer of the glands.
Long-lasting infections - such as tuberculosis, HIV, and some fungal infections - can harm the adrenal glands. Cancer cells that spread from other parts of the body to the adrenal glands can also cause Addison's disease.
Less commonly, Addison's disease is not due to the failure of the adrenal glands themselves. This condition, secondary adrenal insufficiency, can be caused by problems with the hypothalamus or pituitary gland, located in the centre of the brain. These glands produce hormones that act as a switch and can turn on or off the production of hormones in the rest of the body. A pituitary hormone called ACTH is the switch that turns on cortisol production in the adrenal gland. If ACTH levels are too low, the adrenal glands stay in the off position.
Another cause of secondary adrenal insufficiency is prolonged or improper use of steroid hormones such as prednisolone. Less common causes include pituitary tumours and damage to the pituitary gland during surgery or radiotherapy.
What are the symptoms of Addison's disease?
Over time, Addison's disease leads to these symptoms:
- Chronic fatigue and muscle weakness.
- Loss of appetite, inability to digest food, and weight loss.
- Low blood pressure (hypotension) that falls further when standing. This makes a person dizzy, sometimes to the point of fainting.
- Blotchy, dark tanning and freckling of the skin. This is most noticeable on parts of the body exposed to the sun, but also occurs in unexposed areas. Darkened skin is particularly likely to occur on the forehead, knees and elbows or along scars, skin folds, and creases (such as on the palms).
- Blood sugar abnormalities, including dangerously low blood sugar (hypoglycaemia).
- Nausea, vomiting, and diarrhoea.
- Inability to cope with stress.
- Moodiness, irritability, and depression.
- Increased thirst.
- Craving of salty foods.
Some of these symptoms may indicate conditions other than Addison's disease. If you experience any of the symptoms, talk with your doctor about whether Addison's disease or another condition may be the cause.
Because symptoms of Addison's disease progress slowly, they may go unrecognised until a physically stressful event, such as another illness, surgery, or an accident. All of a sudden, the symptoms may get much worse. When this happens, it's called an adrenal crisis (Addisonian crisis). For one in four people with Addison's disease, this is the first time they realise they are ill. An adrenal crisis is considered a medical emergency because it can be fatal.
Symptoms of an adrenal crisis include: |
Enjoy these teacher resources from Blueberry Hill Books!
Check out the plan below to see how these free materials can be incorporated into a daily routine.
A Sample Routine for Emergent Readers
The materials and activities provided here are geared toward emergent readers at a grade one level, but many of them would be appropriate for children in kindergarten and grade two as well.
What's the plan?
You have to know where you are going if you’re ever going to get there!
Select research-based goals/strategies for a two-week block, and then break them down into daily segments. See pg. 33 in the Level 1, 2, 3 Guidebook for a list of early reading strategies. The block plan (pg. 56–67) provides a suggested progression, but you will need to adapt the instruction based on the needs of your students.
Research indicates that the necessary components of a successful literacy program are:
1. The explicit teaching of strategies as a main focus during guided reading, shared reading, partner reading, read-alouds, independent reading and writing activities.
2. Phonemic awareness and phonics instruction.
3. The development of a sight vocabulary.
4. Vocabulary development.
5. Writing instruction.
6. Assessment-based instruction.
Check out a Sample Daily Routine for Literacy Instruction here.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. |
In this tutorial you will learn about Java Virtual Machine (JVM) architecture.
What is JVM?
The Java Virtual Machine (JVM) is software - a virtual machine - that takes a .class file as input and runs the Java bytecode it contains. Java is machine independent, but the JVM itself is machine dependent.
What is Bytecode?
When a Java program is compiled using the javac compiler, it is converted into an intermediate code known as bytecode. The bytecode is stored in a .class file. Bytecode contains special instructions that are understandable by the JVM. We can run this .class file on any platform; all we need is the JVM for that platform. This feature makes Java a machine-independent language.
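For example, a minimal class compiled with javac and run with java follows exactly this flow (javap lets you inspect the generated bytecode):

```java
// HelloJvm.java
// Compile:  javac HelloJvm.java   -> produces HelloJvm.class containing bytecode
// Run:      java HelloJvm         -> the local JVM loads and executes the bytecode
// Inspect:  javap -c HelloJvm     -> prints the bytecode instructions
public class HelloJvm {
    public static void main(String[] args) {
        System.out.println("Hello from the JVM!");
    }
}
```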
Java Virtual Machine (JVM) Architecture
Phases of JVM
It has the following four phases.
- Load code
- Verify code
- Execute code
- Provide runtime environment
Internal Architecture of JVM
The class loader takes the .class file as input and loads the Java bytecode. Before loading, the JVM verifies the code. If the code is valid, memory for it is allocated in the different runtime data areas.
The method area is used to store class code, method code and static variables.
The heap is responsible for storing objects; it is one of the JVM's runtime data areas.
The stack area is a collection of stack frames. Each stack frame stores the information related to a method, such as its local variables and the value returned by the method. When a method is invoked, a new frame is created; the frame is deleted when the method's execution completes.
The program counter (PC) register contains the address of the next instruction to be executed.
Native Method Stack
The native method stack stores information for the native methods used in the application; non-Java code (for example, C or C++ called through JNI) is known as native code.
The execution engine consists of an interpreter and a just-in-time (JIT) compiler. The interpreter executes bytecode directly, while the JIT compiler compiles frequently executed sections of bytecode into native machine code, which decreases execution time and increases overall performance.
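The short example below maps these areas onto ordinary Java code; the comments follow the conventional textbook placement of each piece of data, and the class itself is only an illustration:

```java
// Illustration of where data conventionally lives at run time.
public class RuntimeAreasDemo {
    // Class code and static variables are kept in the method area.
    static int instancesCreated = 0;

    // Object data (including this array) is allocated on the heap.
    private final int[] buffer = new int[1024];

    int sum() {
        // Local variables such as 'total' and 'i' live in this method's stack
        // frame; the frame is discarded when sum() returns.
        int total = 0;
        for (int i = 0; i < buffer.length; i++) {
            total += buffer[i];
        }
        return total;
    }

    public static void main(String[] args) {
        RuntimeAreasDemo demo = new RuntimeAreasDemo(); // object created on the heap
        instancesCreated++;                             // static field in the method area
        System.out.println(demo.sum());                 // invoking sum() pushes a new stack frame
    }
}
```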
Clean water is essential for drinking, as well as for use in food and beverage production. Only 3% of the water in the world is fresh, and fresh water sources can contain chemical contaminants, like perfluorooctanoic acid (PFOA, one of the per- and polyfluoroalkyl substances known as PFAS) and arsenic, as well as microbiological contaminants and pathogens. Even after the treatment process, residual chemicals and contaminants in drinking water can exceed safe and legal levels. Municipalities, private water treatment stations, and food and beverage production facilities must regularly test their water supply to ensure chemical and microbiological contaminant levels are kept at or below compliance levels set by regulatory agencies. With a growing global population, the demand for fresh water is predicted to increase by 40% by 2030¹. With increased demand, maintaining efficiency and accuracy in water testing will be a constant challenge.
Any industry that produces, uses, or processes drinking water must comply with national regulations and perform regular tests to ensure that drinking water is free of chemical and microbiological contamination. Common chemical contaminants include aluminum, ammonium, bromate, iron, manganese, chloride, nitrate, nitrite, sulfate, chromium, and other metals.
Additional testing is performed when certain industrial environmental pollutants may be present. Volatile organic compounds (VOCs) are often used during manufacturing, including the production of petroleum products, adhesives, pharmaceuticals, paints, or refrigerants. VOCs are also used as gasoline additives, solvents, hydraulic fluids, and dry cleaning agents. VOC contamination is a human-health concern because many are toxic and are known or suspected human carcinogens.
Fresh water supplies can also be a source of pathogens, leading to the spread and transmission of diseases without microbiological control. Testing the water supply for every pathogen is expensive and time consuming, so indicator organisms are used to detect fecal and other contamination. Coliforms, Escherichia coli, and Klebsiella pneumoniae are typically used to test for fecal contamination.
Microbiological testing of commercial beverages and water products is essential to ensure that they are safe to drink, but also to prevent spoilage. Bacteria, yeasts, molds, and pathogens that occur in raw materials or during the production process can decrease the quality and safety of finished products. Both municipal and commercial water supplies are frequently tested to meet regulatory requirements.
Disinfection of drinking water reduces the risk of pathogenic infections, but the process can leave behind residues and disinfection byproducts (DBPs) that can pose risk to human health. Additionally, organic and inorganic pollutants may be naturally present in our water supply at its source. The identification of pollutants and DBPs is essential to ensure safe consumption.
Since contaminated water is harmful to humans and the environment, regulatory agencies, like the U.S. Environmental Protection Agency (USEPA), require the use of official standardized methods (e.g., ISO, EPA, AOAC) for testing drinking water and wastewater. These techniques include quantitative testing, chromatography, spectrophotometry, reflectometry, physical parameter measurements (e.g., cloud point, color, hardness, pH, odor), and pathogen testing (molecular or cell culture-based). These quality control tests are done at the input source, after in-process treatment steps like flocculation and clarification, and at post-filtration outputs, including tap sampling.
There's incredible value in giving your students challenging tasks. Home educators have a headache or need to take a sick day? Challenge tasks can be a way to provide yourself with the rest you need while giving your student something rigorous and engaging to do. Maybe your student is gifted and needs a bit more of a challenge during the regular learning time? Tasks like these can provide exactly that. Maybe you're looking for a low-stress but highly engaging learning opportunity? Cue a challenging task!
Coming up with ideas and specific challenges can be difficult. You have to think of challenges, set the specifics, and do a bit of prep, even if it's minimal. Here are a few ideas to help give you a head start.
Egg Drop Challenge
Grade levels: 5+
Why it's a great idea: The egg drop challenge can be done independently or in small groups. Students don't need to be "on the same grade level," and the home educator doesn't need to have all the answers. In fact, not knowing what will work will actually help the students learn more through trial and error. Also, beyond the eggs, the "required list of materials" is relatively flexible. Cotton, tape, paper, boxes, string, even grass clippings.
How to prep ahead of time: The only thing you can't prepare is the eggs--you'll need some fresh uncooked eggs (or another small, fragile item) for this challenge. However, you can print out the directions (there are a few different ways to do this) for the challenge. Place the directions along with the materials in a shoebox or large bag and save it until the day you'll need it! For project extension, depending on the student's age, you can add things like showing footage of a space landing, reading Rosie Revere, Engineer by Andrea Beaty, or making connections between the challenge and the human body.
Straw Rocket Challenge
Grade levels: 3+
Why it's a great idea: This challenge is easy, stress-free, and can be fully prepared for a random day in the future. There isn't a huge list of required materials. Plus, it's fun!
How to prep ahead of time: This is as simple as printing out the directions, collecting the non-perishable materials, and placing them to the side until needed. If you have a student who might need more structure, NASA has a step-by-step option. In addition, you can extend learning by watching and discussing rocket launches, reading up on basic rocket science, or studying the design of rockets.
Cooking without a stove
Grade levels: 6+
Why it's great: Students can actually solve this challenge in a few different ways. That means you have a lot of flexibility on what you require and what materials are needed.
How to prep ahead of time: First, be sure you remember food safety rules--your students shouldn't be challenged to cook things like raw meat. Decide what method you'd like your student to try, and provide them with the materials. Your student can make a solar-powered pizza hot box using an old satellite dish, solar visor, or a Pringles can. Extend the learning by doing more than just thinking about how solar ovens work. Challenge your student to think about how this technology could be applied on a larger scale to fill the needs of others.
Can you think of other challenge ideas that kids can conduct? What are they? Can they be prepared ahead of time? |
Potential first traces of the universe's earliest stars uncovered
Astronomers may have discovered the ancient chemical remains of the first stars to light up the universe. Using an analysis of a distant quasar observed by the 8.1-meter Gemini North Telescope, located on Hawaii, the scientists found an unusual ratio of elements that, they argue, could come only from the debris produced by the all-consuming explosion of a 300-solar-mass first-generation star. The work was supported by the U.S. National Science Foundation. Gemini North is operated by NSF's NOIRLab.
The very first stars likely formed when the universe was only 100 million years old, less than 1% of its current age. These first stars were so massive that, when they ended their lives as supernovae, they tore themselves apart and seeded interstellar space with a distinctive blend of heavy elements.
By analyzing one of the most distant known quasars using the Gemini North Telescope, one of the two identical telescopes that make up the International Gemini Observatory, astronomers believe they have identified the remnant material of the explosion of a first-generation star. Using an innovative method to deduce the chemical elements contained in the clouds surrounding the quasar, they noticed a highly unusual composition: the material contained over 10 times more iron than magnesium compared to the ratio of these elements found in the sun.
During their research, the astronomers studied results from a prior observation taken by the 8.1-meter Gemini North Telescope using the Gemini Near-Infrared Spectrograph. A spectrograph splits the light emitted by celestial objects into its constituent wavelengths, which carry information about which elements the objects contain. Gemini is one of the few telescopes of its size with suitable equipment to perform such observations.
Two co-authors of the analysis, Yuzuru Yoshii and Hiroaki Sameshima of the University of Tokyo, have developed a method of estimating the abundance of the elements, enabling them to discover the conspicuously low magnesium-to-iron ratio.
If this is indeed evidence of one of the first stars, the discovery will help explain how matter in the universe evolved into what it is today, including in humans. To test this interpretation more thoroughly, many more observations are required to see if other objects have similar characteristics.
Astronomers might be able to find the chemical signatures of long-gone supernova explosions still imprinted on objects in the universe.
"We now know what to look for, we have a pathway," said co-author Timothy Beers, an astronomer at the University of Notre Dame. |
Human Blood Circulatory System Definition:
The human blood circulatory system transports oxygenated blood from the lungs to the various tissues of the body and returns deoxygenated blood to the lungs.
The heart, the lungs, and the blood vessels work together to form the closed circuit of the circulatory system.
The heart is one of the most important organs in the entire human body and the centerpiece of the human blood circulatory system. It is really nothing more than a pump, composed of muscle, which pumps blood throughout the body, beating approximately 72 times per minute throughout our lives. It contains four chambers: two atria and two ventricles. Oxygen-poor blood enters the right atrium through a major vein called the vena cava. The blood passes through the tricuspid valve into the right ventricle.
Next, the blood is pumped through the pulmonary artery to the lungs for gas exchange. Oxygen-rich blood returns to the left atrium via the pulmonary vein. The oxygen-rich blood flows through the bicuspid (mitral) valve into the left ventricle, from which it is pumped through a major artery, the aorta. Coronary arteries supply the heart muscle with blood.
The sinoatrial node is called the pacemaker because it generates the electrical impulses that set the heart's rhythm.
Blood is the medium of transport in the body. The fluid portion of the blood, the plasma, is a straw-colored liquid composed primarily of water. All the important nutrients, the hormones, and the clotting proteins as well as the waste products are transported in the plasma. Red blood cells and white blood cells are also suspended in the plasma. Plasma from which the clotting proteins have been removed is serum.
Red Blood Cells (RBC)
- Red blood cells are also called erythrocytes.
- These are disk-shaped cells produced in the bone marrow.
- Red blood cells have no nucleus, and their cytoplasm is filled with hemoglobin.
- Hemoglobin is a red-pigmented protein that binds loosely to oxygen atoms and carbon dioxide molecules.
- A red blood cell circulates for about 120 days and is then destroyed in the spleen.
- When the red blood cell is destroyed, its iron component is preserved for reuse in the liver.
- The remainder of the hemoglobin converts to bilirubin. This amber substance is the chief pigment in human bile, which is produced in the liver.
- Red blood cells commonly have immune-stimulating polysaccharides called antigens on the surface of their cells.
- Individuals having the A antigen have blood type A (as well as anti-B antibodies); individuals having the B antigen have blood type B (as well as anti-A antibodies); individuals having the A and B antigens have blood type AB (but no anti-A or anti-B antibodies); and individuals having no antigens have blood type O (as well as anti-A and anti-B antibodies).
White Blood Cells (WBC)
- White blood cells are referred to as leukocytes.
- They are generally larger than red blood cells and have a clearly defined nucleus.
- They are also produced in the bone marrow and have various functions in the body.
- Certain white blood cells called lymphocytes are essential components of the immune system.
- Neutrophils and monocytes function primarily as phagocytes; that is, they attack and engulf invading microorganisms.
- About 30 percent of the white blood cells are lymphocytes, about 60 percent are neutrophils, and about 8 percent are monocytes.
- The remaining white blood cells are eosinophils and basophils. Their functions are uncertain; however, basophils are believed to function in allergic responses.
Platelets
- Platelets are small, disk-shaped cell fragments produced in the bone marrow.
- They lack nuclei and are much smaller than erythrocytes. Also known technically as thrombocytes, they serve as the starting material for blood clotting. |
Building student understanding across racial differences
Can you imagine a friendship that developed between Emma, a black second-grade student who attended a large urban school in Memphis, Tennessee, and Cassidy, a white third-grade student in a rural school in New Jersey? These two girls came from different backgrounds and lived in communities where they had little to no interactions with any other races or cultures. Based on a project I helped organize, they were paired as buddies and were provided opportunities to communicate one-on-one, becoming fast friends. Without knowing it, the friendship they developed—and still maintain—helped chip away at the racial and cultural barriers that continue to divide our society.
For many years, I grappled with how to create racially diverse classroom experiences for my students. I want my students to experience what I did during my childhood living in the very same neighborhood. My father chose this community for our family because of its access to the best schools, teachers, and resources in Memphis. But when I was young, my community looked very different than it does today—I did not have many neighbors who looked like me; nearly everyone was white. Because of the area’s changing demographics, my students now have the opposite experience: They typically have little to no interactions with people of other races—especially people who are white. Recognizing how limited exposure to people of different races can contribute to racism and bias, I’ve sought ways to bridge the distance so that students like Emma and Cassidy learn to understand and appreciate others.
With the protests raging around the country right now, it is more apparent than ever that there needs to be a better understanding of racism and what it means in America. This starts in our classrooms and in our homes when children are young. As teachers, we need to work harder to ensure that our students make connections with people of other races. I believe that teaching our children about differences will help empower the next generation of dreamers and doers, and they will be the cure to end racism. I’ve tried to bridge those gaps in my classroom by connecting my students with the local community and leveraging technology to connect them with the world at large.
FOSTERING CROSS-CULTURAL CONNECTIONS
Though technology can be isolating, teachers can also use tech tools to connect classrooms and foster relationships between students. For the past three years, I have used a free online platform called Empatico to partner my second-grade class with another class in a different state. The platform also features lessons that help students develop social and emotional skills like communication and empathy.
After realizing that two New Jersey co-teachers, Michael Dunlea and Stacey Delaney, taught a class of all white students (and my Tennessee class had all black students), we recognized that we had an amazing opportunity to partner and provide our classes with a safe space to ask questions, show their creativity, and demonstrate understanding toward others.
Early on, we spent some time discussing the definitions of compassion, empathy, and respect, and students took time exhibiting these traits in discussions with their peers as practice. Then, we organized dual-class lessons through FaceTime; our students also exchanged letters and gifts; and simultaneously, they read the same books—like biographies of famous black leaders and one on Ruby Bridges—to compare insights. The positive relationship that developed between our classes was so strong that I decided to visit New Jersey in person several times, and Michael and Stacey surprised us once at the Civil Rights Museum.
CONNECTING WITH THE LOCAL COMMUNITY
Taking the classroom experiences with our partner class as a foundation, I sought out field experiences and community speakers to help my students connect classroom lessons to larger issues and concepts. Every summer, I go to Little Rock Central High School (the first integrated high school in Arkansas, where the Little Rock Nine attended) to reaffirm to myself why I teach. During one visit, I befriended park ranger Toni and asked her to speak to my class and our partner class in New Jersey. On a Zoom video call, Toni discussed the Little Rock Nine, the civil rights movement, and segregation. Through this, the students developed a deeper understanding of the injustices and inequalities that the black race has experienced.
BROADENING WORLD VIEWS
To encourage students to expand their understanding of race and difference even further, I collaborated with Jennifer Williams, a global educator and author of Teach Boldly, to connect with educators in other countries such as China, Bangladesh, and Nigeria. Through these connections, I'm able to broaden my classroom lessons with a global context and give my students more knowledge of how they can contribute to building a better world.
My class and our New Jersey partner class did a unit together on festivals around the world, where they identified commonalities and differences. My students also joined other students in the first-ever global “Peace Sign Project.” Students around the globe created signs, read books and articles about peace, and led a peace march at their schools or communities. Then my students led the first-ever children’s march at the National Civil Rights Museum on what would have been Dr. King’s 90th birthday. With the relationship with their New Jersey friends as a foundation, my students learned to impact change in the world by taking a stand on issues that matter.
It’s now been three years since Michael, Stacey, and I first partnered together, and we’ve continued the relationship with each new class. Through these impactful experiences we’ve had together, my students have shown both academic and social and emotional growth. They want to engage in more positive discourse—with each other and with people who are different. They ask probing questions to seek understanding from peers, and they aren’t afraid to talk about race and build friendships across racial differences. I’ve heard the same from Michael and Stacey.
I will never forget the day when one of my students asked me if racism had ended. I asked her why she thought racism had ended. She said, “I have white friends in New Jersey. They love and care about me.” I wanted her to have hope, so I responded, “Racism has not ended, but it can end with you.” |
When the Andromeda galaxy collides with our own, beloved Milky Way, there will be a period of, shall we say, readjustment. “It is likely the sun will be flung into a new region of our galaxy,” NASA scientists working with Baltimore’s Space Telescope Science Institute predicted this week. “The Milky Way has had a lot of small mergers, but this indeed will be unprecedented,” Johns Hopkins astronomer Rosemary Wyse explains. And this isn’t a what-if situation; the NASA crew is throwing around phrases like “predict with certainty,” which they don’t do lightly.
That's the bad news. The good news is, Earth won't necessarily be destroyed! Stars inside both galaxies are spaced widely enough that a head-on smashup between individual stars is unlikely. But all the usual orbits we know and love will be shaken up; our own solar system will probably be flung much farther out from the galactic core than it is today. With its stars jostled out of their mostly-circular orbits, the Milky Way will no longer resemble a flattened pancake; instead, the post-collision galaxies will merge into a huge elliptical galaxy, whose bright core will dominate the night sky.
(In case all this talk of galaxy-smashing makes you nervous, we should point out that by the time this happens — in billions of years — the sun will have probably become a red giant and engulfed the Earth already, anyways.)
So if you’re feeling upset about something petty today, just meditate on these two sentences: “The universe is expanding and accelerating, and collisions between galaxies in close proximity to each other still happen because they are bound by the gravity of the dark matter surrounding them. The Hubble Space Telescope’s deep views of the universe show such encounters between galaxies were more common in the past when the universe was smaller.” Kinda puts everything into perspective, huh?
Per NASA, this is what it might look like:
- First Row, Left: Present day.
- First Row, Right: In 2 billion years the disk of the approaching Andromeda galaxy is noticeably larger.
- Second Row, Left: In 3.75 billion years Andromeda fills the field of view.
- Second Row, Right: In 3.85 billion years the sky is ablaze with new star formation.
- Third Row, Left: In 3.9 billion years, star formation continues.
- Third Row, Right: In 4 billion years Andromeda is tidally stretched and the Milky Way becomes warped.
- Fourth Row, Left: In 5.1 billion years the cores of the Milky Way and Andromeda appear as a pair of bright lobes.
- Fourth Row, Right: In 7 billion years the merged galaxies form a huge elliptical galaxy, its bright core dominating the nighttime sky. |
As the Earth’s nearest neighbour, Venus is one of the easiest planets to spot in the night sky. Named after the Roman goddess of love, it shines resplendently, often just after sunset or before sunrise. This week, it will be particularly bright, as it reaches its greatest brightness in the dawn sky of the 8th. So, what makes Venus stand out?
Venus in History
As a planet visible with the naked eye, Venus has been known to humans since antiquity. It was not, however, always known to be a planet. Early on, the word ‘star’ was applied to all points of light in the sky, regardless of their modern-day classification. There were, however, different types of star. Planets such as Venus were given the name ‘wandering stars’ because of their movement: planets often appear to move independently of the stars behind them.
The Ancient Egyptians thought that Venus was in fact two different objects: the ‘Morning Star’ and the ‘Evening Star’. Because Venus’ orbit is closer to the Sun than ours, we only ever see the planet in a similar direction to the Sun. Therefore, it appears shortly after sunset or just before sunrise, close to the Sun in the sky. This belief permeated other civilisations, like the Ancient Greeks, as well.
It wasn’t until a Greek mathematician, Pythagoras, took a closer look at the two stars that he realised they were in fact the same object: Venus. Many years later, the famous astronomer, Galileo Galilei, pointed his newly-built telescope towards the planet. He discovered that it had phases, similar to the Moon. This was an incredible discovery that indicated that Venus moved around the Sun, and not around the Earth as previously thought.
The first spacecraft to successfully land on Venus was the Venera 7 lander, launched by the Soviet Union in 1970. This also marked the first successful landing on another planet. After it reached the surface, its signal seemed to fade into noise, but by examining the recording more closely, scientists managed to recover information from a signal at just 1% of the strength it had before landing. Venera 7 survived temperatures of 465°C and a pressure of 90 atmospheres (that's 90 times our own!) for 23 minutes before its batteries failed.
What is Venus like?
Venus has the densest atmosphere in the whole of the solar system. This means that it is constantly covered in a thick layer of cloud, making it all but impossible for instruments to see down to the surface. This mystery gave rise to all sorts of theories about what the surface of Venus was like – from scientists and science fiction writers alike! People liked to imagine that Venus was a tropical world, full of jungles and rainforests. Perhaps even somewhere you could go on holiday in the distant future!
Unfortunately, this is far from the case. Venus’ dense atmosphere contributes to what is essentially a runaway greenhouse effect. In the same way that climate change is heating up our own atmosphere, massive amounts of carbon dioxide trap heat beneath the clouds. This leads to incredibly high temperatures and pressures. Just being on the surface is enough to melt lead!
Other than the inhospitable conditions, Venus is the Earth's twin in many ways. It has a similar size and density, and orbits the Sun at 0.7 AU (1 AU is the distance between the Earth and the Sun). One year, or the time it takes to orbit the Sun, is 225 Earth days long. However, this only amounts to just under 2 Venusian days. Venus is one of two planets in the solar system that rotate from east to west, the other being Uranus. This is the opposite direction to the one in which it moves around the Sun. Coupled with its slow rotation speed, this means that one day-night cycle takes 117 Earth days.
How to see it
Venus is an easy object to see with the naked eye. Being our nearest neighbour, it is the brightest object in the sky, except the Sun and Moon. At the moment, you can see it in the morning sky just before sunrise, shining in the east. It appears in the constellation of Taurus the Bull.
On the morning of the 8th, Venus appears at its greatest brightness, of -4.5 magnitude (lower magnitudes are brighter). This is when it will be best viewed, and can even cast a shadow without interference from artificial lights or the Moon. Then, on the 10th, it will reach aphelion, its furthest point from the Sun. However, this will not impact on its visibility from Earth.
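If you are curious how much brighter -4.5 magnitude really is, the magnitude scale is logarithmic: a difference of 5 magnitudes corresponds to a factor of 100 in brightness. The little calculation below compares Venus with a magnitude-0 star (roughly Vega); the comparison star is just an illustrative assumption, not something from this article:

```java
// The magnitude scale is logarithmic: a difference of 5 magnitudes is a
// factor of 100 in brightness, so the ratio is 100^(deltaM / 5).
// Comparing Venus at -4.5 with an assumed magnitude-0 star (roughly Vega).
public class MagnitudeRatio {
    public static void main(String[] args) {
        double venus = -4.5;
        double referenceStar = 0.0;
        double ratio = Math.pow(100, (referenceStar - venus) / 5.0);
        System.out.printf("Venus appears about %.0f times brighter.%n", ratio); // ~63
    }
}
```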
Keep an eye out for the Morning Star this week! |
Normal 3-D vision works because our left eye has a slightly different view of the world from our right eye.
3-D glasses typically have the left-hand optic covered in red and the right-hand optic covered in blue. The 3-D material – a picture or a single frame of a film – has two images printed, one in red and one in blue. When we view this image through the glasses, our left eye mainly sees the image printed in red and our right eye mainly sees the image printed in blue. Our brain interprets the differences between the two images as being due to differences in perspective and reconstructs a 3-D view of the scene.
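As a rough sketch of how an anaglyph image like this can be assembled digitally, the following program combines the red channel of a left-eye view with the green and blue channels of a right-eye view; the file names are placeholders, and real anaglyph pipelines are considerably more sophisticated:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Toy red/blue anaglyph: red channel from the left-eye view, green and blue
// channels from the right-eye view. The file names are placeholders.
public class Anaglyph {
    public static void main(String[] args) throws Exception {
        BufferedImage left = ImageIO.read(new File("left.png"));   // image for the left eye
        BufferedImage right = ImageIO.read(new File("right.png")); // image for the right eye

        int w = Math.min(left.getWidth(), right.getWidth());
        int h = Math.min(left.getHeight(), right.getHeight());
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int l = left.getRGB(x, y);
                int r = right.getRGB(x, y);
                // Keep red from the left view, green and blue from the right view.
                out.setRGB(x, y, (l & 0x00FF0000) | (r & 0x0000FFFF));
            }
        }
        ImageIO.write(out, "png", new File("anaglyph.png"));
    }
}
```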
On Tuesday, during a teleconference with the media, NASA announced the discovery of 1,284 exoplanets by the Kepler Space Telescope—essentially doubling the number of these known mysterious worlds that exist outside of our Solar System.
Based on their perceived size, NASA says over 550 of the planets announced today are estimated to be rocky planets. Of those rocky planets, Kepler has identified 21 planets located in the habitable zone, including 9 in today’s discovery. Since the only example we have of life in the Universe is on Earth, which is also located in the habitable zone, these planets could potentially harbor life.
“Before the Kepler space telescope launched, we did not know whether exoplanets were rare or common in the galaxy. Thanks to Kepler and the research community, we now know there could be more planets than stars,” said Paul Hertz, Astrophysics Division director at NASA Headquarters. “This knowledge informs the future missions that are needed to take us ever-closer to finding out whether we are alone in the universe.”
Kepler is transforming the way we view the Universe. Launched in 2009, the space observatory has been busy searching the Milky Way Galaxy for Earth-sized planets that exist in a cosmic sweet spot around its host star, dubbed the ‘Goldilocks zone’. This refers to the region around a star that’s “just right” for the presence of liquid water and potentially life.
Kepler’s main goal is to determine how many Earth-sized planets orbit the habitable zone of Sun-like stars. “This work will help Kepler reach its full potential,” explains Natalie Batalha, a Kepler mission scientist at NASA’s Ames Research Center, “by providing a deeper understanding of the number of stars that harbor potentially habitable, Earth-size planets—a number that’s needed to design future missions to search for habitable environments and living worlds.”
Last July, The SETI Institute and NASA announced the discovery of the most Earth-like planet ever discovered, Kepler 452b. The planet is 1,400 light-years away from Earth and orbits a star referred to as our Sun’s cousin. The distance between Kepler 452b and its parent star is very similar to the distance of Earth’s orbit around the Sun. The exoplanet—which is about 60% larger than our planet, is thought to be rocky with an atmosphere and some amount of water.
Kepler monitored one patch of sky for four straight years, observing 150,000 stars, and determined that many of them host a variety of planets; most of which do not have an analogue within our Solar System. Kepler’s difficult mission is to separate the rocky, terrestrial planets—ranging in size from half the size of Earth to twice its size—from the rest. Scientists are also hoping to determine what percentage of the stars in our galactic backyard may have Earth-sized planets in or near the habitable zone.
As a space observatory, the Kepler telescope identifies these worlds using the ‘transit method’: a technique of observing the periodic dimming of the light of stars in its line of sight. When a planet crosses in front of a star, it blocks out a portion of the star's light; this is called a ‘transit’. This same dimming is what Kepler sees as exoplanets orbit their host stars. On Monday, Mercury's transit of the Sun was visible from Earth.
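To get a feel for how small these dips are, the fractional dimming during a transit is roughly the ratio of the planet's disk area to the star's, i.e. (Rp/Rs)². The sketch below plugs in round-number radii for an Earth-size planet and a Sun-like star; the numbers are illustrative assumptions, not values from the Kepler announcement:

```java
// Fractional dimming during a transit, approximated as the ratio of the
// planet's disk area to the star's: depth ~ (Rp / Rs)^2.
// The radii below are round-number assumptions for an Earth-size planet
// crossing a Sun-like star.
public class TransitDepth {
    public static void main(String[] args) {
        double planetRadiusKm = 6_371;
        double starRadiusKm = 696_000;
        double depth = Math.pow(planetRadiusKm / starRadiusKm, 2);
        // Prints roughly 8.4e-05, i.e. the star dims by less than 0.01%.
        System.out.printf("Transit depth: %.1e (%.4f%% dimming)%n", depth, depth * 100);
    }
}
```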
During Kepler's original mission, the telescope identified 4,302 potential exoplanets. With today's announcement, nearly half of those (over 2,000) have been verified as planets.
In a paper published today in the Astrophysical Journal, a research team led by Timothy Morton of Princeton University made the discovery and described how they were able to weed out 428 planetary impostors or false positives.
Not every dip in a star's light is caused by a planet; certain astrophysical phenomena, such as low-mass stellar companions and brown dwarfs, can masquerade as planets in the data, so follow-up observations are needed to weed out false positives. This process is lengthy, requiring a lot of time and resources to complete.
This discovery is based on statistical analysis that can be used to examine many planets at the same time. Morton used this method to determine how likely it was that each candidate was in fact a planet.
The team says that based on Kepler data, there could be over 10 billion rocky planets orbiting in the habitable zone in the entire galaxy.
What is a gateway? Definition and meaning
The term gateway has many different meanings, depending on whether you are talking in a business, IT, or other context. ‘IT’ stands for Information Technology. In information technology, gateways are hardware devices or software that connect different computer networks. Also in IT, gateways can be computer systems on Earth that switch voice and data signals between terrestrial networks and satellites.
The term may refer to any point or passage through which people may enter a region. For example, “New York is the gateway to America.” In architecture, gateways are passages or entrances that people enter by opening a gate.
Doorways surround doors. Likewise, gateways surround gates.
Gateways are also mechanisms or agencies that provide access to information, systems, or other agencies.
When we say “Hard work and dedication are the gateways to success,” the term means ‘ways of achieving something.’
If I say “My local bank is my gateway to a wide range of financial services,” what does the term mean? It means something within a system that allows me to use other parts of it.
“A gateway is a network node that connects two networks using different protocols together.”
“While a bridge is used to join two similar types of networks, a gateway is used to join two dissimilar networks.”
A gateway – telecommunications
Gateways, in telecommunications, are pieces of networking hardware that interface with other networks. Specifically, they interface with other networks that use different protocols.
Telecommunications, or telecom, means communicating over long distances, usually using hi-tech equipment.
Gateways may contain rate converters, fault isolators, signal translators, impedance matching devices, or protocol translators. The devices provide what is necessary for system interoperability.
Protocol translation/mapping gateways allow different networks to communicate. They perform the necessary protocol conversions. Hence, we also call them protocol converters.
Their activities are more complex than those of switches or routers because they communicate using two or more protocols.
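As a toy illustration of what protocol conversion involves, the sketch below rewrites a message from a made-up key=value wire format into a simple JSON-style string; both formats and the field names are invented for the example, and a real gateway would of course handle far more than text formatting:

```java
// Toy protocol converter: rewrites a made-up "key=value;key=value" wire format
// into a simple JSON-style string. Both formats are invented purely to
// illustrate the idea of a gateway converting between protocols.
public class ToyProtocolGateway {
    static String toJson(String legacyMessage) {
        StringBuilder json = new StringBuilder("{");
        String[] fields = legacyMessage.split(";");
        for (int i = 0; i < fields.length; i++) {
            String[] kv = fields[i].split("=", 2);
            json.append('"').append(kv[0]).append("\":\"").append(kv[1]).append('"');
            if (i < fields.length - 1) {
                json.append(',');
            }
        }
        return json.append('}').toString();
    }

    public static void main(String[] args) {
        String legacy = "temp=21.5;unit=C;sensor=hall";
        System.out.println(toJson(legacy)); // {"temp":"21.5","unit":"C","sensor":"hall"}
    }
}
```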
B2B Gateways integrate data from back-end systems. They enable information exchange across trading entities or partners. B2B stands for business-to-business, i.e., companies doing business with other companies rather than individual consumers.
Payment gateways provide the means of authorizing electronic payments, which the payer performs either online or offline.
Payment gateways claim to protect sensitive personal data, such as credit card and PIN numbers. In other words, they protect the data while it is transmitted to payment processors and merchants.
When you pay for something in a shop using your credit card, you place your card in or on a device. We call these devices payment gateways.
Business Gateway – Scotland
In Scotland, the Business Gateway is a service that contributes to the economic well-being of the country. The publicly-funded service provides access to free business support services.
According to Business Gateway:
“We give assistance and impartial advice to people starting or growing their business.” |
While much of African American historical research and interpretation regarding the 19th and early 20th centuries “Back to Africa” movement has focused largely on the efforts of the American Colonization Society or the Pan-Africanism effort of Marcus Garvey in the 1920’s, few would recognize the earliest black sponsored and organized efforts to return African people of diaspora back to their ancestral lands originated in 1780 in Newport, Rhode Island.
The American Colonization Society is history's most prominent organization to support the return of free African Americans to what was considered greater freedom in Africa. The Society also helped to found the colony of Liberia in 1821–22 during the presidency of James Monroe; the colony's capital, Monrovia, is named in his honor. But the white political and class elite of early America had less interest in returning Africans to the place from which they had been illicitly taken than in reducing the real and perceived threat posed by the fast-growing population of free African Americans. In the minds of most whites during the early part of the 19th century, free African Americans would either compete for jobs with newly arriving immigrants in the cities of the North or incite insurrection among the slaves on the plantations of the South. The conventional reasoning of the time held that, by returning free African Americans to Africa, America would remove the growing economic threat along with the moral guilt associated with former slaves.
In 1780, in Newport, Rhode Island, then one of America's most active slave ports, a group of enslaved and free African men came together to organize and charter the Free African Union Society. As the first African mutual aid society in America, the Society had a lofty mission that included monetary support for African families in financial need, a burial society to ensure proper interments, education of its youth, setting moral and religious standards for its members in the larger community and, most importantly, raising consciousness and funds within the African community to someday return to Africa. The most significant difference between the back-to-Africa efforts of the black-led African Union Society and those of the white-led American Colonization Society was this: African Americans believed their socioeconomic futures would best be realized by returning to their ancestral homes, while whites believed the socioeconomic future of America would best be realized by removing free Africans from the new and expanding nation.
On January 4, 1826, Occramar Marycoo (also known as Newport Gardner) and Salmar Nubia, two of the original founders of the African Union Society, then at the advanced ages of eighty and seventy, sailed from Boston harbor on the brig Vine with twenty-two additional Africans from Newport to establish a colony in Liberia. They arrived on February 6 to great welcome and fanfare, but much of the party succumbed to fever and died within a year. Triumphantly, they had not died in a land where men were held as slaves. They died free in their own land. |
Popular Science Monthly/Volume 43/May 1893/Growth of our Knowledge of the Deep Sea
GROWTH OF OUR KNOWLEDGE OF THE DEEP SEA.
CHIEF OF THE DIVISION OF CHART CONSTRUCTION, UNITED STATES HYDROGRAPHIC OFFICE.
BEFORE the time of the project for the Atlantic telegraph cable in 1854, there seemed to be no practical value attached to a knowledge of the depths of the sea, and, beyond a few doubtful results obtained for purely scientific purposes, nothing was clearly known of bathymetry, or of the geology of the sea bottom. The advent of submarine cables gave rise to the necessity for an accurate knowledge of the bed of the ocean where they were laid, and lent a stimulus to all forms of deep-sea investigation. But although our extensive and accurate knowledge of the deep sea is of so late an origin, the beginnings of deep-sea research date far back into antiquity. The ancients can not be said to have had any definite conceptions of the deep sea. Experienced mariners, like the Phœnicians and Carthaginians, must necessarily have possessed some knowledge of the depths of the waters with which they were familiar, but this knowledge, whatever its extent, has now passed away. To the writings of Aristotle, who lived during the fourth century b. c., are credited the first bathymetric data. He states that the Black Sea has whirlpools so deep that the lead has never reached the bottom; that the Black Sea is deeper than the Sea of Azov, that the Ægean is deeper than the Black Sea, and that the Tyrrhenian and Sardinian Seas are deeper than all the others. The first record of a deep-sea sounding should be credited to Posidonius, who stated, about a century b. c., that the sea about Sardinia had been sounded to a depth of one thousand fathoms. No account is given of the manner in which the sounding was taken, and we have no information as to the methods employed by the ancients in these bathymetric measurements.
The opinions of the learned with respect to the greatest depth of the sea, in the first and second centuries a. d., may be gleaned from the writings of Plutarch and Cleomedes, the first of whom says, "The geometers think that no mountain exceeds ten stadia [about one geographic mile] in height, and no sea ten stadia in depth." And the second: "Those who doubt the sphericity of the earth on account of the hollows of the sea and the elevation of the mountains, are mistaken. There does not, in fact, exist a mountain higher than fifteen stadia, and that is also the depth of the ocean."
There was no important addition to our knowledge of the deep sea during the middle ages, and no definite attempt to provide effective means for deep-sea sounding appears to have been made until Nicolaus Causanus, who lived in the first half of the fifteenth century, invented an apparatus consisting of a hollow sphere, to which a weight was attached by means of a hook, intended to carry the sphere down through the water with a certain velocity. On touching the ground the weight became detached and the sphere ascended alone. The depth was calculated from the time the sphere was under water. This apparatus was afterward modified by Plücher and Alberti, and, in the seventeenth century, by Hooke, who substituted a piece of light wood well varnished over for the hollow sphere. Hooke's instrument was no doubt fairly accurate in shallow water, but useless in great depths, where the enormous pressure waterlogged the wood and, by materially increasing its density, greatly diminished the speed with which it rose from the bottom. When used in currents the float was carried away and the record lost.
During the period when the voyages of Columbus, Vasco da Gama, and Magellan added a hemisphere to the chart of the world and forever established the fundamental principles of all scientific geography, navigators had sounding lines of one hundred and two hundred fathoms in length, and, although they eagerly studied the oceanic phenomena revealed at the surface, the deep sea did not engage their attention. Kircher, in his Mundus Subterraneus, gives the ideas as to the depths of the sea that were accepted in the first half of the seventeenth century, stating that "in the same manner as the highest mountains are grouped in the center of the land, so also should the greatest depths be found in the middle of the largest oceans; near the coasts with but slight elevations the depth will gradually diminish toward the shore. I say coasts with but slight elevations, for, if the shores are surrounded by high rocks, then greater depths are found. This is proved by experience on the shores of Norway, Iceland, and the islands of Flanders."
Several soundings were taken in deep water during the eighteenth century, but they were not of much value. The first at all reliable were made by Sir John Ross during his well-known arctic expedition in 1818. He brought up six pounds of mud from 1,050 fathoms in Baffin Bay, and obtained correct soundings in 1,000 fathoms in Possession Bay, finding worms and other animals in the mud procured. Sir James Clark Ross, during his antarctic expedition from 1839 to 1843, obtained satisfactory soundings of 2,425 and 2,677 fathoms in the South Atlantic, with a hempen cord. He also dredged successfully in depths of 400 fathoms.
Meanwhile, about the middle of the eighteenth century, the first definite ideas about the formation of the bottom soil began to be advanced, although there had been speculations on the formation of alluvial layers since the time of Herodotus. In 1725 Marsilli made a few observations on the bathymetric knowledge then possessed concerning the nature of the bottom of the sea. He admitted that the basin of the sea was excavated "at the time of the creation out of the same stone which we see in the strata of the earth, with the same interstices of clay to bind them together," and pointed out that we should not judge of the nature of the bottom of the basins by the materials which seamen bring up in their soundings. The dredgings almost always indicate a muddy bottom, and very rarely a rocky one, because the latter is covered with slime, sand, and sandy, earthy, and calcareous concretions, and organic matter. These substances, he said, conceal the real bottom of the sea, and have been brought there by the action of the water. Lastly, by way of explanation, he compared the bed of the sea to the inside of an old wine cask, which seems to be made of dregs of tartar although it is really of wood.
Donati's studies on the bottom of the Adriatic Sea led him to announce, about the middle of the eighteenth century, that it is hardly different from the surface of the land, and is but a prolongation of the superposed strata in the neighboring continent, the strata themselves being in the same order. The bottom of this sea is, according to him, covered with a layer formed by crustaceans, testaceans, and polyps, mixed with sand, and to a great extent petrified. This crust may be seven or eight feet deep, and he attributed to this deposit, bound together with the remains of organisms and sedimentary mineral matter, the rising of the bottom of the sea, and the encroachment of the water on the coasts.
In 1836 Ehrenberg produced the first of a long series of publications relating to microscopic organisms which distinguished him as a naturalist of rare sagacity. He devoted the whole of his life to the study of microscopic organisms, to the examination of materials brought up from deep-sea soundings, and to all questions appertaining to the sea. Having discovered that the siliceous strata known as tripoli, found in various parts of the globe, are but accumulations of the skeletons of diatoms, sponges, and radiolaria, and having found living diatoms and radiolaria on the surface of the Baltic of the same species as those found in the Tertiary deposits of Sicily, and having shown that in the diatom layers of Bilin in Bohemia the siliceous deposit had, under the influence of infiltrated water, been transformed into compact opaline masses, he concluded that rocks like those which play so important a part in the terrestrial crust are still being formed on the bottom of the sea.
The investigation of the distribution of marine animals according to the depths of the sea may be said to have commenced in 1840 with Forbes's studies in the Mediterranean. He maintained that the dredgings showed the existence of distinct regions at successive depths, having each a special association of species; and remarks that the species found at the greatest depths are also found on the coast of England—concluding, therefore, that such species have a wider geographical distribution. He divided the whole range of depth occupied by marine animals into eight zones, in which animal life gradually diminished with increase of depth, until a zero was reached at about three hundred fathoms. He also supposed that plants, like animals, disappeared at a certain depth, the zero of vegetable life being at a less depth than that of animal life.
It has already been mentioned that probably the first reliable deep-sea soundings ever made were by Sir John Ross in 1818. To him is due the invention of the so-called deep-sea clam, by means of which specimens of the bottom were for the first time brought up from great depths in any quantity. This instrument was in the form of a pair of spoon-forceps, kept apart while descending, but closed by a falling weight on striking the bottom. Two separate casts were usually made, one to ascertain the depth and the other to bring up a specimen of the bottom soil.
For the development of accurate knowledge of the depths of the sea the world will ever be indebted to the genius of Midshipman Brooke, of the United States Navy, who made the first great improvement in deep-sea sounding in 1854 by inventing a machine in which, applying Causanus's idea of disengaging a weight attached to the sounding line, the sinker was detached on striking the bottom and left behind when the tube was drawn up. The arrangement of the parts is shown in the accompanying figure. When the tube B strikes the bottom, the lines A A slack and allow the arms C C to be pulled down by the weight D. When these arms have reached the positions indicated by the dotted lines, the slings supporting the weight have slipped off, and the tube can be hauled up, bringing within it a specimen of the bottom. This implement has been improved from time to time by various officers of our own and foreign navies by changing the manner of slinging and detaching the sinker, and by adding valves to the upper and lower ends of the tube to prevent the specimen from being washed out during the rapid ascent which has been rendered possible by the use of wire sounding line and steam hoisting engines; but in all the essential features it is the same as the most successful modern sounding apparatus. The impulse given to deep-sea sounding by Brooke was seconded by the successful adaptation of pianoforte wire to use as a sounding line, in 1872, by Sir William Thomson; and within recent years soundings have been taken far and wide in all the seas by national vessels during their cruises, by vessels engaged in laying submarine cables, and by various specially organized expeditions, among which that known as the Challenger Expedition, sent out by the Government of Great Britain during the period from 1873 to 1876, stands pre-eminent. As a result of this work many of the questions which perplexed the naturalists of the middle of the present century have now been cleared away.
Many of the specimens of the bottom that were brought up in the early days of deep-sea sounding were studied through the microscopes of Ehrenberg, of Berlin, and Bailey, of West Point. Maury, who believed that there are no currents and no life at the bottom of the sea, wrote: "They all tell the same story. They teach us that the quiet of the grave reigns everywhere in the profound depths of the ocean; that the repose there is beyond the reach of wind; it is so perfect that none of the powers of earth, save only the earthquake and volcano can disturb it. The specimens of deep-sea soundings are as pure and as free from the sand of the sea as the snowflake that falls when it is calm upon the lea is from the dust of the earth. Indeed, these soundings suggest the idea that the sea, like the snow cloud with its flakes in a calm, is always letting fall upon its bed showers of these microscopic shells; and we may readily imagine that the 'sunless wrecks' which strew its bottom are, in the process of ages, hid under this fleecy covering, presenting the rounded appearance which is seen over the body of a traveler who has perished in the snowstorm. The ocean, especially within and near the tropics, swarms with life. The remains of its myriads of moving things are conveyed by currents, and scattered and lodged in the course of time all over its bottom. The process, continued for ages, has covered the depths of the ocean as with a mantle, consisting of organisms as delicate as the macled frost and as light as the undrifted snowflake of the mountain."
Maury was right in respect to the covering of the bed of the deep sea, for, as a result of all our researches, it is found that in waters removed from the land and more than fourteen hundred fathoms in depth there is an almost unbroken layer of pteropod, globigerina, diatom, and radiolarian oozes, and red clay which occupies nearly 115,000,000 of the 143,000,000 square miles of the water surface of the globe. But he was wrong in asserting that low temperature, pressure, and the absence of light preclude the possibility of life in very deep water.
Ehrenberg held the opposite opinion with regard to the conditions of life at the bottom of the sea, as may be seen from the following extract from a letter which he wrote to Maury in 1857: "The other argument for life in the deep which I have established is the surprising quantity of new forms which are wanting in other parts of the sea. If the bottom were nothing but the sediment of the troubled sea, like the fall of snow in the air, and if the biolithic curves of the bottom were nothing else than the product of the currents of the sea which heap up the flakes, similarly to the glaciers, there would necessarily be much less of unknown and peculiar forms in the depths. The surface and the borders of the sea are much more productive and much more extended than the depths; hence the forms peculiar to the depths should not be perceived. The great quantity of peculiar forms and of soft bodies existing in the innumerable carapaces, accompanied by the observation of the number of unknowns, increasing with the depth—these are the arguments which seem to me to hold firmly to the opinion of stationary life at the bottom of the deep sea."
It would appear to have been definitely established by the researches of the last fifty years that life in some of its many forms is universally distributed throughout the ocean. Not only in the shallower waters near coasts, but even in the greater depths of all oceans, animal life is exceedingly abundant. A trawling in a depth of over a mile yielded two hundred specimens of animals belonging to seventy-nine species and fifty-five genera. A trawling in a depth of about three miles yielded over fifty specimens belonging to twenty-seven species and twenty-five genera. Even in depths of four miles fishes and animals belonging to all the chief invertebrate groups have been procured, and in a sample of ooze from nearly five miles and a quarter there was evidence to the naturalists of the Challenger that living creatures could exist at that depth.
Recent oceanographic researches have also established beyond doubt that while in great depths the water is not subjected to the influence of superficial movements like waves, tides, and swift currents, there is an extremely slow movement, in striking contrast with the agitation of the surface water. Although the movement at the bottom is so slow that the ordinary means of measuring currents can not be applied accurately to them, the thermometer furnishes an indirect means of ascertaining their existence. Water is a very bad conductor of heat, and consequently a body of water at a given temperature passing into a region where the temperature conditions are different retains for a long time, and without much change, its original temperature. To illustrate: The bottom temperature near Fernando do Noronha, almost under the equator, is 0·2° C., or close upon the freezing point; it is obvious that this temperature was not acquired at the equator, where the mean annual temperature of the surface layer of the water is 21° C, and the mean normal temperature of the crust of the earth not lower than 8° C. The water must therefore have come from a place where the conditions were such as to give it a freezing temperature; and not only must it have come from such a place, but the supply must be continually renewed, however slowly, for otherwise its temperature would gradually rise by conduction and mixture. Across the whole of the North Atlantic the bottom temperature is considerably higher, so that the cold water can not be coming from that direction; on the other hand, we can trace a band of water at a like temperature at nearly the same depth continuously to the Antarctic Sea, where the conditions are normally such as to impart to it this low temperature. There seems, therefore, to be no doubt that there is a current from the antarctic to the equator along the bottom of the South Atlantic.
From the millions of reliable deep-sea soundings that have been made during the last forty years the more general features of the bathymetric chart of the world have been firmly established; and the ancient idea, derived chiefly from a supposed
About file formats
Files are self-contained objects on a computer that store information. Several different file types serve a variety of purposes. Some store information about the operating system and user settings, while others contain programs, written documents, graphics, or sound. A particular file format is often indicated as part of a file's name by a file name extension (suffix). Conventionally, the extension is separated from the name by a period and contains three or four letters that identify the format.
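To make the convention concrete, here is a minimal Python sketch (the file names are invented for illustration) that pulls the extension out of a file name:

```python
from pathlib import Path

# Hypothetical file names used only to illustrate the extension convention.
for name in ["report.docx", "photo.jpeg", "archive.tar.gz", "README"]:
    suffix = Path(name).suffix  # text after the last period, e.g. ".docx"
    print(f"{name!r}: extension {suffix or '(none)'}")
```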
What Is Interstitial Nephritis?
Interstitial nephritis is a disorder of the kidneys that results when the spaces between the kidney tubules become swollen and inflamed. These spaces are also known as the interstitium. The tubules are the structures of the kidney responsible for filtering fluid.
The chronic form of interstitial nephritis seriously affects the way your kidney works. The kidney is responsible for filtering waste and excess fluid from the blood and excreting it into the urine.
Treatments for interstitial nephritis include adjusting troublesome medication, alleviating underlying causes, steroids, and dialysis.
Interstitial Nephritis Symptoms
Signs and symptoms of interstitial nephritis are not always obvious because the damage progresses slowly and over time. Signs and symptoms include:
- Loss of appetite
- Fatigue and weakness
- Sleep issues
- Persistent itching
- Urinary changes: The frequency and quantity of your urine may change.
- Muscle twitches and cramps: These may occur due to a buildup of electrolytes that the kidney is not able to filter.
- Swelling of feet and ankles (edema): This may occur due to fluid buildup due to the kidneys' inability to properly excrete excess fluid.
Chronic interstitial nephritis can result in complications of other organs and other conditions due to the kidney's importance in maintaining homeostasis (balance) within the body. This can result in the following due to impaired fluid regulation and other functions.
- Chest pain: If fluid builds up around the lining of the heart (pericarditis).
- Shortness of breath: If fluid builds up around the lungs (pulmonary edema).
- High blood pressure: This is due to excess fluid throughout the body in general which the heart has to pump against (hypertension).
- Electrolyte abnormalities: Since the kidney's ability to filter and excrete electrolytes is impaired, a rise in levels of electrolytes such as potassium (hyperkalemia) can result in life-threatening problems.
- Chronic kidney disease: Interstitial nephritis can often affect the kidney to the point where its function is significantly impaired; this condition is known as chronic kidney disease.
Interstitial Nephritis Causes
Anything that causes prolonged inflammation and damage to the spaces surrounding the kidney tubules can result in interstitial nephritis and even chronic kidney disease. The causes of kidney inflammation are varied, but can be divided into the following categories:
- Medications: Some medications, particularly certain antibiotics and nonsteroidal anti-inflammatory drugs (NSAIDs), can be particularly toxic and damaging to the kidney. Persistent use of these drugs to treat other conditions can adversely affect the kidney and cause chronic inflammation, leading to chronic interstitial nephritis and chronic kidney disease.
- Infections: Bacteria that invade and infect the kidney cause a type of infection known as pyelonephritis. Pyelonephritis that is not treated or treated improperly can result in chronic inflammation of the kidney interstitium.
- Autoimmune conditions: Many inflammatory diseases that result in the body attacking itself can also affect the kidney and cause injury that results in interstitial nephritis. Conditions such as multiple sclerosis (MS) and lupus (SLE) are examples of such autoimmune diseases.
Treatment Options and Prevention for Interstitial Nephritis
Chronic interstitial nephritis does not have a cure. Treatment focuses on addressing and treating the underlying cause of the inflammation, alleviating symptoms, and helping the kidney function as best as possible with medical management. Options for treatment include:
Medication changes may help alleviate symptoms of interstitial nephritis, including the following.
- Discontinue certain drugs: If the inflammation is caused by a certain drug or class of drugs, your physician will discuss discontinuing the drug and using possible alternatives.
- Steroids: These are often used in the treatment of multiple autoimmune diseases and may alleviate some of the inflammation.
If your kidney function is significantly impaired to the point that it cannot maintain waste and fluid clearance on its own, your physician may suggest dialysis. Dialysis is a system that artificially removes waste products and extra fluid from your blood when your kidneys can no longer perform this function. There are two types, hemodialysis and peritoneal dialysis:
- Hemodialysis: A machine filters the waste and excess fluids from the blood.
- Peritoneal dialysis: A thin tube (catheter) inserted into your abdomen fills your abdominal cavity with a dialysis solution that absorbs waste and excess fluids and drains them from the body, carrying the waste outside of the body.
Your physician may also suggest a kidney transplant if your kidney function is significantly impaired. A kidney transplant involves surgically replacing your defective kidney with a healthy kidney from a donor. Transplanted kidneys can come from deceased or living donors. You'll need to take medications for the rest of your life, called immunosuppressants, to keep your body from rejecting the new kidney.
There are many things you can start doing at home to help control and alleviate some of your symptoms. These changes will not cure you of your chronic interstitial nephritis but may help slow the progression of the disease.
- Follow instructions on over-the-counter medications: Since medications are a primary trigger for interstitial nephritis, make sure to follow instructions on nonprescription pain-relievers such as aspirin or ibuprofen (NSAIDs). Taking too many at once can directly cause kidney damage. Ask your physician if these medications are safe for you especially if you have decreased kidney function.
- Watch your weight: Try to be physically active most days of the week and maintain a healthy weight. If you need to lose weight, talk with your physician about strategies for healthy weight loss. Often this involves increasing daily physical activity and reducing calories.
- Avoid smoking: Cigarette smoke can seriously damage your kidneys and only makes chronic kidney disease worse. If you are finding it hard to quit, talk to your physician about different strategies, or look into support groups and counseling that may help you quit.
As part of your treatment for chronic interstitial nephritis, your physician may recommend a special diet to help support your kidneys. Your physician may refer you to a dietitian who can provide a diet plan or suggestions for your current routine that may help your kidneys.
Depending on your situation, kidney function and overall health, your dietitian may recommend the following.
- Avoiding products with added salt: This includes many convenience foods, such as frozen dinners, canned soups, chips, and fast foods. Other foods with added salt include salty snack foods, canned vegetables, and processed meats and cheeses.
- Choosing foods lower in potassium: Your dietitian may recommend that you choose lower potassium foods at each meal. High-potassium foods include bananas, oranges, potatoes, spinach and tomatoes. Examples of low-potassium foods include apples, cabbage, carrots, green beans, grapes and strawberries.
- Limiting the amount of protein you consume: Your dietitian will estimate the appropriate number of grams of protein you need each day and make recommendations based on that amount. High-protein foods include lean meats, eggs, milk, cheese and beans. Low-protein foods include vegetables, fruits, bread and cereals.
When to Seek Further Consultation for Interstitial Nephritis
Hyperkalemia is a serious complication of chronic interstitial nephritis. If you experience the following symptoms all around the same time, go to the emergency room in order to get the appropriate blood tests and treatments:
- Muscle fatigue
- Abnormal heart rhythms
Questions Your Doctor May Ask to Determine Interstitial Nephritis
To diagnose this condition, your doctor would likely ask about the following symptoms and risk factors.
- Have you experienced any nausea?
- Are you sick enough to consider going to the emergency room right now?
- Has your fever gotten better or worse?
- Is your fever constant, or does it come and go?
- How severe is your fever?
- Muriithi AK, Leung N, Valeri AM, et al. Clinical characteristics, causes and outcomes of acute interstitial nephritis in the elderly. Kidney International. Published September 3, 2014. Kidney International Link
- Lee JW. Fluid and electrolyte disturbances in critically ill patients. Electrolytes & Blood Pressure. 2010;8(2):72-81. NCBI Link
- Thomas R, Kanso A, Sedor JR. Chronic kidney disease and its complications. Primary Care: Clinics in Office Practice. 2008;35(2):329-vii. NCBI Link
- Ejaz P, Bhojani K, Joshi VR. NSAIDs and kidney. Journal of the Association of Physicians of India. 2004;52:632-640. NCBI Link
- Types of dialysis. Stanford Health Care. Stanford Health Care Link
- Staying fit with kidney disease. National Kidney Foundation. National Kidney Foundation Link
- How to quit smoking. Centers for Disease Control and Prevention. Updated May 23, 2018. CDC Link
- Kidney-friendly diet for CKD. American Kidney Fund. American Kidney Fund Link
Learning Pronouns and Adverbs in Italian
Modern English has five interrogative pronouns–what, which, who, whose, and whom–that are used to facilitate the asking of a question and that replace a noun in a sentence. Sentences using interrogative pronouns are intended to elicit more than just a “yes” or “no” answer. These pronouns can refer to places and things and can take on the suffixes -soever or -ever.
In Italian interrogative pronouns, or pronomi interrogativi, are also used to introduce a question or interrogative sentence. Quale and quali (which one or ones), che and che cosa (what–as it applies to a thing), chi (who or whom), and quanto or quanti (how much or how many), are examples of Italian interrogative pronouns. Below are some samples of sentences containing interrogative pronouns:
Che vuoi? = What do you want?
Che cosa è questo? = What is this?
Chi è tuo marito? = Who is your husband?
Con chi stai parlando al telefono? = With whom are you talking on the phone?
Quale vestito da sposa indosserai? = Which bridal gown will you wear?
Quante persone vengono alla festa? = How many people are coming to the party?
Quanti anni hai? = How old are you?
In Italian a question never ends with a preposition. Most of the question words are invariable, meaning that they do not have to agree with the gender or number of the noun; however, quale (which) and quanto/quanti/quanta/quante (how much or how many) must agree. Before singular nouns we use quale and before plural nouns we use quali. For example:
Quale macchina compri? = Which car are you buying? – The sentence is referring to (1) car
Quali libri leggere? = Which books to read? – The sentence is referring to books, plural
For quante/quanti the pronoun must match not only the quantity, but whether the noun is masculine or feminine.
Quanto = masculine singular
Quanta = feminine singular
Quanti = masculine plural
Quante = feminine plural
Quanto vino posso bere? = How much wine can I drink?
Quanta pasta mangerà Maria? = How much pasta will Maria eat?
Quanti soldi stai guadagnando? = How much money are you earning?
Quante paste mangerai? = How many pastries will you eat?
Quante bugie! = How many lies!
Used in the same way as interrogative pronouns, interrogative adverbs enable the construction of sentences that refer to verbs or actions rather than nouns. English interrogative adverbs include why, where, how, and when. The Italian interrogative adverbs, or avverbi interrogativi, are come, quando, perché, and dove.
Come? = How (the manner in which the verb is being acted out)
Quando? = When (the verb’s location in time)
Perché? = Why (the purpose, the cause, or reason for the verb)
Dove? = Where (the verb’s location in space)
Come posso dimagrire? = How can I lose weight?
Quando è il tuo compleanno? = When is your birthday?
Perché mangi così tanto? = Why do you eat so much?
Dove è Maria? = Where is Maria?
Insert the correct interrogative pronoun
- 1) Chi è quella ragazza con il vestito blu? = _______ is the girl in the blue dress?
- 2) Qual’è il nome della squadra rossa? = _______ is the name of the team in red?
- 3) A chi piace giocare a calcio? = _______ likes to play football?
- 4) Per chi Maria ha fatto questi biscotti? = For _______ did Maria make these cookies?
- 5) Quali nomi hai scelto per il tuo gatto? = _______names did you choose for your cat?
- 6) Chi hai visto ieri sera? = _______did you see last night?
- 7) A chi dovrebbe essere pagato l’assegno? = To_______should the check be made payable?
By Elisa Bressan
What is osteoporosis?
Osteoporosis is a health condition that weakens bones, making them fragile and more likely to break. The condition, which affects over 3 million people in the UK, develops slowly over several years and is often only diagnosed when a fall or sudden impact causes a bone to break (fracture).
Why is physical activity important for osteoporosis?
If you're looking to reduce your risk of developing osteoporosis or manage your current condition, regular physical activity is essential.
Keeping active helps to improve your overall cardiovascular fitness, strength, balance and bone density, which reduces your risk of developing the condition. If you have already been diagnosed with osteoporosis and weak bones, physical activity is extremely important because keeping active is one of the best ways you can reduce your risk of falls and fractures. It can also help to reduce pain.
If you have been diagnosed with osteoporosis, you may be fearful of taking part in exercise. But if you stop moving, you'll slowly lose strength and balance which will make you even more prone to falls and bone breaks.
How much physical activity should you be doing?
In order to reduce your risk of developing osteoporosis or manage your current condition, you should aim to take part in the recommended amount of physical activity for your age group, as outlined in the UK Chief Medical Officer's Physical Activity Guidelines.
For adults aged 19 and over, the recommended amount is at least 150 minutes of moderate intensity physical activity per week. Where possible, this should be a combination of cardiovascular, strength, flexibility and balance exercises. These could include:
- Cardiovascular activities - brisk walking, cycling, swimming, dancing
- Strength activities - resistance training, Yoga, Nordic Walking, carrying heavy shopping, heavy gardening
- Balance/mobility - Yoga, Pilates, Tai Chi, body balance classes
Weight-bearing exercises and resistance exercises are particularly important for helping to prevent osteoporosis, as both types of exercise help to improve bone density.
- Weight-bearing exercises - weight-bearing exercises are exercises where your feet and legs support your weight. Examples of these include walking, running, skipping, dancing, aerobics and even jumping up and down on the spot. These are all extremely useful ways to strengthen your muscles, ligaments and joints.
- Resistance exercises - resistance exercises are exercises that use muscle strength. Examples of these include press-ups, weightlifting or using weight equipment at a gym.
If you have already been diagnosed with osteoporosis, you may need to avoid some types of high-impact exercises (e.g. running and jumping) as these could increase your risk of fracture. Instead, you should aim to take part in activities that help reduce your risk of falls and fractures. Recommended exercises for osteoporosis include:
- Tai Chi
- Flexibility exercises
- Chair-based exercises
- Low-impact dancing
- Low-impact aerobics
- Cross-training machines
You may also be able to find an exercise referral scheme in your area that caters specifically for people with osteoporosis.
By convention historians place the early modern period between c.1500 and c.1800 following the Late Middle Ages. The beginning of this period is marked by the initial process of colonization, the emergence of centralized governments and the burgeoning of present-day recognized interstate systems.
The Muslim empires extended across the north and east of Africa, while western Africa remained predominantly in the hands of its native peoples. The southeast Indian empires were pivotal in providing for the spice trade. At this time the great Mughal Empire ruled the major part of the Indian subcontinent, and the Sultanate of Malacca, one of the Islamic empires, governed the southern part of the Indian subcontinent.
The Asian part of the globe was under the rule of the Chinese dynasties and the Japanese despots. The Edo period in Japan, from 1600 to 1868, is considered the early modern period of the region, which in Korea runs from the founding of the Joseon Dynasty to the accession of King Gojong. By this time in the Americas, the Native Americans had developed enormous civilizations such as the Aztec Empire, the Chibcha Confederation, the Inca civilization, and the Mayan Empire and cities. Europe, on the other hand, was undergoing reform movements while its empires expanded. By 1647 the Russians had reached the Pacific coast and eventually established their control over the Russian Far East through the 19th century.
Many years later, the religious fervor of the Muslim regimes began to dissipate and the expansion of the Muslim dynasties came to an end. Christendom, on the other hand, witnessed a factional disintegration of the Roman Catholic faith as the crusades ended. The Protestant Reformation found room to burgeon.
Over the course of the early modern period, the Western European nations (the United Kingdom, France, Spain, the Netherlands and Portugal) flourished in the Age of Discovery and trade. They pressed on with their expansionist and colonizing designs in the northern and southern parts of America, while Turkey did the same in Southeastern Europe and parts of North Africa and the Middle East. The Russians seized control in Eastern Europe, North America and Asia.
Seeing 3D from 2D Images
How to make a 2D image appear as 3D! ► Output and input are typically 2D images ► Yet we want to show a 3D world! ► How can we do this? We can include ‘cues’ in the image that give our brain 3D information about the scene. These cues are visual depth cues.
Visual Depth Cues ► Cues about the 3rd dimension – a total of 10 ► Monoscopic Depth Cues (single 2D image) ► Stereoscopic Depth Cues (two 2D images) ► Motion Depth Cues (series of 2D images) ► Physiological Depth Cues (body cues) – hold a finger up
Monoscopic Depth Cues ► Interposition: an occluding object is closer ► Shading: shape and shadows ► Size: the larger object is closer ► Linear Perspective: parallel lines converge at a single point ► Height in the visual field: the higher the object is (vertically), the further away it is ► Surface Texture Gradient: more detail for closer objects ► Atmospheric effects: further away objects are blurrier and dimmer
Monoscopic Depth Cues ► Interposition: an object that occludes another is closer ► Shading: shape information; shadows are included here ► Size: usually, the larger object is closer ► Linear Perspective: parallel lines converge at a single point ► Surface Texture Gradient: more detail for closer objects ► Height in the visual field: the higher the object is (vertically), the further away it is ► Atmospheric effects: further away objects are blurrier ► Brightness: further away objects are dimmer
Stereoscopic Display Issues ► Stereopsis ► Stereoscopic Display Technology ► Computing Stereoscopic Images ► Stereoscopic Display and HTDs. ► Works for objects < 5m. Why?
Stereopsis The result of the two slightly different views of the world that our laterally-displaced eyes receive.
Retinal Disparity If both eyes are fixated on a point, f1, in space: the image of f1 is focused at corresponding points in the center of the fovea of each eye. A second point, f2, would be imaged at points in each eye that may be at different distances from the fovea. This difference in distance is the retinal disparity.
Retinal Disparity ► If an object is farther than the fixation point, the retinal disparity will be: a positive value (uncrossed disparity); the eyes must uncross to fixate the farther object. ► If an object is closer than the fixation point, the retinal disparity will be: negative (crossed disparity); the eyes must cross to fixate the closer object. ► An object located at the fixation point, or whose image falls on corresponding points in the two retinae, has: zero disparity (in focus). ► Question: What does this mean for rendering systems? [Figure: left and right eyes viewing points f1 and f2, with the retinal disparity marked.]
Convergence Angles [Figure: the two eyes, separated by the interocular distance i, fixating points f1 and f2 at distances D1 and D2, with angles a, b, c, d and convergence angles θ1 and θ2.] From the figure's geometry, θ1 + a + b + c + d = 180° and θ2 + c + d = 180°, which gives θ2 − θ1 = a + b; this difference between the two convergence angles is the retinal disparity.
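As a rough numerical illustration (not from the slides; the distances are made up), the convergence angle for a point at distance D with interocular distance i is about 2·atan(i/(2D)), and the retinal disparity between two points is the difference of their convergence angles:

```python
import math

def convergence_angle_deg(i, D):
    """Angle (degrees) between the two lines of sight for a point at distance D."""
    return math.degrees(2 * math.atan(i / (2 * D)))

i = 0.065           # interocular distance in meters (typical ~6.5 cm)
D1, D2 = 0.5, 2.0   # hypothetical distances to fixation point f1 and point f2

theta1 = convergence_angle_deg(i, D1)
theta2 = convergence_angle_deg(i, D2)
print(f"theta1 = {theta1:.2f} deg, theta2 = {theta2:.2f} deg")
print(f"retinal disparity ~ {abs(theta1 - theta2):.2f} deg")
```

The angles shrink quickly with distance, which is one way to see why stereopsis only works well for objects closer than a few meters.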
Miscellaneous Eye Facts ► Stereoacuity - the smallest depth that can be detected based on retinal disparity. ► Visual Direction - Perceived spatial location of an object relative to an observer.
Horopters ► Map out what points would appear at the same retinal disparity. ► Horopter - the locus of points in space that fall on corresponding points in the two retinae when the two eyes binocularly fixate on a given point in space (zero disparity). ► Points on the horopter appear at the same depth as the fixation point (so stereopsis can’t be used to separate them in depth). ► What is the shape of a horopter? The Vieth-Mueller Circle. [Figure: the circle through the fixation point f1, a second point f2, and the two eyes.]
Stereoscopic Display ► Stereoscopic images are easy to do badly, hard to do well, and impossible to do correctly.
Stereoscopic Displays ► Stereoscopic display systems present each eye with a slightly different view of a scene. Time-parallel – 2 images at the same time. Time-multiplexed – 2 images one right after another.
Time Parallel Stereoscopic Display Two Screens ► Each eye sees a different screen ► Optical system directs correct view ► HMD stereo Single Screen ► Two different images projected ► Images are polarized at right angles ► User wears polarized glasses
Passive Polarized Projection ► Linear Polarization: ghosting increases when you tilt your head; reduces brightness of the image by about ½; potential problems with multiple screens ► Circular Polarization: reduces ghosting; reduces brightness; reduces crispness
Problem with Linear Polarization ► With linear polarization, the separation of the left and right eye images is dependent on the orientation of the glasses with respect to the projected image. ► The floor image cannot be aligned with both the side screens and the front screens at the same time.
Time Multiplexed Display ► Left and right-eye views of an image are computed ► Alternately displayed on the screen ► A shuttering system occludes the right eye when the left-eye image is being displayed
Stereographics Shutter Glasses
Screen Parallax Pleft – point P's projected screen location as seen by the left eye. Pright – point P's projected screen location as seen by the right eye. Screen parallax – the distance between Pleft and Pright. [Figure: left and right eye positions, the display screen, and objects with positive and negative parallax.]
Screen Parallax (cont.) p = i(D - d)/D, where p is the amount of screen parallax for a point f1 when projected onto a plane a distance d from the plane containing the two eyepoints; i is the interocular distance between the eyepoints; D is the distance from f1 to the nearest point on the plane containing the two eyepoints; and d is the distance from the eyepoint plane to the nearest point on the screen.
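A small sketch of that relation (the numbers are made up): p is zero for a point on the screen plane (D = d), negative for points in front of it, and approaches i as D grows.

```python
i = 0.065   # interocular distance (m)
d = 1.0     # eye-plane-to-screen distance (m)

def screen_parallax(D, i=i, d=d):
    """Screen parallax p = i * (D - d) / D for a point at distance D from the eye plane."""
    return i * (D - d) / D

for D in [0.5, 1.0, 2.0, 10.0, 1e6]:
    p = screen_parallax(D)
    kind = "negative (in front of screen)" if p < 0 else "zero (on screen)" if p == 0 else "positive (behind screen)"
    print(f"D = {D:>9.1f} m -> p = {p*1000:6.1f} mm  ({kind})")
```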
How to create correct left- and right-eye views ► What do you need to specify for most rendering engines? Eyepoint; Look-at Point; Field-of-View or location of Projection Plane; View Up Direction. [Figure: the same screen-parallax diagram as above.]
Basic Perspective Projection Set Up from Viewing Parameters [Figure: X, Y, Z axes with the projection plane.] The projection plane is orthogonal to one of the major axes (usually Z). That axis is along the vector defined by the eyepoint and the look-at point.
What doesn’t work: each view has a different projection plane, yet each view will be presented (usually) on the same plane.
What Does Work [Figure: both views share a single projection plane, with the eyepoints separated by the interocular distance i.]
Setting Up Projection Geometry: aiming both eye locations at a single shared look-at point (toe-in) – No. Giving each eye location its own offset look-at point so the view directions stay parallel – Yes.
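A minimal sketch of the parallel setup the slides favor (the names and numbers are my own, not from the deck): each eye is offset half the interocular distance along the camera's right vector, and its look-at point is offset by the same amount so the two view directions remain parallel.

```python
import numpy as np

def stereo_views(eye_center, look_at, up, interocular):
    """Return (eye, look_at) pairs for left and right views with parallel view directions."""
    forward = look_at - eye_center
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    half = 0.5 * interocular * right
    return {
        "left":  (eye_center - half, look_at - half),
        "right": (eye_center + half, look_at + half),
    }

views = stereo_views(np.array([0.0, 1.6, 0.0]),   # hypothetical head position
                     np.array([0.0, 1.6, -2.0]),  # hypothetical point of interest
                     np.array([0.0, 1.0, 0.0]),   # up direction
                     interocular=0.065)
for name, (eye, target) in views.items():
    print(name, "eye:", eye, "look-at:", target)
```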
Visual Angle Subtended Screen parallax is measured in terms of visual angle. This is a screen independent measure. Studies have shown that the maximum angle that a non-trained person can usually fuse into a 3D image is about 1.6 degrees. This is about 1/2 the maximum amount of retinal disparity you would get for a real scene.
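Combining this with the parallax formula above, here is a quick check (my own numbers) of whether a given screen parallax stays under the roughly 1.6-degree fusion limit quoted in the slides:

```python
import math

def parallax_visual_angle_deg(p, d):
    """Visual angle (degrees) subtended at the viewer by screen parallax p at viewing distance d."""
    return math.degrees(2 * math.atan(abs(p) / (2 * d)))

d = 1.0                        # viewing distance to the screen (m)
for p in [0.01, 0.028, 0.065]: # screen parallax values in meters
    angle = parallax_visual_angle_deg(p, d)
    ok = "fusable" if angle <= 1.6 else "likely too large to fuse"
    print(f"p = {p*1000:5.1f} mm -> {angle:.2f} deg ({ok})")
```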
Accommodation/Convergence [Figure: display screen; the eyes converge on the apparent 3D point while accommodation (focus) stays at the screen.]
Position Dependence (without head-tracking)
Interocular Dependence [Figure: a modeled point F and the resulting perceived point, with the projection plane, the true eyes, and the modeled eyes.]
Obvious Things to Do ► Head tracking ► Measure User’s Interocular Distance
Another Problem ► Many people can not fuse stereoscopic images if you compute the images with proper eye separation! ► Rule of Thumb: Compute with about ½ the real eye separation. ► Works fine with HMDs but causes image stability problems with HTDs (why?)
Two View Points with Head-Tracking [Figure: projection plane, modeled point, perceived points, modeled eyes, and true eyes.]
Ghosting ► Affected by the amount of light transmitted by the LC shutter in its off state. ► Phosphor persistence ► Vertical screen position of the image.
Time-parallel stereoscopic images ► Image quality may also be affected by: right- and left-eye images that do not match in color, size, or vertical alignment; distortion caused by the optical system; resolution; HMD interocular settings; and a computational model that does not match the viewing geometry.
Motion Depth Cues ► Parallax created by relative head position and object being viewed. ► Objects nearer to the eye move a greater distance ► (Play pulfrich video without sunglasses)
Physiological Depth Cues ► Accommodation – the focusing adjustment made by the eye to change the shape of the lens (effective up to about 3 m) ► Convergence – the movement of the eyes to bring an object into the same location on the retina of each eye.
Summary ► Monoscopic – interposition is strongest. ► Stereopsis is very strong. ► Relative motion is also very strong (or stronger). ► Physiological cues are weakest (we don’t even use them in VR!) ► Add cues as needed, e.g., shadows and cartoons
Pulfrich Effect ► Neat trick ► Different levels of illumination require additional processing time (your frame rates differ based on the amount of light) ► What if we darken one image and brighten the other? ► Demo: pulfrich.avi
A bladder infection is a bacterial infection within the bladder. Some people call a bladder infection a urinary tract infection (UTI). This refers to a bacterial infection anywhere in the urinary tract, such as the bladder, kidneys, ureters, or the urethra. While most cases of bladder infection occur suddenly (acute), others may recur over the long term (chronic). Early treatment is key to preventing the spread of the infection.
Bacteria that enter through the urethra and travel into the bladder cause bladder infections. Normally, the body removes the bacteria by flushing them out during urination. Men have added protection from the prostate gland, whose secretions help guard against bacteria. Still, sometimes bacteria can attach to the walls of the bladder and multiply quickly. This overwhelms the body’s ability to destroy them, resulting in a bladder infection.
According to the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), most bladder infections are caused by Escherichia coli (E. coli). This type of bacteria is naturally present in the large intestines. An infection may occur if there are too many bacteria in the body or if they are not eliminated through urination.
Chlamydia and Mycoplasma are other bacteria that can cause infections. However, unlike E. coli, these are typically transmitted only through sexual intercourse, and they can also affect the reproductive organs in addition to your bladder.
Anyone can get bladder infections, but women are more prone to getting them than men. This is because women have shorter urethras, making it easier for bacteria to reach the bladder. Females’ urethras are also located closer to the rectum than men’s urethras. This means there is a shorter distance for bacteria to travel.
Other factors can increase the risk of bladder infections for both men and women. These include:
- advanced age
- insufficient fluid intake
- surgical procedure within the urinary tract
- a urinary catheter
- urinary obstruction, which is a blockage in the bladder or urethra
- urinary tract abnormality, which is caused by birth defects or injuries
- urinary retention, which means difficulty emptying the bladder
- narrowed urethra
- enlarged prostate
- bowel incontinence
While women are overall more prone to bladder infections, men are not completely immune to them. Furthermore, the NIDDK says that bladder infections in men tend to recur after the first infection. This is because bacteria can make their way to tissues within the prostate gland and hide within the tissues.
The symptoms of a bladder infection vary depending on the severity. You’ll immediately notice changes during urination. As the infection progresses, pain also occurs.
Some of the most common symptoms include:
- cloudy or bloody urine
- urinating more often than usual
- foul-smelling urine
- pain or burning when urinating
- a frequent sensation of having to urinate, which is called urgency
- cramping or pressure in the lower abdomen or lower back
Bladder infections can also cause back pain. This pain is associated with pain in the kidneys. Unlike muscular back pain, you might experience pain on both sides of your back or the middle of your back. Such symptoms mean the bladder infection has likely spread to the kidneys. A kidney infection can also cause a low fever.
A doctor can diagnose your bladder infection by performing a urinalysis. This is a test performed on a sample of urine to check for the presence of:
- white blood cells
- red blood cells
- other chemicals that are present in the urine when there is a bladder infection
Your doctor may also perform a urine culture, which is a test to determine the type of bacteria in the urine. Once the type of bacteria is known, testing the bacteria for antibiotic sensitivity is performed to determine what antibiotic will best treat the infection.
Bladder infections are treated with prescription medications to kill the bacteria and relieve pain and burning. Home treatments may also help relieve symptoms and cure the infection.
Oral antibiotics are used to kill the bacteria that are causing the bladder infection. If you’re experiencing pain and burning sensations, your doctor may also prescribe medication to relieve those symptoms. The most common medication for relieving the pain and burning associated with bladder infections is called phenazopyridine (Pyridium).
Plenty of fluids can help flush the bacteria out of your bladder, but water is best. Your doctor may recommend that you take over-the-counter ascorbic acid (vitamin C) or drink cranberry juice to increase the acid levels in your urine, which helps to kill the bacteria. Another benefit of cranberry juice is that it prevents bacteria from sticking to the bladder walls.
Certain lifestyle changes may reduce your chances of getting a bladder infection. If you have been experiencing recurrent bladder infections, your doctor may recommend prophylactic treatment. This consists of antibiotics taken in small daily doses to prevent or control future bladder infections.
The following lifestyle changes may reduce or eliminate the occurrence of bladder infections:
- drink six to eight glasses of water a day, but consult with your doctor about the correct amount of fluid to drink if you have kidney failure
- drink cranberry juice daily
- urinate as soon as you feel the need
- wipe from front to back after urinating if you are female
- don’t use douches, feminine hygiene sprays, or powders
- take showers instead of baths
- wear cotton underwear and loose-fitting clothes
- change your underwear daily
- wear sanitary pads instead of tampons
- avoid using a diaphragm or spermicide and change to an alternate form of birth control
- use nonspermicidal lubricated condoms
- urinate before and after sexual activity
Preventive antibiotic treatment
If you’re a woman experiencing recurrent bladder infections, your doctor may give you a prescription for daily antibiotics to prevent infections or to take when you feel the symptoms of a bladder infection. They may also have you take a single dose of an antibiotic after sexual activity.
Most bladder infections subside within 48 hours of taking the appropriate antibiotic. Some bladder infections spread to the kidneys due to antibiotic-resistant strains of bacteria or other health problems.
Chronic bladder infections require a combination of treatment and more aggressive preventive measures. Long-term daily antibiotics may be necessary in some extreme cases. Being proactive about bladder infections can help reduce their occurrence, as well as the pain that accompanies them. The earlier you seek treatment, the less likely it is that the infection will spread.
Bar charts are one of the most basic charts around -- and have been since the Scottish engineer and political economist William Playfair, the founder of graphical methods of statistics, invented them in 1786. Playfair was a pioneer in the use of graphical displays, and the bar chart made its appearance in The Commercial and Political Atlas.
A bar chart has rectangular bars of the same height with widths that are proportional to the values they represent, with each bar representing a particular category. Their x-axis is quantitative, while their y-axis is categorical. Column charts are the reverse; the width of the rectangles is the same, while the height changes. Their x-axis is categorical, while their y-axis is quantitative. Essentially, in a bar chart, the bars are horizontally oriented, while in a column chart the bars are vertically oriented.
Bar and column charts are most often used to make comparisons between items. The length of each bar is proportional to a specific category.
Use bar charts to compare different groups or to track changes over time. Keep in mind, though, that bar charts work best when the changes are larger and easier to see.
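As a quick illustration of the horizontal/vertical distinction described above (the data are invented), a minimal matplotlib sketch:

```python
import matplotlib.pyplot as plt

categories = ["North", "South", "East", "West"]   # made-up categories
values = [23, 45, 12, 30]                         # made-up values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Bar chart: horizontal bars, categorical y-axis, quantitative x-axis.
ax1.barh(categories, values)
ax1.set_title("Bar chart (horizontal)")

# Column chart: vertical bars, categorical x-axis, quantitative y-axis.
ax2.bar(categories, values)
ax2.set_title("Column chart (vertical)")

plt.tight_layout()
plt.show()
```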
Penn Metamaterials Experts Show a Way to Reduce Electrons’ Effective Mass to Nearly Zero
The field of metamaterials involves augmenting materials with specially designed patterns, enabling those materials to manipulate electromagnetic waves and fields in previously impossible ways.
Now, researchers from the University of Pennsylvania have come up with a theory for moving this phenomenon onto the quantum scale, laying out blueprints for materials where electrons have nearly zero effective mass.
Such materials could make for faster circuits with novel properties.
The work was conducted by Nader Engheta, the H. Nedwill Ramsey Professor of Electrical and Systems Engineering in Penn’s School of Engineering and Applied Science, and Mario G. Silveirinha, who was a visiting scholar at the Engineering School when their collaboration began. He is currently an associate professor at the University of Coimbra, Portugal.
Their paper was published in the journal Physical Review B: Rapid Communications.
Their idea was born out of the similarities and analogies between the mathematics that govern electromagnetic waves — Maxwell’s Equations — and those that govern the quantum mechanics of electrons — Schrödinger’s Equations.
On the electromagnetic side, inspiration came from work the two researchers had done on metamaterials that manipulate permittivity, a trait of materials related to their reaction to electric fields. They theorized that, by alternating between thin layers of materials with positive and negative permittivity, they could construct a bulk metamaterial with an effective permittivity at or near zero. Critically, this property is only achieved when an electromagnetic wave passes through the layers head on, against the grain of the stack. This directional dependence, known as anisotropy, has practical applications.
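To make the layered-permittivity idea concrete, here is a rough effective-medium sketch (my own simplification, not the authors' model): for thin alternating layers, the effective permittivity seen by fields parallel to the layers is the thickness-weighted average, which passes through zero when the positive and negative contributions cancel.

```python
def effective_permittivity_parallel(eps1, d1, eps2, d2):
    """Thickness-weighted average permittivity for E-fields parallel to the layers."""
    return (eps1 * d1 + eps2 * d2) / (d1 + d2)

# Hypothetical layer pair: a positive-permittivity dielectric and a negative-permittivity layer.
eps_positive, t_positive = 4.0, 10e-9    # relative permittivity, thickness (m)
eps_negative = -8.0                      # e.g., a metal-like layer below its plasma frequency

# Choose the negative layer's thickness so the weighted average is (near) zero.
t_negative = eps_positive * t_positive / abs(eps_negative)

eps_eff = effective_permittivity_parallel(eps_positive, t_positive, eps_negative, t_negative)
print(f"negative-layer thickness: {t_negative*1e9:.1f} nm, effective permittivity: {eps_eff:.3e}")
```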
The researchers saw parallels between this phenomenon and the electron transport behavior demonstrated in Leo Esaki’s Nobel Prize-winning work on superlattices in the 1970s: semiconductors constructed out of alternating layers of materials, much like the permittivity-altering metamaterial.
A semiconductor’s qualities stem from the lattice-like pattern its constituent atoms are arranged in; an electron must navigate the electric potentials of all of these atoms, moving faster or slower depending on how directly it can pass by them. Esaki and his colleagues showed that, by making a superlattice out of layers of different materials, they could produce a composite material that had different electron transport properties than either of the components.
Though the actual mass of electrons is fixed, Engheta and Silveirinha thought the same principle could be applied to the effective mass of the electron. Engineers have been tailoring materials to alter the effective mass of electrons for decades; existing semiconductors that give electrons a negative effective mass were a prerequisite for the team's new theory.
“Imagine you have a ball inside a fluid,” Engheta said. “You can calculate how fast the ball falls as a combination of the force of gravity and the reaction of the fluid, or you can say that the ball has an effectively different mass in the fluid than it does normally. The effective mass can even be negative, which we see in the case of a bubble. The bubble looks like it has negative mass, because it’s moving against gravity, but it is really the fluid moving down around it.”
Like the optical metamaterial with alternating bands of positive and negative permittivity, Engheta and Silveirinha theorized, a material with alternating bands of positive and negative effective electron mass would allow the overall structure's effective electron mass to approach zero.
And like the optical metamaterial, the electron’s effective mass in this case would be anisotropic. While travelling against the grain of the alternating materials, its effective mass would be near-zero, and thus it would travel very fast. But trying to move the electron along the grain would result in a very high effective mass, making it very difficult for it to move at all.
“In the direction the electrons are collimated, we see an effective mass of zero,” Engheta said. “This is like what we see with graphene, where electrons have an effective mass of zero but only along its plane.
“But a plane of graphene is only one atom thick, whereas here we would see that property in a bulk material. It’s essentially like the material has wires running through it, even though there is no wire surface.”
As with graphene, the properties of this composite material would be dependent on structure at the smallest scale; a few stray atoms could significantly degrade the material’s overall performance. A single uniform layer of atoms is ideal in both cases, and, while deposition techniques are improving, working at the scale of a few nanometers still represents a physical challenge. The team hopes to address this challenge in future studies.
“While physics prevents us from having infinite velocity, having materials that give electrons near-zero effective mass will let us move them much faster,” Engheta said.
The research was supported by the U.S. Air Force Office of Scientific Research.
Evan Lerner | EurekAlert!
Israeli Electoral System
How it developed and how it works.
The Israeli political system can often appear bewildering to those more familiar with the electoral system of the United States.
The American voter elects individual representatives of district constituencies, while the Israeli voter selects from amongst lists of candidates for the Knesset (Parliament) throughout the country. The American tradition stresses strict separation between the legislative and executive branches (i.e., Congress and the President), while in Israel elected officials often serve simultaneously in both branches.
In the United States, presidents expect to serve out their terms in office barring death or Nixonian-level scandal, and elections for president are conducted under a strict schedule, occurring every four years. In Israel, the prime minister can find himself or herself removed from office on any given day by an act of the Knesset, leading to unscheduled "early elections."
Even the words used in the different countries can mean different things. In the U.S., "the government" generally refers to all public officials, elected or appointed, but in Israel the government is roughly equivalent to what the Cabinet is in Washington.
These distinctions are due to the fact that the Israeli system stems from traditions far removed from North America. The roots of the Israeli electoral system, like many other aspects of Israeli society, go back to Central and Eastern Europe in the early years of the 20th century. The political traditions of that place and time stressed a lively ferment of multiple parties and broad ranges of beliefs and manifestos ranging from communism to extreme right and everything in between.
The politics of the early Zionist movement, and later the Jewish community in British Mandate Palestine, reflected this tradition of pluralistic party multiplicity.
The general Zionist movement prior to the creation of the State of Israel in 1948 included socialist parties, communists, liberals, various religious movements, and a rightist revisionist party. When the state was established, shoehorning such an expansive spectrum of views into a two-party system, as per the Anglo-American traditions, was unthinkable. In order to ensure that all opinions, including minority ones, would be guaranteed expression, representative bodies were elected under a proportional system in which each party had a number of representatives in exact proportion to the number of votes cast for that party so that even parties garnering as little as one percent of the total votes would have a voice.
In 1988 this threshold for representation was raised to 1.5 percent in an attempt to prevent extremist minority views—in this case a political party that was later disqualified from candidacy because it was deemed racist—from gaining representation in the Knesset. In 2006 the threshold for representation was raised again, to two percent.
All types of hydraulic motors have common design features: a driving surface area subject to pressure differential; a way of timing the porting of pressure fluid to the pressure surface to achieve continuous rotation; and a mechanical connection between the surface area and an output shaft.
The ability of the pressure surfaces to withstand force, the leakage characteristics of each type of motor, and the efficiency of the method used to link the pressure surface and the output shaft determine the maximum performance of a motor in terms of pressure, flow, torque output, speed, volumetric and mechanical efficiencies, service life, and physical configuration.
Motor displacement refers to the volume of fluid required to turn the motor output shaft through one revolution. The most common units of motor displacement are in.³ or cm³ per revolution. Hydraulic motor displacement may be fixed or variable. A fixed-displacement motor provides constant torque. Controlling the amount of input flow into the motor varies the speed. A variable-displacement motor provides variable torque and variable speed. With input flow and pressure constant, varying the displacement can vary the torque-speed ratio to meet load requirements.
Torque output is expressed in inch-pounds or foot-pounds. It is a function of system pressure and motor displacement. Motor torque ratings usually are given for a specific pressure drop across the motor. Theoretical figures indicate the torque available at the motor shaft, assuming no mechanical losses.
Breakaway torque is the torque required to get a stationary load turning. More torque is required to start a load moving than to keep it moving.
Running torque can refer to a motor’s load or to the motor. When it refers to a load, it indicates the torque required to keep the load turning. When it refers to the motor, it indicates the actual torque that a motor can develop to keep a load turning. Running torque considers a motor’s inefficiency and is a percentage of its theoretical torque. The running torque of common gear, vane, and piston motors is approximately 90% of theoretical.
Starting torque refers to the capacity of a hydraulic motor to start a load. It indicates the amount of torque that a motor can develop to start a load turning. In some cases, this is considerably less than the motor’s running torque. Starting torque also can be expressed as a percentage of theoretical torque. Starting torque for common gear, vane, and piston motors ranges between 70% and 80% of theoretical.
Mechanical efficiency is the ratio of actual torque delivered to theoretical torque.
Torque ripple is the difference between minimum and maximum torque delivered at a given pressure during one revolution of the motor.
Motor speed is a function of motor displacement and the volume of fluid delivered to the motor.
Maximum motor speed is the speed at a specific inlet pressure that the motor can sustain for a limited time without damage.
Minimum motor speed is the slowest, continuous, uninterrupted rotational speed available from the motor output shaft.
Slippage is the leakage through the motor, or the fluid that passes through the motor without performing work.
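As a rough worked illustration of these definitions (using the standard textbook relations rather than figures from this article): theoretical torque is proportional to pressure drop times displacement, and theoretical speed is inlet flow divided by displacement. The motor size, pressure, and flow below are hypothetical.

```python
import math

def theoretical_torque_in_lb(delta_p_psi, displacement_in3_per_rev):
    """Theoretical torque (in.-lb) = pressure drop (psi) * displacement (in.^3/rev) / (2*pi)."""
    return delta_p_psi * displacement_in3_per_rev / (2 * math.pi)

def theoretical_speed_rpm(flow_in3_per_min, displacement_in3_per_rev):
    """Theoretical shaft speed (rpm) = inlet flow (in.^3/min) / displacement (in.^3/rev)."""
    return flow_in3_per_min / displacement_in3_per_rev

# Hypothetical motor: 6 in.^3/rev displacement, 2000-psi pressure drop, 10-gpm inlet flow.
displacement = 6.0
delta_p = 2000.0
flow = 10.0 * 231.0          # 1 US gallon = 231 in.^3

t_theory = theoretical_torque_in_lb(delta_p, displacement)
print(f"theoretical torque: {t_theory:.0f} in.-lb")
print(f"running torque (~90% of theoretical, per the text): {0.9 * t_theory:.0f} in.-lb")
print(f"starting torque (~75% of theoretical, per the text): {0.75 * t_theory:.0f} in.-lb")
print(f"theoretical speed: {theoretical_speed_rpm(flow, displacement):.0f} rpm")
```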
External gear motors consist of a pair of matched gears enclosed in one housing (Fig. 1). Both gears have the same tooth form and are driven by pressure fluid. One gear is connected to an output shaft. The other is an idler. Pressure fluid enters the housing at a point where the gears mesh. It forces the gears to rotate and follows the path of least resistance around the periphery of the housing. The fluid exits at low pressure at the opposite side of the motor. Close tolerances between gears and housing help control fluid leakage and increase volumetric efficiency. Wear plates on the sides of the gears keep the gears from moving axially and help control leakage.
Internal gear motors fall into two categories. A direct-drive gerotor motor consists of an inner-outer gear set and an output shaft (Fig. 2). The inner gear has one fewer tooth than the outer. The teeth are shaped so all of the teeth of the inner gear are in contact with some portion of the outer gear at all times. When pressure fluid is introduced into the motor, both gears rotate. The motor housing has integral kidney-shaped inlet and outlet ports. The centers of rotation of the two gears are separated by a given amount known as the eccentricity. The center of the inner gear coincides with the center of the output shaft.
Pressure fluid enters the motor through the inlet port (Fig. 2a). Because the inner gear has one fewer tooth than the outer, a pocket is formed between inner teeth 6 and 1 and outer socket A. The kidney-shaped inlet port is designed so that just as this pocket’s volume reaches its maximum, fluid flow is shut off, with the tips of inner gear teeth 6 and 1 providing a seal (Fig. 2b).
As the pair of inner and outer gears continues to rotate, a new pocket is formed between inner teeth 6 and 5 and outer socket G (Fig. 2c). Meanwhile, the pocket formed between inner teeth 6 and 1 and outer socket A has moved around opposite the kidney-shaped outlet port, steadily draining as the volume of the pocket decreases. The gradual, metered volume change of the pockets during inlet and exhaust provides smooth, uniform fluid flow with a minimum of pressure variation (or ripple).
Because of the extra tooth in the outer gear, the inner gear teeth move ahead of the outer by one tooth per revolution. In Figure 2c, inner tooth 4 is seated in outer socket E. On the next cycle, inner tooth 4 will seat in outer socket F. This produces a low relative differential speed between the gears.
An orbiting gerotor motor consists of a set of matched gears, a coupling, an output shaft, and a commutator or valve plate (Fig. 3). The stationary outer gear has one more tooth than the rotating inner gear. The commutator turns at the same rate as the inner gear and always provides pressure fluid and a passageway to tank to the proper spaces between the two gears.
In operation, tooth 1 of the inner gear is aligned exactly in socket D of the outer gear (Fig. 3a). Point y is the center of the stationary gear, and point x is the center of the rotor. If there were no fluid, the rotor would be free to pivot about socket D in either direction. It could move toward seating tooth 2 in socket E or, conversely, toward seating tooth 6 in socket J.
When pressure fluid flows into the lower half of the volume between the inner and outer gears, if a passageway to tank is provided for the upper-half volume between the inner and outer gears, a moment is induced that rotates the inner gear counterclockwise and starts to seat tooth 2 in socket E. Tooth 4, at the instant shown in Figure 3a, provides a seal between pressure and return fluid.
However, as rotation continues, the locus of point x is clockwise. As each succeeding tooth of the rotor seats in its socket, the tooth directly opposite on the rotor from the seated tooth becomes the seal between pressure and return fluid (Fig. 3b). The pressurized fluid continues to force the rotor to mesh in a clockwise direction while it turns counterclockwise.
Because of the one extra socket in the fixed gear, the next time tooth 1 seats, it will be in socket J. At that point, the shaft has turned one-seventh of a revolution, and point x has moved six-sevenths of its full circle. In Figure 3c, tooth 2 has mated with socket D, and point x has again become aligned between socket D and point y, indicating that the rotor has made one full revolution inside the outer gear. Tooth 1 has moved through an angle of 60° from its original point in Figure 3a; 42 (or 6 × 7) tooth engagements or fluid cycles would be needed for the shaft to complete one revolution.
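To make the gear arithmetic concrete, here is a small illustrative Python sketch (my own, not from the source); the tooth and socket counts follow the 6-tooth rotor and 7-socket stator described above, and the variable names are hypothetical:

# Illustrative geometry of an orbiting gerotor (6-tooth rotor, 7-socket stator,
# as described in the text; names are hypothetical).
rotor_teeth = 6
stator_sockets = rotor_teeth + 1          # one extra socket in the fixed gear

# Each time a rotor tooth reseats (one "fluid cycle"), the output shaft
# advances by a small fraction of a revolution.
engagements_per_shaft_rev = rotor_teeth * stator_sockets   # 6 x 7 = 42
shaft_rev_per_engagement = 1 / engagements_per_shaft_rev

print(f"Fluid cycles per shaft revolution: {engagements_per_shaft_rev}")
print(f"Shaft rotation per fluid cycle: {shaft_rev_per_engagement:.4f} rev")

The many small displacement cycles per output revolution are what give orbiting gerotors their low-speed, high-torque character.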
The commutator or valve plate contains pressure and tank passages for each tooth of the rotor (Fig. 3d, e, and f). The passages are spaced so they do not provide for pressure or return flow to the appropriate port as a tooth seats in its socket. At all other times, the passages are blocked or are providing pressure fluid or a tank passage in the appropriate half of the motor between gears.
A roller-vane gerotor motor is a variation of the orbiting gerotor motor (Fig. 4). It has a stationary ring gear (or stator) and a moving planet gear (or rotor). Instead of being held by two journal bearings, the eccentric arm of the planetary is held by the meshing of the six-tooth rotor and seven-socket stator. Instead of direct contact between the stator and rotor, roller vanes are incorporated to form the displacement chambers. The roller vanes reduce wear, enabling the motors to be used in closed-loop, high-pressure hydrostatic circuits as direct-mounted wheel drives.
Vane motors have a slotted rotor mounted on a driveshaft that is driven by the rotor (Fig. 5). Vanes, closely fitted into the rotor slots, move radially to seal against the cam ring. The ring has two major and two minor radial sections joined by transitional sections or ramps. These contours and the pressures introduced to them are balanced diametrically.
In some designs, light springs force the vanes radially against the cam contour to ensure a seal at zero speed so the motor can develop starting torque. The springs are assisted by centrifugal force at higher speeds. Radial grooves and holes through the vanes equalize radial hydraulic forces on the vanes at all times.
Pressure fluid enters and leaves the motor housing through openings in the side plates at the ramps. Pressure fluid entering at the inlet ports moves the rotor counterclockwise. The rotor transports the fluid to the ramp openings at the outlet ports to return to tank. If pressure were introduced at the outlet ports, it would turn the motor clockwise.
The rotor is separated axially from the side plate surfaces by the fluid film. The front side plate is clamped against the cam ring by pressure and maintains optimum clearances as temperature and pressure change dimensions.
Vane motors provide good operating efficiencies, but not as high as those of piston motors. However, vane motors generally cost less than piston motors of corresponding horsepower ratings. The service life of a vane motor usually is shorter than that of a piston motor, though. Vane motors are available with displacements to 20 in.3/rev. Some low-speed/high-torque models come with displacements to 756 in.3/rev. Except for the high-displacement, low-speed models, vane motors have limited low-speed capability.
Radial-piston motors have a cylinder barrel attached to a driven shaft (Fig. 6). The barrel contains a number of pistons that reciprocate in radial bores. The outer piston ends bear against a thrust ring. Pressure fluid flows through a pintle in the center of the cylinder barrel to drive the pistons outward. The pistons push against the thrust ring and the reaction forces rotate the barrel.
Shifting the slide block laterally to change the piston stroke varies motor displacement. When the centerlines of the cylinder barrel and housing coincide, there is no fluid flow and therefore the cylinder barrel stops. Moving the slide past center reverses the direction of motor rotation.
Radial piston motors are very efficient. Although the high degree of precision required in the manufacture of radial piston motors raises initial costs, they generally have a long life. They provide high torque at relatively low shaft speeds and excellent low-speed operation with high efficiency. However, they have limited high-speed capabilities. Radial piston motors have displacements to 1,000 in.3/rev.
Axial-piston motors also use the reciprocating piston motion principle to rotate the output shaft, but motion is axial, rather than radial. Their efficiency characteristics are similar to those of radial-piston motors. Initially, axial-piston motors cost more than vane or gear motors of comparable horsepower. Like radial piston motors, they also have a long operating life. Consequently, their higher initial cost may not truly reflect the expected overall costs during the life of a piece of equipment.
In general, axial piston motors have excellent high-speed capabilities. Unlike radial piston motors, though, they are limited at low operating speeds. The inline type will operate smoothly down to 100 rpm, and the bent-axis type will provide smooth output down to the 4-rpm range. Axial piston motors are available with displacements from a fraction to 65 in.3/rev.
Inline-piston motors generate torque through pressure exerted on the ends of pistons that reciprocate in a cylinder block (Fig. 7). In the inline design, the motor driveshaft and cylinder block are centered on the same axis. Pressure at the ends of the pistons causes a reaction against a tilted swashplate and rotates the cylinder block and motor shaft. Torque is proportional to the area of the pistons and is a function of the angle at which the swashplate is positioned.
These motors are built in fixed- and variable-displacement models. The swashplate angle determines motor displacement. In the variable model, the swashplate is mounted in a swinging yoke, and the angle can be changed by various means, ranging from a simple lever or hand-wheel to sophisticated servo controls. Increasing the swashplate angle increases the torque capacity but reduces driveshaft speed. Conversely, reducing the angle reduces the torque capacity but increases driveshaft speed (unless fluid flow to the motor decreases). Angle stops are included so torque and speed stay within operating limits.
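As a rough illustration of this trade-off (a simplified model of my own, not taken from the text), the displacement of an inline piston unit is often approximated as proportional to the tangent of the swashplate angle; at constant supply flow the shaft speed then varies inversely with displacement, while at constant pressure the available torque varies directly with it:

import math

# Rough, simplified model (an assumption, not from the source): displacement
# per revolution taken as proportional to tan(swashplate angle), normalized
# to a hypothetical 18-degree maximum angle.
def relative_displacement(angle_deg, reference_angle_deg=18.0):
    return math.tan(math.radians(angle_deg)) / math.tan(math.radians(reference_angle_deg))

for angle in (6, 12, 18):
    d = relative_displacement(angle)
    print(f"swashplate {angle:>2} deg: displacement x{d:.2f}, "
          f"torque x{d:.2f}, speed x{1 / d:.2f} (constant flow and pressure)")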
A compensator varies motor displacement in response to changes in the workload. A spring-loaded piston is connected to the yoke and moves it in response to variations in operating pressure. Any load increase is accompanied by a corresponding pressure increase as a result of the additional torque requirements. The control then automatically adjusts the yoke so torque increases as the load increases. Ideally, the compensator regulates displacement for maximum performance under all load conditions up to the relief valve setting.
Bent-axis piston motors develop torque through a reaction to pressure on reciprocating pistons (Fig. 8). In this design, the cylinder block and driveshaft are mounted at an angle to each other. The reaction is against the drive-shaft flange.
Speed and torque change with changes in the angle, from a predetermined minimum speed with a maximum displacement and torque at an angle of approximately 30˚ to a maximum speed with minimum displacement and torque at about 7.5˚. Both fixed- and variable-displacement models are available.
Rotary abutment motors have abutment A, which rotates to pass rotary vane B, while second abutment C is in alternate sealing engagement with the rotor hub (Fig. 9). Torque is transmitted directly from the fluid to the rotor and from the rotor to the shaft. Timing gears between the output shaft and rotary abutments keep the rotor vane and abutments in the proper phase. A roller in a dovetail groove at the tip of the rotor vane provides a positive seal that is essentially frictionless and relatively insensitive to wear. Sealing forces are high and friction losses are low because of rolling contact.
A screw motor essentially is a pump with the direction of fluid flow reversed. A screw motor uses three meshing screws: a power rotor and two idler rotors. The idler rotors act as seals that form consecutive isolated helical chambers within a close-fitting rotor housing. Differential pressure acting on the thread areas of the screw set develops motor torque.
The idler rotors float in their bores. The rotary speed of the screw set and fluid viscosity generates a hydrodynamic film that supports the idler rotors, much like a shaft in a journal bearing to permit high-speed operation. The rolling screw set provides quiet, vibration-free operation.
Selecting a Hydraulic Motor
The application of the hydraulic motor generally dictates the required horsepower and motor speed range, although the actual speed and torque required may sometimes be varied while maintaining the required horsepower. The type of motor selected depends on the required reliability, life, and performance.
Once the type of motor is determined, the selection of actual size is based on the expected life and the economics of the overall installation on the machine. A fluid motor operating at less than rated capacity will provide a service life extension more than proportional to the reduction in operation below the rated capacity.
The maximum horsepower produced by a motor is reached when operating at the maximum system pressure and at the maximum shaft speed. If the motor is always to be operated under these conditions, its initial cost will be lowest. But where output speed must be reduced, the overall cost of the motor with speed reduction must be considered to optimize the overall drive installation costs.
Sizing a Hydraulic Motor
As an example of how to calculate hydraulic motor size to match an application, consider the following: an application calls for 5 hp at 3,000 rpm, with an available supply pressure of 3,000 psi and a return line pressure of 100 psi; the pressure differential is 2,900 psi. The theoretical torque required is calculated from:
T = (63,025 × hp)/N
T is torque, lb-in., and
N is speed, rpm.
For the condition T = 105 lb-in., motor displacement is calculated as:
D = 2πT ÷ (∆P × eM)
D is displacement, in.3/rev
∆P is pressure differential, psi, and
eM is mechanical efficiency, %.
If mechanical efficiency is 88%, then D is 0.258 in.3/rev.
Calculating the required flow:
Q = DN ÷ (231 × eV)
Q is flow, gpm, and
eV is volumetric efficiency, %.
If volumetric efficiency is 93%, then Q is 3.6 gpm.
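The three-step calculation above is easy to script. The following Python sketch (illustrative only; the function and variable names are my own) reproduces the worked numbers:

import math

def size_hydraulic_motor(hp, rpm, delta_p_psi, e_mech, e_vol):
    """Theoretical torque, displacement, and flow for a hydraulic motor."""
    torque = 63025 * hp / rpm                                      # lb-in.
    displacement = 2 * math.pi * torque / (delta_p_psi * e_mech)   # in.3/rev
    flow = displacement * rpm / (231 * e_vol)                      # gpm (231 in.3 per gallon)
    return torque, displacement, flow

T, D, Q = size_hydraulic_motor(hp=5, rpm=3000, delta_p_psi=2900,
                               e_mech=0.88, e_vol=0.93)
print(f"T = {T:.0f} lb-in., D = {D:.3f} in.3/rev, Q = {Q:.1f} gpm")

Running it returns roughly 105 lb-in., 0.26 in.3/rev, and 3.6 gpm, matching the worked figures to within rounding.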
Pressure in these equations is the difference between inlet and outlet pressure. Thus, any pressure at the outlet port reduces the torque output of a fluid motor.
The efficiency factor for most motors will be fairly constant when operating from half- to full-rated pressure and over the middle portion of the rated speed range. As speed nears either extreme, efficiency decreases.
Lower operating pressures result in lower overall efficiencies because of fixed internal rotating losses that are characteristic of any fluid motor. Reducing displacement from maximum in variable-displacement motors also reduces the overall efficiency.
Hydraulic Motor Malfunctions
Most motor problems are caused by improper fluid, poor maintenance, or improper operation. The motor is no different than any of the other components of the hydraulic system. Primarily, it must have clean fluid, in adequate supply, and of the proper quality and viscosity. A poor maintenance program runs a close second in causing major problems. Typical slips in a program include:
• Failure to check and repair lines and connections to stop leaks: faulty connections can allow dirt and air into the system, lower pressure, and cause erratic operation.
• Failure to install the motor correctly: Motor shaft misalignment can cause bearing wear, which can lead to lost efficiency. A misaligned shaft also can reduce the torque, increase friction drag and heating, and result in shaft failure.
• Failure to find the cause of a motor malfunction: If a motor fails, always look for the cause of the failure. Obviously, if the cause is not corrected, failure will recur.
Finally, exceeding a motor’s operating limits promotes motor failure. Every motor has design limitations on pressure, speed, torque, displacement, load, and temperature. Excessive pressure can generate heat because of motor slippage and cause the motor to exceed torque limits. Excessive speed can heat and wear bearings and other internal parts. Excessive torque can cause fatigue and stress to bearings and the motor shaft, especially on applications that require frequent motor reversing. Excessive load can create bearing and shaft fatigue. And excessive temperature can decrease efficiency because the oil becomes thinner, and can produce rapid wear because of lack of lubrication. |
Timeline of the Nuclear Age
Events of the...
- Wilhelm Roentgen of Germany, while conducting experiments with cathode rays, accidentally discovers a new and different kind of ray. These rays were so mysterious that Roentgen named them "x-rays." He received the first Nobel Prize in Physics in 1901 for this discovery.
- French physicist Antoine Henri Becquerel's experiments led to the discovery of radioactivity. He observed that the element uranium can blacken a photographic plate, even though separated from it by glass or black paper. He also observed that the rays that produce the darkening are capable of discharging an electroscope, indicating that the rays possess an electric charge.
- J. J. Thomson of Britain discovers the electron, while also studying cathode rays. He received the Nobel Prize in Physics in 1906 for this discovery.
- Ernest Rutherford discovers two kinds of rays emitting from radium. The first he calls alpha rays; the more penetrating rays he calls beta rays. |
Expert Systems/Introduction to Expert Systems
AI research has been one of the most frenzied areas of computer science since the inception of the discipline. However, despite the massive effort and money that has gone into research, computers are still unable to perform simple tasks that humans do on a regular basis. Many researchers believed that a comprehensive system of logic would enable computers to successfully complete high-level reasoning tasks that humans can perform. However, logical computer programs require knowledge on which to base decisions. Converting human knowledge into a form that is both meaningful and useful for a computer has proven to be a difficult task.
Expert systems are an area of AI research that attempts to codify the knowledge and reasoning processes of a human expert into a computer program.
How Expert Systems Work
Expert systems interact with another entity, such as a human user or an application, to discover information about a problem and to evaluate possible solutions. The simplest form of an expert system is a question-and-answer system, in which a human user is presented with questions. The user answers these questions, and those answers are used to further the reasoning process of the expert system.
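To make the idea concrete, here is a minimal, hypothetical question-and-answer rule base in Python. It is a sketch of the general pattern only, not the interface of any real expert-system shell, and the rules and facts are invented for illustration:

# Minimal, hypothetical question-and-answer rule base. Each rule maps a set
# of yes-answers to a conclusion; the system asks only about facts it still needs.
RULES = [
    ({"has_fever", "has_cough"}, "possible flu"),
    ({"has_rash", "has_fever"}, "possible measles"),
]

def consult(ask):
    facts = set()
    asked = set()
    for conditions, conclusion in RULES:
        for cond in conditions:
            if cond not in asked:
                asked.add(cond)
                if ask(cond):          # the "human user" supplies the answer
                    facts.add(cond)
        if conditions <= facts:        # all conditions confirmed
            return conclusion
    return "no conclusion reached"

# Example: a user who answers "yes" to every question.
print(consult(lambda question: True))

A real system would add certainty factors, an explanation facility, and a much larger knowledge base, but the ask-and-infer loop is the same basic pattern.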
Uses of Expert Systems
Expert systems are used for problems where there is incomplete data about a subject, and insufficient theory available for the creation of an algorithmic solution. Some problems, such as medical diagnosis, are not easily solved with an algorithm, but instead require reasoning and induction.
Numerical algorithms are more efficient than expert systems and are typically more exact. However, many problems cannot easily be modeled mathematically, and in these cases numerical algorithms are not an option. Other AI techniques, such as artificial neural networks, are suited to problems where there is very little theory but a wealth of experimental data.
Expert systems tend to be slow, and often require extensive human interaction. However, well-designed expert systems can be very rigorous, and some expert systems have been shown to outperform the human experts that helped to develop them.
Shortcomings of Expert Systems
Expert systems are based on human knowledge and reasoning patterns. This knowledge must be extracted from a human expert by a specialized knowledge engineer. Knowledge engineers ask the expert questions about his or her knowledge and reasoning processes, and attempt to translate the answers into a computer-readable format known as a knowledge base. Expert systems generated in this way will be flawed if the information received from the expert is flawed, or if it is incorrectly translated by the knowledge engineer.
Expert systems, because they are focused on a single problem area, tend to fail catastrophically if presented with a problem or information that is outside their domain. |
What is the canopy theory?
Question: "What is the canopy theory?"
Answer: The canopy theory seeks to explain the reference in Genesis 1:6 to “the waters above the firmament,” assuming that “firmament,” or “expanse,” as the Hebrew word is alternatively translated, refers to our atmosphere. According to the canopy theory, there was a canopy of water above the atmosphere until the cataclysm of Noah’s day, at which point it disappeared either by collapsing upon the earth or dissipating into space. It is presumed to have consisted of water vapor because a canopy of ice could not have survived the constant bombardment of celestial objects like meteoroids which perpetually barrage the earth’s atmosphere.
While Genesis 1:20 (KJV) does say that birds fly in the firmament, suggesting the earth’s atmosphere, it also says that the sun, moon and stars reside there (Genesis 1:14-17), suggesting the entire sky from the earth’s surface outward, where birds fly and celestial objects reside. The Hebrew word alternatively translated “firmament” in some translations and “expanse” in others is raqiya. It appears nine times throughout the first chapter of Genesis (in verses 6-8, 14-18 and 20) and eight more times throughout the rest of the Old Testament (in Psalms, Ezekiel and Daniel).
According to Genesis, before there was air or land or any form of life, the earth was a formless mass of primordial water. On the second day of creation, God created the raqiya, placing it in the midst of the water, thereby separating it into two parts: “the waters above the firmament [raqiya]” and the waters below it. The waters below the raqiya He named “sea” (yam in Hebrew) and the raqiya itself He named “heaven,” “air” or “sky,” depending on your translation of the Hebrew word shamayim. But Genesis does not provide a name for the waters above the raqiya, nor is there any water above our atmosphere today, assuming that raqiya does mean “atmosphere.”
Advocates of the canopy theory once speculated that the collapse of such a vapor canopy might have provided the water for the heavy rains which inundated the earth during Noah’s flood. One problem with the canopy theory, however, is the latent heat of water and the sheer quantity of water involved. If such a vapor canopy were to collapse into rain, it would literally cook the entire planet. This is because when water converts from vapor to liquid, energy or latent heat is released in the process, causing the surrounding area to heat up; this is known as an exothermic result. Conversely, when water converts from solid form—ice—to liquid or from liquid to vapor, energy is absorbed and the surrounding area is cooled—an endothermic result.
The Genesis account calls for five-and-a-half weeks of constant rain. If a canopy consisting of enough water vapor to provide that amount of rain were to collapse, it would cook the entire planet. This is not to say that there was no vapor canopy or that it did not collapse, only that, if it did, it could not have provided the amount of rain in question (the less water, the less heat).
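A rough back-of-the-envelope calculation shows the scale of the problem. The figures below are my own round-number assumptions (Earth's surface area, the mass and heat capacity of the atmosphere, the latent heat of condensation), not values given in the article, and the scenario of one metre of globally averaged rainfall is purely illustrative:

# Back-of-the-envelope check of the latent-heat argument. All numbers are
# round-figure assumptions, not values from the article.
EARTH_SURFACE_M2   = 5.1e14     # m^2
ATMOSPHERE_MASS_KG = 5.1e18     # kg
CP_AIR             = 1005.0     # J/(kg*K), specific heat of air
LATENT_HEAT_VAPOR  = 2.26e6     # J/kg released when water vapor condenses

rain_depth_m = 1.0              # assume the canopy supplied ~1 m of global rain
water_mass = 1000.0 * EARTH_SURFACE_M2 * rain_depth_m      # kg of water
heat_released = water_mass * LATENT_HEAT_VAPOR             # joules

# If all of that heat went into the atmosphere alone:
temp_rise = heat_released / (ATMOSPHERE_MASS_KG * CP_AIR)
print(f"Condensing {rain_depth_m} m of global rain releases ~{heat_released:.1e} J")
print(f"Equivalent atmospheric temperature rise: ~{temp_rise:.0f} K")

Even this modest canopy would, if its heat of condensation went into the air alone, imply a temperature rise of hundreds of degrees, which is the point the objection is making.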
It is interesting to note that, if a frozen canopy were able to exist in the atmosphere despite cosmic bombardment, its collapse into liquid rain would have an extreme cooling effect and might be an explanation for the commencement of the Ice Age. Although we know that the Ice Age happened, the complex factors involved in getting one started make it seem impossible and baffle modern science to this day. Advocates of the canopy theory also cite the existence of a canopy as a possible cause for a variety of pre-flood anomalies, including human longevity and the apparent lack of rain or rainbows. They claim that such a canopy would have filtered out much of the cosmic radiation that is harmful to humans and would account for the lack of rain and rainbows. However, opponents dispute such a canopy’s ability to produce these results.
In defense of the view that raqiya means “atmosphere,” the reference in Genesis 1:14-17 to the sun, moon and stars residing there may have simply been a phenomenological statement, just as our modern terms “sunset” and “sunrise” are phenomenological descriptions. That is, we know full well that the sun is stationary and doesn’t really “rise” or “set,” despite our usage of terms implying its movement from our earth-bound vantage point.
Whatever the case may be, there is no canopy up there today and any suggestion that there was one in the past is speculation because there simply isn’t enough evidence one way or the other, except for the one enigmatic reference to waters above the firmament in Genesis 1:6, and no one claims to know for sure what that means.
Stile Understanding Texts Book 10
The Stile Tray (sold separately) is the key to the self-checking activities of the Key Stage 2 programme. Pupils answer the questions by simply placing each of the twelve numbered tiles on the appropriate square on the base of the tray. When all the tiles have been placed, they close the tray, turn it over, and reopen it to reveal a geometric pattern. If the answers are all correct, the pattern will match the one printed at the top of the exercise. There are many different accompanying workbooks for use with the Stile Tray, covering Literacy, Maths and Dyslexia. This book is a collection of 96 funny, factual and fictional exercises. It offers a systematic approach to the teaching of key aspects of Literacy for children aged 7-11. The materials are also suitable for the practice and reinforcement of key skills with older children with additional and special educational needs. Concentrating on understanding and interpreting texts, these passages and questions will reinforce and develop key comprehension skills. Children can access text in a meaningful and appropriate way, developing both their independence and their confidence.
• 16-pages (246 x 168mm)
• Teacher’s notes.
Book content is the same as in our Stile Literacy programme.
Starter Stile Tray sold separately: ACMT00705 |
The Paleolithic Age
The Paleolithic Age defines the time period when humans hunted and gathered their sustenance. This means of making a living necessitated fairly sparse populations as the amount of consumable products provided by unmanipulated nature was seldom plentiful and was generally unreliable.
Spanning perhaps a million years, the Paleolithic Age extended from the time when humans began using tools to about 9,000 years ago, when they started to farm and domesticate animals. Of course, the dates here must be mere conjecture. They are based mainly on archeological studies where bones have been found at campsites and grave sites. The bones have been subjected to carbon dating, a process that measures carbon-14, a radioactive isotope present in all carbon-based life forms. Organic matter ceases to renew this carbon when death occurs. Carbon-14 decays with a half-life of about 5,730 years. The length of time since the death of the animal or human can be estimated in this way up to about 50,000 years.
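As a simple illustration of how such an age estimate works (my own sketch; the 25% figure is invented for the example), the fraction of carbon-14 remaining translates into an age through the half-life:

import math

HALF_LIFE_YEARS = 5730  # carbon-14 half-life cited above

def age_from_c14_fraction(remaining_fraction):
    """Estimate years since death from the fraction of carbon-14 remaining."""
    return HALF_LIFE_YEARS * math.log(1 / remaining_fraction, 2)

# Example: a bone retaining 25% of its original carbon-14 (a made-up value)
# has passed through two half-lives.
print(f"{age_from_c14_fraction(0.25):.0f} years")   # ~11,460 years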
The Paleolithic Age is not a part of recorded history as writing was not invented until long after the subsequent Neolithic Age began. However, it does form the antecedents of ancient history. There are several attributes peculiar to humans that allowed them to be able to advance to the hunter-gatherer stage of development. This had much to do with brain size and the opposable thumb. The ability to sit up and walk upright also allowed early humans to put these two advantages to use. Tools and weapons could be employed because hands were not necessary to transportation. Tools proved to be a huge benefit in hunting and gathering.
Social structures, usually based on the family and tribe, provided a means to leverage work and enabled these people to take on bigger animals. The groups also provided protection in the face of competition for resources and natural disasters. They also allowed the transfer of knowledge from one generation to the next. The manner in which tools were made or a fire started could be vital for the survival and advancement of the tribe. The advent of language, whenever it came about, added to this process.
It is thought that people of this period began to build rough lean-tos. They certainly lived in caves. With a central base of operations, the food gathered could be stored, which helped to regularize diets and smooth out the uneven availability of food. Containers were made to store the food. Drying was used as a primary means of food preservation. Grains were especially amenable to this treatment. The skins of animals were used as protection from the cold and other elements. Even a crude form of sewing to link animal skins together came about during this period.
Tools could be made from a variety of substances including wood, stone, bones, and shells. Trade between different areas developed during the latter part of the Paleolithic Age. In this manner tool-making technologies probably migrated across peoples and regions. Neanderthals were known to have been skilled in creating tools which utilized flaked stones and spears.
Fishing as a means of hunting is thought to have come into existence by 22,000 B.C. Toward the end of the Paleolithic Age, objects beyond tools began to be made, including jewelry and instruments. Paintings found in caves, as well as carvings, were made that may have had religious significance or may have been used to tell stories.
Life expectancy during this period is hard to determine. However, modern estimates, based on archeological digs, put the average age at death at about 30 for people who managed to live past childhood. A very small percentage would live to old age.
A period of transition occurred, beginning about 10,000 B.C. It is sometimes referred to as the Mesolithic period. It was marked by a refinement in tools and an increasingly stationary way of life, often tied with fishing, near rivers and oceans. It was in the Tigris and Euphrates river valleys, in the region of present-day Iraq, that agriculture was invented. It probably sprang from people who harvested the seeds of grasses that are the ancestors of our current wheat and barley grains. They probably found that they could plant areas with the grain and increase the yield per acre. It would be many millennia before these techniques would spread throughout the world. Even so, it would spark new social structures, making possible cities, governments, and civilization.
The Neolithic Age: When Agriculture Began |
I’ve reproduced articles from DeBow’s Review occasionally; it was a magazine of agricultural advice originally, but by the time of the war it had become one of the primary defenders of slavery. In January, 1861, DeBow published a particularly famous article, arguing that the non-slaveholders of the South should defend slavery. The full text is available in its original form here, or transcribed here. It’s particularly noteworthy, because the fact that many non-slaveholders fought for the South is sometimes used as evidence that the war was not about slavery. DeBow is making the opposite case here; that slavery was worth fighting for, even for non-slaveowning Southern whites.
ART. VI.-THE NON-SLAVEHOLDERS OF THE SOUTH:
THEIR INTEREST IN THE PRESENT SECTIONAL CONTROVERSY
IDENTICAL, WITH THAT OF THE SLAVEHOLDERS.
DeBow starts by taking issue with figures purporting to show that slaveholders are a small minority of white Southerners. He argues that, if you include all members of households who own slaves, they would be around 2.25 million people, or about 1/3 of the (white) population of the South. Half the slaveholders have fewer than five slaves.
Assuming the published returns, however, to be correct, it will appear that one half of the population of South Carolina, Mississippi, and Louisiana, excluding the cities, are slaveholders, and that one third of the population of the entire South are similarly circumstanced. The average number of slaves is nine to each slaveholding family, and one half of the whole number of such holders are in possession of less than five slaves.

He compares this with the 1 in 3.5 families in New England who own agricultural land. He argues that if slaveholders and nonslaveholders are natural antagonists, why wouldn’t landowners and nonlandowners in New England be even more so?
Still, he says the non-slaveholders have a serious interest in slave ownership. Who are these non-slaveholders?
The non-slaveholders of the South may be classed as either such as desire and are incapable of purchasing slaves, or such as have the means to purchase and do not, because of the absence of the motive-preferring to hire or employ cheaper white labor. A class conscientiously objecting to the ownership of slave property does not exist at the South: for all such scruples have long since been silenced by the profound and unanswerable arguments to which Yankee controversy has driven our statesmen, popular orators, and clergy. Upon the sure testimony of God’s Holy Book, and upon the principles of universal polity, they have defended and justified the institution! The exceptions, which embrace recent importations in Virginia, and in some of the Southern cities, from the free States of the North, and some of the crazy, socialistic Germans in Texas, are too unimportant to affect the truth of the proposition.
The non-slaveholders are either urban or rural, including among the former the merchants, traders, mechanics, laborers, and other classes in the towns and cities; and among the latter, the tillers of the soil, in sections where slave property either could or could not be profitably employed.
As the competition of free labor with slave labor is the gist of the argument used by the opponents of slavery, and as it is upon this that they rely in support of a future social conflict in our midst, it is clear that in cases where the competition cannot possibly exist, the argument, whatever weight it might otherwise have, must fall to the ground.
All the businessmen in Southern cities depend on the productivity of slave labor for their customers’ welfare. As for farmers who produce things slave labor can’t be used for,
those commodities are consumed by slaves and slaveowners, so they’re a market for the non-slaveholding farmers also. As for direct competition in production of the same crops, only black people are made to withstand work in the Southern heat:
The competition and conflict, if such exist at the South, between slave labor and free labor, is reduced to the single case of such labor being employed, side by side, in the production of the same commodities, and could be felt only in the cane, cotton, tobacco, and rice fields, where almost the entire agricultural slave labor is exhausted. Now, any one cognisant of the actual facts, will admit that the free labor which is employed upon these crops, disconnected with, and in actual independence of, the slaveholder, is a very insignificant item in the account, and whether in accord or in conflict, would affect nothing, the permanency and security of the institution. It is a competition from which the non-slaveholder cheerfully retires when the occasion offers, his physical organization refusing to endure that exposure to tropical suns and fatal miasmas which are alone the condition of profitable culture, and any attempt to reverse the laws which God has ordained, is attended with disease and death. This the poor white foreign laborer upon our river-swamps and in our Southern cities, especially in Mobile and New-Orleans, and upon the public works of the South, is a daily witness of.
He then goes through tables of wages in the South and North, showing that free laborers in the South are paid better. Also,
2. The non-slaveholders, as a class, are not reduced by the necessity of our condition, as is the case in the free States, to find employment in crowded cities, and come into competition in close and sickly workshops and factories, with remorseless and untiring machinery. They have but to compare their condition, in this particular, with the mining and manufacturing operatives of the North and Europe, to be thankful that God has reserved them for a better fate. Tender women, aged men, delicate children, toil and labor there from early dawn until after candle-light, from one year to another, for a miserable pittance, scarcely above the starvation point, and without hope of amelioration. The records of British free labor have long exhibited this, and those of our own manufacturing States are rapidly reaching it, and would have reached it long ago, but for the excessive bounties which, in the way of tariffs, have been paid to it, without an equivalent by the slaveholding and non-slaveholding laborer of the South. Let this tariff cease to be paid for a single year, and the truth of what is stated will be abundantly shown.
Furthermore, Southern workers don’t have to compete with the “foreign pauper labor which has degraded the free labor of the North” as the South has few immigrants. It’s a little confusing to me why the immigrants don’t head South, where the wages are so much better. Anyway, the South doesn’t have all these “isms” either:
Our people partake of the true American character, and are mainly the descendants of those who fought the battles of the Revolution, and who understand and appreciate the nature and inestimable value of the liberty which it brought. Adhering to the simple truths of the Gospel, and the faith of their fathers, they have not run hither and thither in search of all the absurd and degrading isms which have sprung up in the rank soil of infidelity. They are not Mormons or Spiritualists; they are not Owenites, Fourierites, Agrarians, Socialists, Freelovers, or Millerites. They are not for breaking down all the forms of society and of religion, and of reconstructing them; but prefer law, order, and existing institutions, to the chaos which radicalism involves. The competition between native and foreign labor in the Northern States has already begotten rivalry, and heart-burning, and riots, and led to the formation of political parties, which have been marked by a degree of hostility and proscription to which the present age has not afforded another parallel. At the South we have known none of this, except in two or three of the larger cities, where the relations of slavery and freedom scarcely exist at all. The foreigners that are among us at the South are of a select class, and, from education and example, approximate very nearly to the native standard.
Most important, white non-slaveholders in the South always have someone to look down on:
The non-slaveholder of the South preserves the status of the white man, and is not regarded as an inferior or a dependant. He is not told that the Declaration of Independence, when it says that all men are born free and equal, refers to the negro equally with himself. It is not proposed to him that the free negro’s vote shall weigh equally with his own at the ballot-box, and that the little children of both colors shall be mixed in the classes and benches of the schoolhouse, and embrace each other filially in its outside sports. It never occurs to him that a white man could be degraded enough to boast in a public assembly, as was recently done in New-York, of having actually slept with a negro. And his patriotic ire would crush with a blow the free negro who would dare, in his presence, as is done in the free States, to characterize the father of the country as a “scoundrel.” No white man at the South serves another as a body-servant, to clean his boots, wait on his table, and perform the menial services of his household! His blood revolts against this, and his necessities never drive him to it. He is a companion and an equal. When in the employ of the slaveholder, or in intercourse with him, he enters his hall, and has a seat at his table. If a distinction exists, it is only that which education and refinement may give, and this is so courteously exhibited as scarcely to strike attention. The poor white laborer at the North is at the bottom of the social ladder, while his brother here has ascended several steps, and can look down upon those who are beneath him at an infinite remove!
Furthermore, non-slaveholders aspire to buy slaves, or at least their kids might be able to. Besides, lots of non-slaveholders have been influential in Southern politics — e.g. the “McDuffies, Langdon Cheeves, Andrew Jacksons, Henry Clays, and Rusks, of the past; the Hammonds, Yanceys, Orrs, Memmingers, Benjamins, Stephens, Soules, Browns of Mississippi, Simms, Porters, Magraths, Aikens, Maunsel Whites, and an innumerable host of the present”.
Slavery brings prosperity, he argues. Brazil has lots of slaves and is prosperous, while Haiti and Jamaica have been impoverished since abolishing slavery.
Lastly, if the slaves are freed, the slaveholders will be wealthy enough to emigrate, while poor whites will have to stay and submit to the “degrading equality which must result”.
In Northern communities, where the free negro is one in a hundred of the total population, he is recognized and acknowledged often as a pest, and in many cases even his presence is prohibited by law. What would be the case in many of our States, where every other inhabitant is a negro, or in many of our communities, as, for example, the parishes around and about Charleston, and in the vicinity of New-Orleans, where there are from twenty to one hundred negroes to each white inhabitant? Low as would this class of people sink by emancipation in idleness, superstition, and vice, the white man compelled to live among them would, by the power exerted over him, sink even lower, unless, as is to be supposed, he would prefer to suffer death instead.
He concludes that nonslaveholders know that a Southern Confederacy is the only way to protect their rights. |
Özge Aydemir, Speaking Lesson - Telling the Time
Elementary (A1) level
By the end of the lesson, ss will have improved the accuracy and fluency of their speaking skill in the context of asking/telling the time in everyday conversations.
By the end of the lesson, ss will have been exposed to the TL of "telling the time" via specific information listening tasks.
Procedure (39-49 minutes)
T tells the class to watch the short video about Mr. Bean's morning routine and then talk in their pairs about the similarities/differences between his and their own morning routine. Ss watch the video and talk in pairs. There is W/C F/B.
T displays the HO. T tells the class to look at the clock and write the times individually. T monitors. T tells the class to check answers in pairs. T plays the track, ss listen to the answers. The answer key is displayed on the board via OHP. T drills examples chorally and individually.
T sets the task and then gives the HO. Ss listen and look at the times. T elicits meaning of "nearly", "just after", "almost" and "about" by using a model clock. T asks possible CCQs, e.g. "When it's nearly 3, is it a little before or after 3?", "When it's just after 5, is it much later than 5?" T drills examples chorally and individually.
T names the ss as "A"s and "B"s. T sets the task, tells the ss to complete the missing times on the clocks in the HOs by exchanging information and models a split personality example. T uses the possible ICQs at this stage; "Do 'A's and 'B's have the same clocks?, Will you show your paper to your partner?, etc." T gives the HOs for "A"s and "B"s. Ss work in pairs and use the TL to complete the missing times. T monitors. As F/B ss look at each other's papers and check their answers.
First, T pairs the ss up based on the time they get up in the morning. T guides the ss to mingle and ask/answer the times they get up, and order themselves accordingly. The ss sit back in this order. Next, T models the task with the W/C. T draws a clock, indicates the time and asks the time. T elicits the answer from the ss. Then, T names the ss as "A"s and "B"s and tells the ss to work in their pairs and ask/answer (A draws and asks, B answers). T asks the ICQs; "Who is drawing A or B?, Who is telling the time A or B?" T monitors and asks them to switch roles. T monitors and takes notes for delayed correction.
T ECDWs the word "hurry". Ss listen to the conversations and complete the blanks. Ss check answers in pairs. T monitors. Ss listen again if necessary. T tells the ss to find the AK under their chairs. Ss practice the dialogues and switch roles. T monitors and takes notes for the delayed F/B.
T arranges the seating as two circles. T demonstrates the example dialogue with the W/C. T asks the ss to ask the times of their daily activities and find out the most stressed person according to the answers. Then, there is W/C F/B.
T boards examples of errors and good usage of English and elicits the correct form of the errors from the ss. T drills the sentences. T uses backchain drilling. |
Make Reading to Your Child a Special Time
Bedtime stories are a favorite ritual for parents and young kids. Snuggling in bed together while you turn pages, point to favorite pictures, and teach your child to recognize objects and words is both a learning and a bonding experience. Research has shown that reading to a child starting at an early age can actually enhance IQ. Here are some ways you can make the most out of reading to your child.
It is never too early to start reading to your child. Infants can begin to show interest in black and white contrasted patterns and shapes as young as two months of age. Sit with your baby on your lap and hold a developmentally appropriate picture book 12 to 18 inches away from baby. Trace the patterns with your fingers. Use your voice to hold baby’s interest. You will find your baby’s attention span is very short in the early months, so don’t push baby to stay focused. Keep it fun.
Make It a Routine
As baby grows through 4 to 6 months of age she will enjoy reaching out, grasping and interacting with activity books. Make this a daily routine so your child becomes used to it and even grows to expect it. Find a time when baby is in a quiet, relaxed mood. As baby gets older, show her how to manipulate the objects correctly, such as zipping zippers, pulling off Velcro objects and putting them back on, undoing buttons, touching baby’s face in a mirror and making crinkle or rattle sounds.
Talk It Up
Be sure to continuously engage your baby’s attention with your voice. “Turn the page,” “Where’s the ball?,” “Open the door” and any other phrases that apply to your book will really work your child’s mind. As they grow through the second year of life you can introduce the concept of colors when reading to your child. Pick one or two colors to focus on and repeat them over and over. And there’s no better way to learn animals and animal sounds than by pointing them out in a book and making the sounds with your child.
Keep It On Your Child’s Level
When you begin reading storybooks, don’t just read the words. Make up the story yourself in a way that holds your child’s interest and imagination. Reading books that match your child’s favorite movie or TV characters will really get your child excited about reading. As your child gets older, start reading the story words as you deem appropriate to your child’s age. Simple phrases such as classic nursery rhymes are a perfect place to start.
Reading To Your Child Through The Years
Reading to your child is a timeless activity you should continue through the preschool and elementary years as long as possible. Read a chapter each night to your older child and have your child read a chapter to you in turn. This keeps you involved with your child’s imagination. You will not only be fostering a love for reading, which will have academic benefits, but you will be creating memories for your child to look back upon fondly so that they will in turn share this experience with their own kids someday. |
Five years ago, NASA’s Todd Ensign printed his first 3D model. He immediately understood the technology’s appeal — how the tool could turn an idea into reality — and what students could do with it if given the chance.
Now, he’s taking that spark of interest and teaching educators how to share the possibilities of 3D printing with students in K–12 classrooms. He hopes opening that door of creation for students might be a turning point for their careers.
"They can invent anything,” Ensign says. “They’ve designed it, and then through the printer, it's like it's coming alive before them.”
Ensign works at NASA’s Independent Verification and Validation Center in Fairmont, W.Va., testing alternative materials for improved aviation designs — custom wings, rocket caps and more. At NASA, 3D printing technology is called additive manufacturing, and it’s been a game-changer for his industry, Ensign says.
But, he says, the technology could have an equally transformative effect on education, if put into the right hands.
"I want students doing this as often as possible because it really captures their imagination,” he says.
Partnerships for Education
In April, Ensign taught a one-day workshop on 3D printing that combined the resources of a local industry, the minds of local educators and lessons on aviation from NASA. He trained 30 teachers from the region on how 3D printing models could be designed and used for K–12 lessons.
During the seminar, educators tapped into NASA’s Museum-in-a-Box series of lessons on aviation and physics to create concepts that could be used on a monster 3D printer, courtesy of the Robert C. Byrd Institute for Advanced Flexible Machinery (RCBI).
Reinforcing STEM-based lessons like these will be crucial in preparing the next generation for modern technology jobs, says RCBI CEO Charlotte Weber.
Successful completion of the course allows teachers to borrow a $1,000 kit of STEM-based learning activities from NASA. The class also empowers teachers to bring student-created models to life using the site’s printer, which is something Ensign always loves seeing.
Word spread quickly about the workshop’s success, and another session planned for the fall is already booked up, he says.
A Boost to STEM
"It's such a compelling tool, I feel like it could achieve our goal to generate more student enrollment in STEM careers,” Ensign said. “We are facing a huge shortage in those careers as the baby boomers are retiring.”
While Ensign’s workshop was limited to West Virginia, he sees the potential for this kind of program across the country. Access to 3D printing technology is growing. Schools, businesses and even public libraries have 3D printers of their own. MakerBot has made education a priority, offering their Replicator 2 model to schools at a discounted price.
All that’s missing is the educational know-how.
Students and teachers can download thousands of free 3D models through Thingiverse, and MakerBot has its own classroom-based models ready for consumption — including a frog dissection kit and the Great Pyramid of Giza.
Learn more about NASA’s Museum-in-a-Box curriculum on its website. |
By Morgan Kelly, Office of Communications
Beyond the pounding surf loved by novelists and beachgoers alike, the ocean contains rolling internal waves beneath the surface that displace massive amounts of water and push heat and vital nutrients up from the deep ocean.
Internal waves have long been recognized as essential components of the ocean’s nutrient cycle, and key to how oceans will store and distribute additional heat brought on by global warming. Yet, scientists have not until now had a thorough understanding of how internal waves start, move and dissipate.
Researchers from the Office of Naval Research’s multi-institutional Internal Waves In Straits Experiment (IWISE) have published in the journal Nature the first “cradle-to-grave” model of the world’s most powerful internal waves. Caused by the tide, the waves move through the Luzon Strait between southern Taiwan and the Philippine island of Luzon that connects the Pacific Ocean to the South China Sea.
Combining computer models constructed largely by Princeton University researchers with on-ship observations, the researchers determined the movement and energy of the waves from their origin on a double-ridge between Taiwan and the Philippines to when they fade off the coast of China. Known to provide nutrients for whales and pose a hazard to shipping, the Luzon Strait internal waves move west at speeds as fast as 3 meters (about 10 feet) per second and can be as much as 500 meters (1,640 feet) from trough to crest, the researchers found.
The Luzon Strait internal waves provide an ideal archetype for understanding internal waves, explained co-author Sonya Legg, a Princeton senior research oceanographer in the Program in Atmospheric and Oceanic Sciences and a lecturer in geosciences. The distance from the Luzon Strait to China is relatively short — compared to perhaps the Hawaiian internal wave that crosses the Pacific to Oregon — and the South China Sea is relatively free of obstructions such as islands, crosscurrents and eddies, Legg said. Not only did these factors make the waves much more manageable to model and study in the field, but also resulted in a clearer understanding of wave dynamics that can be used to understand internal waves elsewhere in the ocean, she said.
“We know there are these waves in other parts of the ocean, but they’re hard to look at because there are other things in the way,” Legg said. “The Luzon Strait waves are in a mini-basin, so instead of the whole Pacific to focus on, we had this small sea — it’s much more manageable. It’s a place you can think of as a laboratory in the ocean that’s much simpler than other parts of the ocean.”
Legg and co-author Maarten Buijsman, who worked on the project while a postdoctoral researcher at Princeton and is now an assistant professor of physical oceanography at the University of Southern Mississippi, created computer simulations of the Luzon Strait waves that the researchers in the South China Sea used to determine the best locations to gather data.
For instance, Legg and Buijsman used their models to pinpoint where and when the waves begin with the most energy as the ocean tide crosses westward over the strait’s two underwater ridges. Notably, their models showed that the two ridges greatly amplify the size and energy of the wave, well beyond the sum of what the two ridges would generate separately. The complexity of a two-ridge system was not previously known, Legg said.
The energy coming off the strait’s two ridges steepens as it moves toward China, evolving from a rolling wavelength to a steep “saw-tooth” pattern, Legg said. These are the kind of data the researchers sought to gather — where the energy behind internal waves goes and how it changes on its way. How an internal wave’s energy is dissipated determines the amount of heat and nutrients that are transferred from the cold depths of the lower ocean to the warm surface waters, or vice versa.
Models used to project conditions on an Earth warmed by climate change especially need to consider how the ocean will move excess heat around, Legg said. Heat that stays at the surface will ultimately result in greater sea-level rise as warmer water expands more readily as it heats up. The cold water of the deep, however, expands less for the same input of heat and has a greater capacity to store warm water. If heat goes to the deep ocean, that could greatly increase how much heat the oceans can absorb, Legg said.
As researchers learn more about internal waves such as those in the Luzon Strait, climate models can be tested against what becomes known about ocean mechanics to more accurately project conditions on a warmer Earth, she said.
“Ultimately, we want to know what effect the transportation and storage of heat has on the ocean. Internal waves are a significant piece in the puzzle in telling us where heat is stored,” Legg said. “We have in the Luzon Strait an oceanic laboratory where we can test our theoretical models and simulations to see them play out on a small scale.”
This work was supported by the U.S. Office of Naval Research and the Taiwan National Science Council.
Matthew H. Alford, et al. 2015. The formation and fate of internal waves in the South China Sea. Nature. Article published online in-advance-of-print May 7, 2015. DOI: 10.1038/nature14399 |
The energy demands of the world are continuously increasing. Experts are worried about the future of power generation because there are not enough supplies of coal, water and gas to fulfill the needs of mankind in the long term future. Alternative sources of energy such as nuclear energy are being developed. Nuclear energy has several advantages over other sources of energy because it is not limited by space or location. In this article we will learn about nuclear power plants and some of the basic underlying concepts.
What is a Nuclear Power Plant?
As the name itself suggests, a nuclear power plant is a facility where nuclear energy is harnessed to generate electricity. For those of us who haven’t heard of this term, it may seem like a new concept, since we usually hear of atomic and hydrogen bombs, which use nuclear energy for large-scale destruction. But the same power is used for constructive purposes in nuclear power plants.
The basic underlying principle of a nuclear power plant can be understood from the equation of mass-energy equivalence, which is stated as follows:
E = ∆mc²
where E is the amount of energy released when a change in mass ∆m occurs during a nuclear reaction. This equation may not seem very complicated, but as you know, “c” represents the speed of light, which is about 300,000 kilometers (3 lakh kilometers) per second. Just imagine the amount of energy released even if a tiny amount of mass is converted into energy.
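A quick worked example (illustrative numbers of my own, including the rough figure for coal's energy content) shows the scale involved:

# Worked example of mass-energy equivalence (illustrative numbers only).
c = 3.0e8                      # speed of light, m/s
delta_m = 0.001                # 1 gram of mass converted, in kg

energy_joules = delta_m * c ** 2
print(f"Converting 1 g of mass releases about {energy_joules:.1e} J")

# For scale: burning coal releases roughly 2.4e7 J per kg (an assumed round
# figure), so 1 g of converted mass is comparable to burning thousands of
# tonnes of coal.
coal_energy_per_kg = 2.4e7
print(f"Equivalent to burning about {energy_joules / coal_energy_per_kg:,.0f} kg of coal")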
This gives an edge to nuclear power plants over conventional sources like coal or gas because it means freedom from geographical factors and parameters. Furthermore since the amount of fuel required is much less as compared to conventional sources of power generation, there is no need to have extensive storage facilities and transportation networks for the same amount of power generated.
Basic Nuclear Reactions
Nuclear reactions fall into two major categories: fission and fusion. Fission refers to the nuclear reaction where a heavy nucleus is broken into nuclei of intermediate atomic number. Fusion refers to the nuclear reaction wherein light nuclei get combined to form a new nucleus.
Energy can be either released or absorbed during the process depending on whether the final mass of the products is greater than or less than the initial mass of the reactants.
The Chain Reaction
The above-mentioned types of reactions are not of much use for generating electrical energy on their own. We require something known as a controlled chain reaction if power is to be generated in a nuclear power plant. When fission is started in a nuclear material, it could die out slowly, sustain itself at a constant rate, or develop into an uncontrolled reaction. The first and the last options are not useful for the generation of electricity. It is only when we have a sustained, controlled reaction that we can utilize nuclear energy in an effective manner.
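The three possibilities can be pictured with a toy multiplication-factor model (my own simplification, not from this article): if each fission generation produces k times as many neutrons as the last, then k < 1 dies out, k = 1 sustains itself, and k > 1 grows without control:

# Toy illustration of the neutron multiplication factor k in a chain reaction.
def neutron_population(k, generations, start=1000):
    population = start
    for _ in range(generations):
        population *= k        # each generation multiplies the neutron count by k
    return population

for k in (0.95, 1.00, 1.05):
    print(f"k = {k:.2f}: after 100 generations -> {neutron_population(k, 100):,.0f} neutrons")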
There are lots of other interesting things to be learnt about nuclear power plants regarding their working, layout, processes and so forth which we shall do in later articles in this series. |
Paddlefish are primitive fish that have occurred in North America since the Cretaceous period, some 65 million years ago. The species is thought to have historically used Tippo Bayou, which runs through Tallahatchie NWR, as a spawning area.
Paddlefish can live up to 55 years (though the average lifespan is 20-30), growing to be over seven feet long and up to 200 pounds; the average paddlefish, however, reaches about five feet in length and 60 pounds. Like sharks, paddlefish have skeletons made of cartilage, not bone. Paddlefish are easy to identify by the long, flat, blade-like extension of the upper jaw, known as a rostrum, which makes up almost one-third of the entire body length. The underside of the "paddle," or rostrum, is covered with electroreceptors that gather information about the surrounding environment. The rostrum is thought to help detect prey, direct plankton into the mouth, or facilitate migratory behavior.
As filter feeders, paddlefish have no teeth and instead use large gill rakers to strain zooplankton out of the water. They feed by swimming with the mouth held wide open, scooping up these tiny animals as they go, a behavior that is rare among freshwater fish.
Paddlefish are most often found in the deeper, slow-moving waters of backwaters, oxbows, and other turbid river-lakes. They occur in most large river systems throughout the Mississippi Valley and adjacent Gulf slope drainages in North America. The paddlefish is a highly mobile species, sometimes traveling more than 2,000 miles within a river system.
Although these fish were once common in the rivers of the central U.S., they are now declining in population and distribution. Population declines are attributed to overharvest, sedimentation, pollution, habitat loss from river modifications, and increased competition from various species of introduced Asian carp.
Chromatography: An Open Inquiry
This lab activity helps students start thinking in terms of the scientific process. After a short demonstration on chromatography, the students will brainstorm ways to make the experiment different. Students will also develop a new, testable question related to chromatography and write a procedure to gather data on their new question. From this process they will be able to draw conclusions about the experiment as a whole group.
1. This activity is designed for students to use critical thinking, experimental design and data analysis throughout a scientific investigation.
2. This activity is designed for students to use skills such as observation, questioning, laboratory techniques, and oral and written presentations.
Context for Use
Chromatography can be used in grades K-12. For the younger grades, a simple demonstration can be shown for the visual learners. The activity can easily be done in one class period; however, if you want to use it as an extension of the scientific method, it will take several class periods. Chromatography does not require any special equipment; most of what you need can be found in your storage cabinet. Because it is so simple, no special skills need to be taught before the lesson.
Resource Type: Activities:Lab Activity
Grade Level: Intermediate (3-5)
Description and Teaching Materials
I am using this lesson to incorporate more inquiry into my lessons and to teach students to use the scientific method in their approach to solving questions. Since I will be teaching this lesson to fifth grade students, I expect them to be able to choose a variable and design an experiment that will answer their own question.
Materials needed for this activity include: water-based markers, filter paper, cups, pencils, tape, and water. Extra items to have on hand for student-driven experiments include: permanent markers (make sure they are not the primary colors), coffee filters, regular white copy paper, white construction paper, alcohol, and vinegar.
I intend to model the chromatography activity. To do this, cut filter paper into strips approximately 1" wide. This will allow the students to get a good look at the inks in the various markers.
- Ask the students, "What two colors make orange"
- At this age they should respond red and yellow.
- Ask the students, "How do we really know that there is red and yellow ink in the orange maker?"
Take all acceptable answers. Tell them that you are going to separate the orange ink into its components to test this hypothesis. Next, draw a line across the filter paper approximately 1" above the bottom, using an orange water-based marker for the demonstration. Then tape the filter paper to a pencil and hang it so that the paper touches the water but the mark stays above the water level. Leave the filter paper in place until the ink has been drawn up the paper by capillary action. When the ink has separated, take the filter paper out of the water and lay it on a clean piece of copy paper.
- Ask the students, "Did the orange marker separate into the colors you expected?"
Next ask them how they could change the experiment. At this point you should get several answers leading to many variables; make a list of their responses on the board. When they have brainstormed as many answers as possible, tell them that they are going to repeat the experiment, only this time each group will choose one of the ideas from the board. Put the students into groups of four according to the variable they chose to test. At this point the students must write a hypothesis about what they think will happen in their experiment, and they must plan and design an experiment to test that hypothesis.
At the end of the activity, have all groups share their original hypotheses and results. Your questions will depend on the variables the students chose to test; they should have written a good hypothesis and a procedure that tests it.
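For groups that want to put numbers on their separations when they share results, one common measure from chromatography (not something this activity requires) is the retention factor, Rf: the distance a color band travels divided by the distance the water travels. A small sketch of the arithmetic, with made-up measurements:

```python
# Sketch: retention factor (Rf) for a paper-chromatography run.
# Rf = distance moved by a color band / distance moved by the solvent front.
# The measurements below are made up for illustration.

def retention_factor(band_cm: float, solvent_front_cm: float) -> float:
    """Return the Rf value, which always falls between 0 and 1."""
    if solvent_front_cm <= 0:
        raise ValueError("solvent front distance must be positive")
    return band_cm / solvent_front_cm

if __name__ == "__main__":
    solvent_front = 8.0                            # cm traveled by the water
    bands = {"yellow band": 6.5, "red band": 3.2}  # cm traveled by each color
    for name, distance in bands.items():
        print(f"{name}: Rf = {retention_factor(distance, solvent_front):.2f}")
```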
Teaching Notes and Tips
1. Since the students will decide how the experiment can be changed, be prepared to have several items on hand to accommodate the variables. However, my students will be limited to the everyday items that I keep in stock.
2. Whenever using solvents other than water make sure the students use goggles.
3. We have a no eating or drinking rule in the lab so this shouldn't be a problem but be prepared for anything.
4. The purpose of this experiment is to make sure the students use the scientific method to solve a question. Chromatography is easy but the purpose must be kept in mind.
Assessment for this activity will come from the hypotheses and procedures the students document in their science journals. If they wrote a good procedure that tested their original hypothesis, then their experiment should work as they planned. The end result is not the important part of this activity; only assess the hypothesis and procedure to see whether they used the scientific method to answer their question.
5.I.A.1 - Scientific Investigation
5.I.A.2 - Communication
5.I.B.1 - Controlled Experiment
5.I.B.2 - Scientific Investigation
References and Resources |
Uraninite is a uranium-rich mineral with a composition that is largely UO2 (uranium dioxide), but which also contains UO3 and oxides of lead, thorium, and rare earths. It is most commonly known in the variety pitchblende (from pitch, because of its black color, and blende, a term used by German miners for minerals whose weight suggested metal content but whose exploitation was, at the time they were named, either impossible or not economically feasible). All uraninite contains a small amount of radium as a radioactive decay product of uranium; it was in pitchblende from Jáchymov (then also known as Joachimsthal), in what is now the Czech Republic, that Marie Curie discovered radium. Uraninite also always contains small amounts of the lead isotopes Pb-206 and Pb-207, the end products of the decay series of the uranium isotopes U-238 and U-235 respectively. Small amounts of helium are also present in uraninite as a result of alpha decay; helium was first detected on Earth in uraninite after previously being discovered spectroscopically in the Sun's atmosphere. The extremely rare element technetium can be found in uraninite in very small quantities (about 0.2 ng/kg), produced by the spontaneous fission of uranium-238.
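Because those lead isotopes accumulate at a rate fixed by uranium's half-lives (commonly quoted as about 4.47 billion years for U-238 and 704 million years for U-235), the amount of radiogenic lead in a uraninite grain grows predictably with its age. A minimal sketch of the standard decay relationship, with an illustrative mineral age of my own choosing:

```python
# Sketch: radiogenic lead accumulated per remaining uranium atom over time.
# Decay law: daughter / parent = exp(lambda * t) - 1, with lambda = ln(2) / half-life.
# Half-lives are the commonly quoted values; the one-billion-year age is illustrative.

import math

HALF_LIFE_YEARS = {
    "U-238 -> Pb-206": 4.468e9,
    "U-235 -> Pb-207": 7.04e8,
}

def daughter_per_parent(half_life_years: float, age_years: float) -> float:
    """Return the ratio of accumulated daughter atoms to remaining parent atoms."""
    decay_constant = math.log(2) / half_life_years
    return math.exp(decay_constant * age_years) - 1.0

if __name__ == "__main__":
    age = 1.0e9  # an illustrative age of one billion years
    for chain, half_life in HALF_LIFE_YEARS.items():
        print(f"{chain}: {daughter_per_parent(half_life, age):.3f} "
              f"daughter atoms per remaining parent atom")
```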
Uraninite is a major ore of uranium. An important occurrence of pitchblende is at Great Bear Lake in the Northwest Territories of Canada, where it is found in large quantities associated with silver. Some of the highest grade uranium occurrences in the world occur in the Athabasca Basin in northern Saskatchewan. It also occurs in Australia, Germany, England, and South Africa, and in New Hampshire, Connecticut, North Carolina, Wyoming, Colorado and New Mexico in the United States. |
Published on January 17, 2008
Power Sources (with thanks to nbsp.sonoma.edu)

Transferring Energy
Electric current is generated in a power plant, sent out over a power grid to your home, and ultimately to your power outlets.

Power Plant
A power plant is anything that generates energy:
1 – Windmills
2 – Hydroelectric
3 – Solar (photovoltaic)
4 – Nuclear
5 – Coal, oil, or natural gas
6 – Tidal converters
7 – Biomass converters
8 – Geothermal
9 – Ocean thermal
10 – Solar (heat dish)

How Do We Generate Energy?
When an electric current flows through a wire it generates a magnetic field. A magnet can attract and move metal, and moving metal can be useful. For example, the magnetic pull causes an armature to spin; if a fan blade is attached to that armature, you get a cool breeze (power in). Run the other way around, wind applies a force to the blades and turns them; the spinning blades spin an armature that turns the wire relative to the magnetic field, and as long as the blades spin, electricity is generated (power out).

Generator & Turbine
Typically something spins the turbine, which spins the magnet in the coil, which creates a current.

Two Terms
- Watt: a measure of how much power something can supply.
- Efficiency: how much actual energy a source supplies versus its ideal potential.

Common Non-renewable Power Sources
Hydrocarbon burning (coal, oil, and natural gas) and nuclear.

Hydrocarbons (Fossil Fuels)
The name "fossil fuels" comes from the long-held, although perhaps inaccurate, belief that natural gas, oil, and coal are the products of dead dinosaurs and plants (hence "fossil"). A hydrocarbon is anything made up of hydrogen and carbon (e.g. oil, natural gas). Standard large power plants provide 500 MW – 1000 MW of electricity and twice that in thermal waste; coal- or oil-fired plants are 30–40% efficient. To power one standard light bulb for a year would require burning 714 pounds of coal. A typical 500 MW coal power plant produces 3.5 billion kWh per year, enough energy for 4 million such light bulbs to operate year round; to produce this, the plant burns 1.43 million tons of coal. (These figures are sanity-checked in the sketch after the deck.)

How it works: oil, coal, or natural gas is burned; the heat boils water, or the hot gas itself turns a turbine. The fuel comes from oil fields and coal mines.

Problems with hydrocarbons:
- Very low efficiency
- Very high pollution (CO2, sulfur, acid rain, waste heat, etc.)
- Ecological disruption
- Resource wars
- A limited resource

The Nukes!
Nuclear power is the use of radioactive material to produce energy. A typical plant's electrical output is 1220 MW at 34% efficiency; approximately 100 reactors in the United States produce 22% of our electricity.

How nuclear power works: radioactive material heats up water, and the steam from the water turns a turbine.

Problems with nuclear power:
- In normal operations a reactor produces some environmental emissions (i.e. the escape of radioactive material through cracks in the system).
- The possibility of a core meltdown, as at Chernobyl.
- Continuous cooling: even after shutdown there is enough heat for a meltdown if cooling water is not supplied.
- Terrorism and nuclear proliferation: a 5.3 kg ring of plutonium is enough for a modern nuclear weapon.
- Nuclear fuel is a limited resource, just like fossil fuels.

Natural/Passive/Renewable Power Sources
Wind, hydroelectric, solar (heat), solar (photovoltaic), biomass converters, geothermal, ocean thermal, and tidal. They are called renewable because they are made from easily renewable resources or run entirely off the Earth's natural cycles.

Wind Power
Wind power attains roughly 50% efficiency. A windmill's average energy output depends on the amount of wind present and the size of the windmill; wind farms tend to generate between ½ and 1 MW. (A map of annual average wind power density at 50 m accompanied this slide.)

Problems with wind power:
- Wind variability
- Basic energy storage
- Dangerous to birds
- Can be noisy
- Can be ugly

Hydroelectric Power
Conversion from the potential energy of water to electric energy runs at about 70% efficiency or higher. Hydroelectric projects in the United States have rated capacities from 950 to 6,480 MW. The hydrologic cycle drives it: water evaporates, rains down onto the land, then runs down rivers where it can push a water wheel and power a generator.

Problems with hydroelectric power:
- About 50% of the United States' potential for hydroelectric energy has been tapped, so further advances are unlikely.
- The Wild and Scenic Rivers Act and the Endangered Species Act have inhibited development of some sites.
- Silt collecting in dam storage volumes over time causes maintenance issues as well as environmental concerns.
- The loss of free-flowing streams and land flooded behind the dam disturbs species such as salmon.
- Possibility of dam failure.

Solar (Heat) Dish Power
This relies on the same principles of evaporation and condensation. The Sun heats up water, which rises as steam and turns a turbine; this in turn generates electricity via a standard magnet-and-coil generator. Finally, the water condenses and falls back down to repeat the cycle.

Problems with solar (heat) power: the output depends on the amount of sunshine an area gets, so areas with little sunlight cannot use it. More often, this technology is used to heat water for a home rather than to power it.

Solar Photovoltaic Cells
Photovoltaics is the process of using semiconductors to convert sunlight into electrical energy. Efficiency runs between 6% and 40.7% depending on the type and usage of the cell, with an average of about 15%. Actual total power output depends on the size of the cell: the larger the cell, the more light it collects, the more power it generates, and the more it costs to make. On a bright, sunny day, the sun delivers approximately 1,000 watts of energy per square meter of the planet's surface.

How solar cells work: when light hits certain materials it energizes them, usually in the form of heat. However, when the object is a semiconductor, particularly silicon doped with phosphorus, the light "knocks" electrons loose that can then flow through the material.

Problems with solar cells:
- Only good where there is lots of sunlight.
- Requires the mining of silicon, which can take lots of energy and increases the cost of computers.
- The panels are ugly.

Biomass Converters
Biomass conversion is the burning of recently living materials or their byproducts: palm or olive oils, wood, corn, husks, flaxseed, grasses, leaves, and manures. These work just as every other burned fuel does: they turn a turbine. Some biomass material is used to create biofuel, such as ethanol from corn. Ethanol is very inefficient: about 3/4 of a gallon of fuel is required to produce one gallon of ethanol, whereas gasoline is about 1 gallon in for every 20 out.

Problems with biomass:
- Inefficient (worse than hydrocarbons).
- Requires a widespread cultural change to be effective (not everyone has access to cow poop!).
- Ethanol cannot be transported long distances like gasoline.
- Produces pollution similar to oil and coal.

Geothermal Energy
Geothermal (geo = earth, thermal = heat) is the process of taking heat energy from the Earth to power turbines and generate electricity. The Earth's core produces huge amounts of heat, partially due to radioactive activity underground, and this heat often rises to the surface as lava or hot springs (geysers). Steam let out of fractures in the ground rises, pushes the fan blades of a turbine, and spins a generator. (An example slide showed a geothermal power plant in the Philippines.)

Problems with geothermal: current plants run at about 15 MW to 65 MW, although 100 MW plants are planned, and geothermal can only be used where there is sufficient thermal/volcanic activity.

Ocean Thermal
The difference in temperature between various layers or locations of ocean water is harnessed to create electricity via a turbine. Warmer water is used to boil a working fluid such as ammonia; the ammonia vapor turns a turbine, and colder water then cools the vapor, which condenses back to the bottom.

Problems with ocean thermal: at best, current plants pump out only about 1 MW; the plants can only be built near the tropics; and the system is not cost-competitive with other power systems.

Tidal
The ever-changing tides are used to push a turbine placed into a box.
- Method 1 works a lot like a hydroelectric plant mixed with a windmill: the tide flows in with the currents and spins a turbine that generates a current.
- Method 2 uses a box that fills with water at high tide. The box has a hole at the top above a turbine; when the water rises it pushes air out that turns the turbine, and when it falls it pulls air back in, turning the turbine again. Such schemes can produce from ½ MW to 240 MW depending on the size of the project, at roughly 80% efficiency.

Problems with tidal generators: tidal power schemes do not produce energy all day, but rather for 6 to 12 hours. Because the tidal cycle is based on the rotation of the Earth with respect to the Moon, while the demand for electricity follows the Earth's 24-hour day, the two don't always line up. Tidal turbines can also kill fish.

Now That You Know
What power option do you think is the BEST for us here in Michigan? Why? |
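The coal figures in the hydrocarbon slides are easy to sanity-check. A small sketch, assuming a 100 W bulb running all year and a short ton of 2,000 pounds (both assumptions of mine, since the slides do not state them):

```python
# Sanity check of the slides' coal arithmetic: a 500 MW plant producing
# 3.5 billion kWh per year from 1.43 million tons of coal, feeding 4 million bulbs.
# The 100 W bulb and the 2,000 lb short ton are my assumptions, not from the slides.

PLANT_OUTPUT_KWH = 3.5e9   # annual electrical output quoted for the plant
COAL_TONS = 1.43e6         # annual coal burned, in short tons
BULBS = 4.0e6              # number of always-on light bulbs quoted
HOURS_PER_YEAR = 24 * 365

kwh_per_bulb = 0.100 * HOURS_PER_YEAR        # a 100 W bulb running year round
bulbs_supported = PLANT_OUTPUT_KWH / kwh_per_bulb
pounds_per_bulb = COAL_TONS * 2000 / BULBS   # convert short tons to pounds

print(f"One always-on 100 W bulb uses about {kwh_per_bulb:.0f} kWh per year")
print(f"3.5 billion kWh therefore runs about {bulbs_supported:,.0f} such bulbs")
print(f"Coal burned per bulb: about {pounds_per_bulb:.0f} pounds per year")
```

The results land close to the quoted 4 million bulbs and 714 pounds of coal per bulb, so the slides' numbers are at least internally consistent.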
Galilee Diary: Uncertainty Principle
How did they examine the witnesses? The pair that arrived first they examined first. And they brought in the elder of the two and said to him: Relate how you saw the moon: in front of the sun or behind the sun? To the north of it or to the south of it? How high was it? And how wide was it? If he said, In front of the sun, his statement was worth naught. And then they brought in the second one and examined him. If their statements were found to agree, their evidence stood… The head of the court said, It is sanctified! And all the people answered after him, It is sanctified. Whether it was seen at its proper time, or whether it was not observed at its due time, they proclaimed it sanctified.
-Mishnah Rosh HaShanah 2:6-7
The details of the lunar cycle have been understood for thousands of years. Ancient observers made precise observations, and kept exact records, and because they cared more than we generally do about the movements of the heavenly bodies, they knew more than most of us do about them. Thus, while the Mishnah (around 200 CE) goes into great detail about the procedure for recruiting and examining witnesses to the exact moment of the “birth” of the new moon each month, this ritual was largely symbolic, because the rabbis knew quite precisely when the moon would be new. By the 4th century, this ritual had fallen into disuse, and the calendar was “fixed,” calculated and published, so that everyone could know, to the second, just when each new moon would fall, for all time. The Muslims, interestingly, continue to determine their calendar by observation, so there remains a small range of uncertainty each year regarding the beginning and end of the month of Ramadan – and occasional disagreements between different authorities.
The margin of error, even under a system of observation, can’t be more than a day, because the cycle goes on even if we don’t manage to observe it; if witnesses couldn’t be found to pinpoint the moment of “birth” of the moon, it would be declared by default on the day after; i.e., the moon was obviously “born,” on time, whether we saw it or not. But this variation of one day matters if it determines the timing of holy days; for example, if you get the date of the first of Tishrei wrong, then you’ll get Yom Kippur wrong, and end up eating on Yom Kippur and fasting when it’s not. So the religious/political authority to determine the new moon was critical to keeping the people united. Once there was a Diaspora, there was a natural power struggle between Israel and the far-flung communities – and a clear understanding that if there were no central calendrical authority, the communities would drift apart. Hence the Mishnah goes on to describe the system for disseminating the date of the official new moon, by beacons and later by runners, to the communities of Babylonia. Since sometimes the news took a long time to arrive, the law developed of observing the three Torah-based festivals – Sukkot, Pesach, and Shavuot – for two days, to cover both options regarding their possible date if a firm determination of the preceding Rosh Chodesh was not received in time (Purim and Chanukah are not Torah-based; Yom Kippur simply can’t be observed for two days). To this day, traditional Jews observe these festivals for two days in the Diaspora, whereas in Israel, the original source of the observation, one day is enough. Rosh HaShanah, however, has always remained two days even in Israel – it is the only festival that falls itself on Rosh Chodesh, so the uncertainty about its observation was a factor even in Jerusalem, without the problem of delay of transmission. Therefore, the second day of this holiday is not a Diaspora custom, but inherent in the day itself.
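To see how a calculated calendar can fix the new moon "to the second," consider the mean lunation interval used in the fixed Hebrew calendar: 29 days, 12 hours and 793 "parts," where a part (chelek) is 1/1080 of an hour. The sketch below simply adds that interval repeatedly to an arbitrary starting moment; the starting timestamp is a placeholder of mine, not an actual molad.

```python
# Sketch: stepping from one mean new moon (molad) to the next using the fixed
# interval of the calculated Hebrew calendar: 29 days, 12 hours, 793 parts,
# where one part (chelek) is 1/1080 of an hour (3 1/3 seconds).
# The starting timestamp is an arbitrary placeholder, not a real molad.

from datetime import datetime, timedelta

PART = timedelta(hours=1) / 1080                      # one chelek
LUNATION = timedelta(days=29, hours=12) + 793 * PART  # mean molad-to-molad interval

def successive_molads(start: datetime, count: int):
    """Yield `count` mean new-moon times, starting from `start`."""
    t = start
    for _ in range(count):
        yield t
        t += LUNATION

if __name__ == "__main__":
    placeholder = datetime(2024, 1, 11, 12, 0, 0)  # arbitrary starting moment
    for molad in successive_molads(placeholder, 4):
        print(molad.isoformat(sep=" "))
```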
In 19th century Europe, many Jews rebelled against second day festival observance, as impractical and irrational and obsolete. Ultimately the lines were drawn according to the new denominations, with Reform rejecting the second day, and Orthodoxy refusing to question it.
Interesting: if holidays are wonderful, why wouldn't one want to double them? But clearly we don't. I think most traditional Jews would agree that single-day holiday observance is an advantage enjoyed by those who live in Israel, a liberation for those making aliyah. The distinction, I guess, still has the effect of implying that in Jewish life there is a center – and there is a periphery. Is there? |
As scientists race to exploit graphene in the high-profile realm of electronic components, researchers working on other applications of this two-dimensional, atom-thin sheet of carbon are also making some interesting discoveries.
For instance, a coating of graphene has been shown to increase copper’s resistance to corrosion by 100 times. To put this discovery into context, study co-author Dr Mainak Majumder explains: "At this point we are almost 100 times better than untreated copper. Other people are maybe five or six times better, so it's a pretty big jump."
The team used standard chemical vapour deposition to coat their research materials with the ultra-thin film of graphene. In their paper, they describe the results as "counterintuitive", because graphite — the form of carbon from which graphene was originally derived — increases metallic corrosion when in contact with metals.
Discovery's immediate importance
Experimentalist Dr Parama Banerjee, who described graphene as a "magic material", said the discovery would be of immediate importance to coastal communities, where the effects of salt water are well known.
"In nations like Australia, where we are surrounded by ocean, it is particularly significant that such an atomically thin coating can provide protection in that environment," Dr Banerjee said.
The researchers have so far only investigated the effect of coating copper, but are now expanding their range and looking at other metals too. The applications would be wide-ranging: from ships, to the food industry and even electronics — anywhere metals are at risk of corrosion.
The research, carried out at Monash University in Australia and Rice University in the USA, is published in the September edition of the journal Carbon. |