Most bacteria rely on binary fission for propagation. Conceptually this is a simple process; a cell just needs to grow to twice its starting size and then split in two. But, to remain viable and competitive, a bacterium must divide at the right time, in the right place, and must provide each offspring with a complete copy of its essential genetic material. Bacterial cell division is studied in many research laboratories throughout the world. These investigations are uncovering the genetic mechanisms that regulate and drive bacterial cell division. Understanding the mechanics of this process is of great interest because it may allow for the design of new chemicals or novel antibiotics that specifically target and interfere with cell division in bacteria.
Before binary fission occurs, the cell must copy its genetic material (DNA) and segregate these copies to opposite ends of the cell. Then the many types of proteins that comprise the cell division machinery assemble at the future division site. A key component of this machinery is the protein FtsZ. Protein monomers of FtsZ assemble into a ring-like structure at the center of a cell. Other components of the division apparatus then assemble at the FtsZ ring. This machinery is positioned so that division splits the cytoplasm and does not damage DNA in the process. As division occurs, the cytoplasm is cleaved in two, and in many bacteria, new cell wall is synthesized. The order and timing of these processes (DNA replication, DNA segregation, division site selection, invagination of the cell envelope and synthesis of new cell wall) are tightly controlled.
Some Unusual Forms of Reproduction in Bacteria:
There are groups of bacteria that use unusual forms or patterns of cell division to reproduce. Some of these bacteria grow to more than twice their starting cell size and then use multiple divisions to produce multiple offspring cells. Some other bacterial lineages reproduce by budding. Still others form internal offspring that develop within the cytoplasm of a larger "mother cell". The following are a few examples of some of these unusual forms of bacterial reproduction.
Baeocyte production in the cyanobacterium Stanieria
Stanieria never undergoes binary fission. It starts out as a small, spherical cell approximately 1 to 2 µm in diameter. This cell is referred to as a baeocyte (which literally means "small cell"). The baeocyte begins to grow, eventually forming a vegetative cell up to 30 µm in diameter. As it grows, the cellular DNA is replicated over and over, and the cell produces a thick extracellular matrix. The vegetative cell eventually transitions into a reproductive phase where it undergoes a rapid succession of cytoplasmic fissions to produce dozens or even hundreds of baeocytes. The extracellular matrix eventually tears open, releasing the baeocytes. Other members of the Pleurocapsales (an Order of Cyanobacteria) use unusual patterns of division in their reproduction (see Waterbury and Stanier, 1978).
Budding in bacteria
Budding has been observed in some members of the Planctomycetes, Cyanobacteria, Firmicutes (a.k.a. the Low G+C Gram-Positive Bacteria) and the prosthecate Proteobacteria. Although budding has been extensively studied in the eukaryotic yeast Saccharomyces cerevisiae, the molecular mechanisms of bud formation in bacteria are not known. A schematic representation of budding in a Planctomyces species is shown below.
Intracellular offspring production by some Firmicutes
Epulopiscium spp., Metabacterium polyspora and the Segmented Filamentous Bacteria (SFB) form multiple intracellular offspring. For some of these bacteria, this process appears to be the only way to reproduce. Intracellular offspring development in these bacteria shares characteristics with endospore formation in Bacillus subtilis.
In large Epulopiscium spp. this unique reproductive strategy begins with asymmetric cell division, see The Epulopiscium Life Cycle Figure. Instead of placing the FtsZ ring at the center of the cell, as in binary fission, (A) Z rings are placed near both cell poles in Epulopiscium. (B) Division forms a large mother cell and two small offspring cells. (C) The smaller cells contain DNA and become fully engulfed by the larger mother cell. (D) The internal offspring grow within the cytoplasm of the mother cell. (E) Once offspring development is complete the mother cell dies and releases the offspring. |
On a wintry day late last year, I visited the Museum of Natural History in New York City. While touring the geology wing, I came across this boulder-sized chunk of a rock formation:
It was out in the open with no ropes or glass around it, inviting visitors to touch it. I brushed a hand across its polished surface, which was as smooth and cool as a sheet of glass. Nothing about that touch hinted at the stone’s age or history; yet it had traveled down immense vistas of time to come here, to our era, so that I could see and touch it on that day. And in the moment of that touch, I knew that I, a modern Homo sapiens, was briefly reunited with predecessors ancient beyond imagining, perhaps some that date back almost to the origin of life on Earth itself.
The curious, gorgeously colored strata of this stone are called banded iron formations. The dark bands are layers of metallic iron oxide compounds such as magnetite and hematite, while the reddish layers are silica-rich quartz minerals like chert, jasper and flint. Banded iron formations occur almost exclusively in very ancient rocks, and are common in strata dating to between 2.5 billion and 1.8 billion years ago. This is the period commonly called the Precambrian, although its more technical name is the Proterozoic Eon.
True multicellular life first appears in the fossil record at the very end of the Proterozoic, in the form of the bizarre and famous Ediacaran biota that would become the precursors of the Cambrian explosion. But for most of the Proterozoic, the most common fossils are stromatolites: puffy accretions of sedimentary rock laid down by vast colonies of bacteria.
The Earth in this eon was a different place. Most notably, from chemical and geological evidence, we know that its atmosphere had no oxygen. The only life was colonies of purple bacteria, making a living using the chain of chemical reactions called photosystem I, which converts light, carbon dioxide and hydrogen sulfide into sugar and releases sulfur as a byproduct. But the Proterozoic was when this began to change: this was the time when evolution invented photosystem II, the more advanced version of photosynthesis that uses water and carbon dioxide to make sugar, liberating oxygen as a byproduct. This is the very same set of reactions that sustains all green plants, and ultimately all animal life, today, two and a half billion years later.
At first, oxygen was an annoyance to Proterozoic life, but it soon became a menace. Unlike today, there were no oxygen-breathing animals to expire carbon dioxide and close the cycle, and so oxygen quickly built up in the atmosphere as photosynthetic bacteria spread and thrived. To us, it’s the breath of life, but to these bacteria, it was a deadly toxin.
At the same time, another process was taking place. Weathering of the Earth’s primordial rocks had been releasing iron, most of which washed down to the sea and ended up as iron ions dissolved in the oceans. Until then, that iron had had nothing to react with, but when it encountered oxygen, the two chemically combined into iron oxides like magnetite and hematite. These compounds are insoluble, and when they formed, they precipitated out and sank to the ocean bottom, gradually building up those dark silver layers.
With iron reactions steadily removing oxygen from the atmosphere, anaerobic bacteria thrived for a time. But eventually, there was no more free iron. Once that point was reached, oxygen started to build up in the atmosphere. Heedless, the bacteria kept churning it out – until a toxic tipping point was reached, and the Earth’s atmosphere was changed to such an extent that it became poisonous to Earth’s life. The consequence was mass death among the planet’s abundant bacterial colonies – an oxygen holocaust that knocked life back down to nearly nothing. Only a few anaerobes survived, in isolated nooks and crannies where the deadly gas did not reach.
After this catastrophe, the planet would have seen several million years of relative quiet. In this life-poor era, layers of silica minerals were deposited on the ocean floor. But in the meanwhile, erosion continued to free up iron atoms, which slowly scrubbed the atmosphere and oceans of oxygen. Eventually, the world was cleansed, and life bounced back, spreading from its refuges to once again cover the planet. Of course, this exuberance contained the seeds of its own downfall – bacteria still spewed out the waste oxygen that they could not abide – and the cycle repeated, not just once but many times. Each time, a layer of iron oxides was deposited, followed by a layer of iron-poor silicates in the aftermath. And that leads me back to the Natural History Museum, on that cold winter day where I stood and brushed a hand across a banded iron formation.
Looking at this stone, you get some idea of the dizzying vistas of geological time, as well as the turmoil that life has endured to reach the present day. Each of those colorful red and silver layers represents what was, in its own era, a disaster beyond imagining, one that reset life to its starting point. Each of those layers, as well, is a silent testament to life’s tenacity in the face of overwhelming odds. Of course, the cycles of growth and destruction did not last forever. Eventually, evolution found a way, as evolution nearly always does, and oxygen was tamed to become a power source in an entirely new metabolic cycle. The oxygen-breathers arose, the remaining anaerobes retreated to the deep crevices of rocks and the sea, and life found a new equilibrium, with the balance of the atmosphere permanently changed. All the oxygen we breathe today is biologically produced, a tangible proof of life’s power to reshape its own world.
As well, these banded iron formations may be a metaphor for our own foolhardiness. In our time, we too are changing the composition of the planet’s atmosphere, this time through the release of greenhouse gases. In the process, we are becoming the first species since the ancient photosynthetic bacteria to have such a global effect. The danger we face may not be as severe – but it is severe enough. Those bands of iron are not only a record: they are a warning of what happens when life reshapes its own environment without thought for the consequences. |
The and operator combines two conditions into one, which will be true if both sides are true, and false otherwise. You can create these conditions with the relational operators =, ≠, >, ≥, <, and ≤, with functions such as isPrime(), pxlTest(), and ptTest(), or with any other expression that returns 'true' or 'false'. Other operators for dealing with conditions are or, xor, and not.
:2+2=4 and 1=0
           false
:2+2=4 and 1+1=2
           true
The operator can also be applied to integers, treating them as 32-bit signed integers (larger integers will be truncated to fit) expressed in binary. The bits will be matched up, and "and" will be applied to the bits individually — a bit in the result will be 1 if the two corresponding bits of the original integers were 1, and 0 otherwise.
:(0b11111100 and 0b00111111)▶Bin
           0b111100
:1000 and 512
           512
In complicated logical expressions (both with conditions and with integers), "and" has greater priority than the others ("or" and "xor"). For instance, X or Y and Z will be interpreted as X or (Y and Z).
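For example, a quick illustration of this precedence (not from the original page; results may display slightly differently by model):

:1 or 0 and 0
           1

The result is 1 because the expression groups as 1 or (0 and 0); if the operators had equal priority and were evaluated left to right, (1 or 0) and 0 would give 0 instead.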
Error 60, "Argument must be a Boolean expression or integer", happens when the data type of an operand is incorrect (or the two operand types are mismatched). |
Childhood Sleep Problems
Childhood insomnia is estimated to occur in 20 to 30% of infants, toddlers, and preschoolers. If untreated, approximately 80% will continue to experience problems with falling or staying asleep when followed up three years later. It not only results in significant problems for the child, such as irritability, difficulty getting up for school and learning problems, but has also been associated with parental consequences such as sleep deprivation, maternal depression, and marital problems.
Insomnia in children usually presents either with a refusal to go to bed known as Limit Setting Insomnia or frequent awakenings requiring the parent’s presence to return to sleep, referred to as Sleep Onset Association Insomnia.
There are several strategies available to deal with this problem. The first is to develop a set sleep/wake schedule for your child and stick to it. A calming bed time routine that includes bathing, dressing for sleep, and a bedtime story can be helpful. Eliminating television and vigorous physical activities near bedtime is also important. Putting your child to bed when drowsy and not asleep is important for your child to develop self-soothing skills that will help them to fall asleep.
If this fails, there are several other behavioral techniques. One, called unmodified extinction or "crying it out," entails putting your child to bed and not responding to their cries or pleading. Another, called "camping out," is much the same except that you stay in the room. Most parents find this very hard to do, but a recent study published in the journal Pediatrics that followed children for up to five years found no adverse physical or emotional consequences when these techniques were used. A gentler technique, called graduated extinction, involves responding at progressively longer time intervals each night. An example might entail responding to the child’s crying at five-minute intervals the first night and increasing this by five to ten minutes on successive nights. Another technique involves delaying bedtime in order to increase the likelihood of the child’s falling asleep on their own. Once accomplished, bedtime is slowly advanced back to a more appropriate time.
The idea behind all of these techniques is to avoid rewarding negative sleep behaviors while reinforcing the positive behavior of allowing children to put themselves to sleep. Early intervention can result in a better night’s sleep for both you and your child. |
For thousands of years, people have wanted to move on the water. They have used boats and ships to fish, to travel, to explore, to trade or to fight. Throughout the time that people have been building boats and ships, they have made changes to them, to make travelling on the water easier, faster and safer.
There were two different ways of building a ship. The shell method was the older way. The outer shell of planking is made first, and then strengthened with a frame fitted inside it. This means that the builders work from the outside in.
The frame-first method involves working from the inside out. A wooden frame is built first, and then planks are nailed to it. Boats and ships built like this are stronger, and better able to stand up to a long sea voyage.
No-one knows exactly when the first boat was invented. Long ago, people probably discovered that they could keep themselves afloat by clinging onto fallen logs or bundles of reeds. Gradually, they learnt how to hollow out logs to make dug-outs and to lash logs or reeds together to make rafts. Dug-outs and rafts meant that people could cross water without getting wet. They could also carry things and animals.
The first ships
Egyptians were among the earliest ship builders. The oldest pictures of boats that have ever been found are Egyptian, on vases and in graves. These pictures, at least 6000 years old, show long, narrow boats. The boats were paddled along. They were mostly made of papyrus reeds. The Egyptians used their ships to trade with other countries around the Mediterranean sea.
Between 1200 and 900 BC, the Greeks and the Phoenicians began to build up their sea trade. They used galleys, both as merchant ships for trading, and as warships. The Phoenicians made many long sea journeys, but stayed quite close to the coast. One of the places they sailed to was Cornwall, looking for tin. Their fighting galleys were powered by rowers, sitting in one, two or three lines. Galleys continued to be used as late as the 18th century. The main weapon of the galley was a ram, a pointed piece of wood fixed to the front, or bow, of the ship. The ram was driven at speed into the side of the enemy ship. The ships also carried archers and men with spears. Sometimes the galleys were fitted with a mast and one square sail, but these were taken down during battles.
The Viking longship
Vikings thought ships were very special so they tried to make them look beautiful, by carving decorations on them. Using longships, the Vikings set out from Scandinavian countries like Norway and Denmark every summer and raided other countries. Britain was one of the places that they raided. They also used their ships for trading. Viking ships had one square sail made of wool, and a row of oars on each side. There was a steering oar at the back on the right-hand side. The ship was built by the shell method, and the planks overlapped, which is called clinker building. Gaps between the planks were stuffed with animal hair to keep the water out.
Vikings did not have compasses. They worked out their directions by studying the stars and the sun. They also remembered the landmarks, birds and sea creatures they saw on their voyages. Life on board was hard, there was no cabin for sheltering in bad weather, so Viking ships could not be used in winter. Even so, Vikings made extremely long journeys in their ships. Some even sailed as far as America. Vikings sometimes used their ships as grave ships. The body of someone important would be placed inside and then the whole ship would be buried.
Medieval sailing ships
In medieval times, ships in the northern part of Europe began to change. Ships began to be built with straight sternposts instead of curved ends. Sailors found it was easier to steer ships if the steering oar was fixed onto the sternpost. This stern rudder made even the heaviest boat easier to steer. Ships were built using the frame-first method so they were stronger. Fighting platforms called castles were built high up at the front and the back of the ship for archers and stone-slingers. Ships needed to be strong and roomy enough to carry large cargoes.
To make the ships sail faster, more masts and sails were fitted. In the 14th century a larger trading ship was developed called the carrack. This was carvel built (the planks did not overlap) and had three masts. There were square sails on two masts and a triangular sail on the mast at the back. Carracks that were used as warships were armed with great guns. In the 16th century, holes called gunports were cut in the sides of the ship for the cannon to fire through. By the time that carracks were being used, sailors had the compass and other instruments like astrolabes to measure the height of the sun or the North star. By using these, sailors could work out their latitude, or north-south position, so finding their way became much easier.
Life on board ship was still very hard. It was crowded, damp and dirty. Sailors often suffered from diseases, like scurvy, which was caused by not eating enough fresh fruit and vegetables. The main daily foods were salt meat and ship's biscuit.
Ships of Nelson's time
Some things about the ships of Nelson's time had stayed the same for hundreds of years. The ships were still made of oak and were very strong. About 2000 trees were needed to build one warship. The planks of the ship were fixed edge-to-edge with wooden pegs called treenails. From 1783, navy ships were given a thin covering of copper to stop sea worms from eating holes in the wood. This meant ships could stay at sea for longer. There was lots of hard work to do on ships so they needed a large crew. The places where the sailors slept were damp and overcrowded, so there was still a lot of disease.
Ships built out of wood can not be built much longer than about 80 metres. The timber frames also take up quite a lot of space. In the 19th century, ship builders began using iron instead of wood. Iron ships could be much larger, with lots more space for carrying cargo. They did not need so much work to keep them in good condition. In the 1880s steel began to be used instead of iron. Ships also began to be fitted with steam engines. Steam engines were first used in paddle steamers. The engine turned two paddle wheels. Paddle steamers were not suited to the open sea because in heavy seas the waves lifted one wheel right out of the water while the other one went right under, and this strained the engines.
From the 1840s, screw propellers replaced paddle wheels in steamships. Propellers work much more efficiently and are still used on most ships today. Steam ships still had some other problems. A great deal of coal was needed to travel even fairly short distances. On a voyage to a distant part of the world, there might not be anywhere to collect more coal. For this reason, ships continued to be fitted with sails even though they carried engines. Today, diesel or steam-turbine engines use fuel much more efficiently. |
Kirkwood gaps, interruptions that appear in the distribution of asteroids where the orbital period of any small body present would be a simple fraction of that of Jupiter. Several zones of low density in the minor-planet population were noticed about 1860 by Daniel Kirkwood, an American mathematician and astronomer, who explained the gaps as resulting from perturbations by Jupiter. An object that revolved in one of the gaps would be disturbed regularly by the planet’s gravitational pull and eventually would be moved to another orbit.
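As a quick worked example (my own numbers, not from the article), Kepler's third law locates such a gap. An asteroid in the 3:1 resonance has one third of Jupiter's orbital period, and Jupiter's semi-major axis is about 5.20 AU, so

\[
a = a_{J}\left(\frac{T}{T_{J}}\right)^{2/3} = 5.20\ \text{AU}\times\left(\tfrac{1}{3}\right)^{2/3}\approx 2.5\ \text{AU},
\]

which is where one of the most prominent Kirkwood gaps is indeed observed.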
|
Joshua’s long day
Did it really happen—and how?
The key question in any discussion about the meaning of difficult Bible passages is: What did the author intend to convey? Joshua records in great detail the occupation of Canaan by Israel and the allotment of the land among the tribes, around 1400 BC, so the author is obviously writing a historical account of what happened. The occasion of the long day was during a battle between the combined armies of the five Amorite kings and the army of Israel, early in the campaign.1 With the help of God, the Israelites were winning the battle and needed more time on this day to complete the victory.
Joshua 10:11–13 reads: ‘And it came to pass, as they fled from before Israel, and were in the going down to Beth-horon, that the Lord cast down great stones from heaven upon them unto Azekah, and they died … Then spake Joshua to the Lord in the day when the Lord delivered up the Amorites before the children of Israel, and He said in the sight of Israel, Sun, stand thou still upon Gibeon; and thou, Moon, in the valley of Ajalon. And the sun stood still, and the moon stayed, until the people had avenged themselves upon their enemies. Is not this written in the book of Jasher?2 So the sun stood still in the midst of heaven, and hasted not to go down about a whole day.’
It appears to have been midday or after (Hebrew: sun in the midst of the sky).3 And the author is telling us that the sun did not proceed to set for a period of a completed day, which many commentators take to be approximately a 24-hour period, rather than just a daylight period.
Many cultures have legends that seem to be based on this event. For example, there is a Greek myth of Apollo’s son, Phaethon, who disrupted the sun’s course for a day. And since Joshua 10 is historical, cultures on the opposite side of the world should have legends of a long night. In fact, the New Zealand Maori people have a myth about how their hero Maui slowed the sun before it rose, while the Mexican Annals of Cuauhtitlan (the history of the empire of Culhuacan and Mexico) records a night that continued for an extended time.4
It should also be noted that the Amorites were sun and moon worshippers. For these ‘deities’ to have been forced to obey the God of Israel must have been a devastating experience for the Amorites, and this might well have been the reason why God performed this particular miracle at that time, i.e. near the beginning of the occupation of the land of Canaan by the Israelites.5
Geocentrism and the language of appearance
Joshua’s command to the sun to stand still does not support geocentrism, i.e. the idea that the sun moves around the Earth. The Bible uses the language of appearance and observation.6
Today people do exactly the same thing. For example, scientists who prepare weather reports for TV announce the times of ‘sunrise and sunset’. In fact, the mention of the moon also standing still seems to confirm both the divine authorship of the account and the fact that it is the Earth which moves. Since all Joshua needed was extra sunlight, and most ancients believed the sun moves, not the Earth, a human author of a fictitious account would only have needed to refer to the sun stopping. (See also Bible skeptics answered Q&A (supposed errors and contradictions refuted).)
NASA and the missing day
A rumour surfaces from time to time that scientists ‘using computers’ at NASA to check planetary positions discovered that a day was ‘missing’ from history.
This story is an ‘urban myth’. The alleged research seems never to have been published—no wonder, because to make such a calculation one would need to know the planets’ positions before any missing day, as well as after. This is impossible.
Similar considerations apply to the book Joshua’s Long Day, written in 1890 by Charles Totten, purporting to prove that a day went missing, without reproducing his calculations. All such calculations can show only where the sun and moon should have been at any time in the past (based on where they are now, assuming the rates of movements have not changed), not where they actually were. (See also Astronomy and Astrophysics Q&A.)
What actually happened?
Suggested answers may be divided into three main categories:
Some form of refraction (bending) of the light from the sun and the moon. According to this view, God miraculously caused the sunlight and moonlight to continue in Canaan for ‘about a whole day’. Supporters of this view point out:7
- It was light that Joshua needed, not a slowing of the Earth.
- God promised Noah that ‘while the Earth remaineth … day and night shall not cease’ (Genesis 8:22). This could be seen to mean that God promised that the Earth would not stop rotating on its axis until the end of human history. (However, it would not seem to preclude a temporary slowing down of the Earth’s rotation.)
- Some form of light refraction appears to have been what happened in the reign of Hezekiah when the shadow on Ahaz’s sundial retreated ten degrees (2 Kings 20:11)—an event that appears to have occurred only in the land of Palestine (2 Chronicles 32:31).
A wobble in the direction of the Earth’s axis of rotation.
This involves a precession8 of the axis of the Earth, wobbling slowly so as to trace an ‘s’-shaped or circular path in the sky. Such an event could have made it appear to an observer that the sun and the moon were standing still, but need not have involved any actual slowing of the rotation of the Earth.
One suggestion was that this was caused by the orbits of the Earth and Mars being close together on this date.1 One problem is that these authors postulate an ancient orbit for Mars different from its present one, and there is no proof that this ever happened. Other suggested causes have included impacts of asteroids on the Earth.
A slowing of the Earth’s rotation.
According to this view, God caused the rotation of the Earth to slow down so that it made one full revolution in about 48 hours rather than 24. Simultaneously God stopped the cataclysmic effects that would have naturally occurred, such as monstrous tidal waves. Some people have objected to this on the erroneous assumption that, if the Earth slowed down, people and loose objects would fly off into space. In fact, the apparent centrifugal force (tending to throw things off the Earth) is only about one-three-hundredth of the gravitational force. If the Earth stopped rotating (whether suddenly or not), this outward ‘force’ would cease and we would actually be held more firmly by gravity.
The Earth at the equator moves at about 1,600 km/h (1,000 mph). The velocity needed to escape from the Earth’s gravity is about 40,000 km/h (25,000 mph). If the Earth was spinning as fast as this, we would all fly off into space anyway, regardless of whether the Earth stopped suddenly or not!
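A rough check of the ‘one-three-hundredth’ figure quoted above (my arithmetic, not the article’s): the centripetal acceleration at the equator is

\[
a_c=\frac{v^{2}}{R}=\frac{(465\ \text{m/s})^{2}}{6.4\times 10^{6}\ \text{m}}\approx 0.034\ \text{m/s}^{2},
\qquad
\frac{a_c}{g}\approx\frac{0.034}{9.8}\approx\frac{1}{290}.
\]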
What about the momentum of people and objects travelling at 1,600 km/h on the Earth? Answer: A car travelling at 100 km/h can be stopped comfortably for the occupants in a few seconds; something travelling at 1,600 km/h could stop comfortably for passengers in a few minutes.
This scenario need only imply that God slowed the rotation of the atmosphere, oceans, and Earth simultaneously to prevent any tidal-wave effect, and any heat build-up inside the Earth due to friction from still-rotating liquid layers of the Earth’s core. And after the long day was over, the whole process would need to start up again.
It is certainly not impossible for God to have done all this, despite representing a major interruption of the natural order of things with respect to the Earth set up by God in Genesis 1.
Christianity is a religion of the miraculous—from God’s creative acts of Genesis 1 to the wonderful events of Revelation 22. The Bible does not tell us how any of these happen, other than that God wills them to happen and they do. He may use (intensify) some existing natural law (as in Noah’s Flood), or all participation of nature may be excluded (as in the Resurrection). Often the miraculous effect lies in the providential timing of natural events (as in God’s partition of the Red Sea by a strong wind that blew all night—Exodus 14:21).
Miracles rest on testimony, not on scientific analyses. While it is interesting to speculate on how God might have performed any particular Biblical miracle, including Joshua’s long day, ultimately those claiming to be disciples of Jesus Christ (who authenticated the divine record of the Bible) must accept them, by faith.9 There is not one logical, scientific reason to claim that, given a God powerful enough to create a universe in six days, Joshua’s long day ‘could not have happened’. Those who balk at this account are almost invariably those who have already rejected 6-day creation through compromise with evolution’s fictitious long ages, and have thus rejected the authority of the Bible.
References and notes
- Donald Patten, Ronald Hatch, Lorenc Steinhauer, The Long Day of Joshua and Six Other Catastrophies, Baker Book House, Michigan, 1973 give the date as ‘circa October 25, 1404 BC’. Other commentators give a slightly different date, e.g. C.A.L. Totten, July 22, 1443 bc. Return to text.
- The book of Jasher (KJV) or Jashar (some modern translations) was an ancient collection of poems written to honour Israel’s leaders (cf. 2 Samuel 1:17–27). Joshua’s words to the sun (which appear to be quoted from this book) are in poetic form and are printed in this way in most modern Bible versions. This use of poetry here does not invalidate a literal interpretation of the event, any more than those Psalms which describe events in David’s life invalidate the literalness of the events they poetically portray. In any case, verse 13b reverts to Hebrew prose to describe what happened in answer to Joshua’s prayer. Return to text.
- It would have made no sense early on the morning of a battle, with a whole day ahead, for Joshua to have prayed for a lengthening of the daylight. Return to text.
- Immanuel Velikovsky, Worlds In Collision, Dell, New York, 1950, p. 61 note 3. See also other historical references to long days or nights in this book. Return to text.
- Instead of, for example, using hornets (Exodus 23:28), or confusing the enemy (2 Kings 7:6). Return to text.
- In this connection, Henry Morris writes, ‘All motion is relative motion, and the sun is no more “fixed” in space than the Earth is. … The scientifically correct way to specify motions, therefore, is to select an arbitrary point of assumed zero velocities and then to measure all velocities relative to that point. The proper point to use is the one which is most convenient to the observer for the purposes of his particular calculations. In the case of movements of the heavenly bodies, normally the most suitable point is the Earth ‘s surface at the latitude and longitude of the observer, and this therefore is the most “scientific” point to use. David [Psalm 19:6] and Joshua are more scientific than their critics in adopting such a convention for their narratives.’—Henry Morris with Henry Morris III, Many Infallible Proofs: Practical and Useful Evidences for the Christian Faith, Master Books, Arizona, 1996, p. 253. Return to text.
- For example, John C. Whitcomb, ‘Joshua’s Long Day’, Brethren Missionary Herald, July 27, 1963, pp. 364–65. Return to text.
- Precession: the motion of the axis of rotation of a spinning body about a line that makes an angle with it, so as to describe a cone. Return to text.
- ‘To say that “miracles cannot happen” is not a scientific assertion. It is a faith statement on exactly the same level as when a Christian says that “Jesus performed miracles” ’—Hugh Silvester, ‘Miracles’, Eerdmans Handbook to Christian Belief, Michigan, 1982, p. 90. Return to text. |
The most accurate view of the color of a comet came from the European Space Agency's Rosetta spacecraft at the end of 2014. It was the first spacecraft to go into the orbit of a comet, and was able to send back to Earth the first true color images of the surface of a comet. The comet was called 67P/C-G.
Scientists had expected the comet to be gray, but for that gray to have a light blue hue because of the suspected presence of ice on the surface. According to the pictures sent back by Rosetta, this is not the case. Instead it is almost entirely dark gray.
The two tails that a comet has are the most visible parts from Earth. They are mostly made up of dust and gas. The ion tail is formed when ions are blown away from the comet surface by the solar wind. This tail has a blue appearance. The other tail is the dust tail, which is made up of dust particles. It looks white and is much brighter. |
Definition - What does Polygon Mesh mean?
A polygon mesh is a collection of vertices (connecting points), edges and faces that together define a polygonal model for 3-D modeling and computer animation. Its geometric makeup can be stored in order to facilitate various kinds of simulation and three-dimensional rendering.
Techopedia explains Polygon Mesh
In a polygon mesh, each surface joins together through its boundaries and common edges. One example is a three-dimensional sphere made up of identical faces, like a soccer ball in which the faces themselves can be flat or curved. More complex polygon meshes render people, animals, and other complex shapes. Computer engineers use specific digital tools to build and store these models for animation.
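As a rough sketch of the idea (not part of the original definition), a mesh is often stored as a shared list of vertex coordinates plus faces that refer to those vertices by index; the edges are implied by the faces. The hypothetical Perl snippet below describes a unit square split into two triangles and counts its unique edges:

    use strict;
    use warnings;

    # A tiny indexed polygon mesh: shared vertices, faces point into the vertex list.
    my %mesh = (
        vertices => [ [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0] ],   # x, y, z coordinates
        faces    => [ [0, 1, 2], [0, 2, 3] ],                         # two triangles sharing an edge
    );

    # Collect the unique edges implied by the faces.
    my %edges;
    for my $face ( @{ $mesh{faces} } ) {
        for my $i ( 0 .. $#$face ) {
            my ($lo, $hi) = sort { $a <=> $b } $face->[$i], $face->[ ($i + 1) % @$face ];
            $edges{"$lo-$hi"} = 1;
        }
    }
    print scalar(keys %edges), " unique edges\n";   # prints "5 unique edges"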
|
Breakfast by Cameron and Ashley
We keep hearing how important it is to eat a good breakfast. We know some of our friends skip breakfast once in a while. We wondered if eating breakfast really makes a difference in how well you do in school. Our question: Does skipping breakfast affect how well we do on tests?
What did we do?
We got all of our friends in class to help out. For three days in a row, we asked everybody to make sure they had breakfast. Then everybody took two tests: a short memory test and a number recognition test. We averaged everybody's score. Then, on the fourth day, half the class agreed to skip breakfast while the other half agreed to eat breakfast as usual. Everybody took the memory and number tests again. We looked for dramatic changes in the scores of the "no breakfast" group.
What did we find out?
Our results were a little surprising! On the first three days almost everybody kept the same score on the two tests. But on day four, everybody's number recognition test score jumped way up! And the people who skipped breakfast increased their score even more than those who ate breakfast! So does that mean skipping breakfast is good for you? We didn't think so. We were taking the tests in the morning, before anybody could get really hungry. We want to try the test again, this time taking the tests later in the day before lunch.
- There are a lot of factors that could affect your score on a memory test, like how much sleep you get, if you exercise and whether or not you eat healthy foods. Design an experiment to test just one of these factors.
- You can test your own short-term memory. Have a friend write a list of fifteen ordinary words on note cards. Have the friend show you each card, one at a time, for just one second for each word. After you've seen all the words, try to write down as many as you can. Make up a list for your friend to try.
- When are you most alert in school? First thing in the morning or after you've had a chance to wake up a bit? Pay some attention to how you are feeling each hour of the school day. If you know there is a time of day when you have a hard time focusing, try to make positive changes in your eating or sleep habits.
- Use this human body investigation as a science fair project idea for your elementary or middle school science fair! Then tell us about it! |
First general-purpose computers (from Wikipedia)
The Atanasoff–Berry Computer (ABC) was among the first fully electronic digital binary computing devices. Conceived in 1937 by Iowa State College physics professor John Atanasoff, and built with the assistance of graduate student Clifford Berry, the machine was not programmable in the modern sense, being designed only to solve systems of linear equations. The computer did employ parallel computation. A 1973 court ruling in a patent dispute found that the patent for the 1946 ENIAC computer derived from the Atanasoff–Berry Computer.
The inventor of the program-controlled computer was Konrad Zuse, who built the first working program-controlled computer in 1941 and, later in 1955, the first computer based on magnetic storage.
George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the “Model K” (for “kitchen table”, on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.
A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as “the first digital electronic computer” is difficult (Shannon 1940). Notable achievements include:
* Konrad Zuse’s electromechanical “Z machines”. The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, therefore being the world’s first operational computer.
* The non-programmable Atanasoff–Berry Computer (commenced in 1937, completed in 1941) which used vacuum tube based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (being approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.
* The secret British Colossus computers (1943), which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.
* The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
* The U.S. Army’s Ballistic Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse’s Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming. |
The New Colossus: Our Beacon to Immigrants
Lesson 12 of 12
Objective: SWBAT...compare and contrast the Statue of Liberty and the Colossus of Rhodes to determine the symbolization of Motherhood, Light and Immigrants and the overall theme of "The New Colossus" poem.
Creating the Purpose
My purpose for this lesson is to have students connect what they have learned in history about the colonists' fight for freedom and liberty - to what they have learned in reading about immigrants and refugees - to what they learned about choosing words carefully and freedom of speech in this poetry unit to build understanding of the symbolism of our Statue of Liberty. The big idea is the evaluation of the wording and relevance of the poem "The New Colossus" and how this connects to the beliefs our government was established on.
There are a lot of components to this lesson and I want to keep students interactively involved in the learning, so I chose to begin this unit with an inquiry based question which asks them to compare the two statues.
I start the lesson with a sheet of paper and pencils on each of the table groups (I have 6) with a picture of two statues on each - The Colossus of Rhodes and The Statue of Liberty. I share that they are going to work as a group to identify things that are similar and different about the statues and then respond to the question at the bottom of their sheet:
What mood does each of the statues create for viewers (how does looking at them make you feel)?
I ask this because I want to open the discussion for why a man and woman were chosen to represent a people and how our viewpoints of each gender differs.
I give them 5 minutes to write their answers and then ask them to share out as teams some of their responses. As we add to the discussion students begin to make connections to more and more details.
I now share the Statue Facts and educate students on who, what and why these statues came to be built. I now share the objective that today we are going to read and evaluate the meaning of the poem, "The New Colossus" to determine the symbolism of the statue and the author's lesson.
Guiding the Learning
In this section I want to review and activate their prior knowledge of what we have learned about American history. I ask, "Why did we declare independence from the British in the Revolutionary War?" I take responses and then prompt them by sharing that our forefathers believed everyone should be welcome in the United States, so that people from all countries could come to live and work here. I remind them of when they created the welcoming brochures in our immigrant unit.
I introduce Emma Lazarus, a young female poet of that time, and share that she was so inspired by the statue that she wrote a poem titled "The New Colossus". I ask why she might have given her poem this title.
I then pass out the The New Colossus student poem (I have underlined key words and phrases to help them focus on the meaning of smaller sections to build to the larger meaning). I share that I will be reading the poem to them and that as I do I want them to listen, write questions on parts that are confusing and underline any vocabulary that they find difficult to understand. (I'm looking for words like Yearn, Wretched, Teeming, Exiled, Pomp and Tempest).
As I read I do a The New Colossus Think Aloud identifying a few parts to help them build understanding so that they can respond to the next part of the lesson independently.
After the first read I ask students to share their unfamiliar words - I write these on the board and give students context clue sentences or use the words in a familiar way to help them build understanding and then write the definitions we come up with on the board.
I post the large copy of the poem and give students Post-it Notes. I instruct students that they are going to help us determine the meaning of the stanzas of the poem using these notes. As they read with their partners they will write down information that describes the meaning of the phrases that are underlined and add it to our chart. Together we will make a goal of defining all of the sections to learn the author's lesson in the poem. They are instructed that their goal is to add two or more kid-friendly definitions to the chart.
I model the first Post-it by sharing that in the first stanza the author states "brazen giant of Greek fame, With conquering limbs astride from land to land" - share that I know the picture of The Colossus of Rhodes is a Greek god and he is standing with his feet on two different land masses - so I think she is talking about this statue here. I add my Post-it with The Colossus of Rhodes to the chart.
After modeling how to use post-it notes to document their thinking, students begin to share their thinking about the author's message in the poem by adding Post-it notes to the chart.
I have students continue to work in small groups to navigate through the poem. I circulate through the classroom to help support students who need help figuring out the meaning of the poem.
As students finish I ask the Big Question - What was Emma Lazarus, the author's, message for readers in her poem? What was the most important thing she wanted them to know about America?
Closing the Loop
Student groups share their author’s messages and the sections of the poem that gave them the most evidence to support their positions.
I ask them what might be some reasons to choose a woman to represent our country and not a man. We share their thinking. I want students to make the connection that our doors were open in peacefulness to friends and family, with mighty power against anyone who threatened our people - just like a mother would protect her children. I end by sharing that the poem deals with the topic of immigration and how all friendly immigrants are welcomed by the Statue of Liberty, which is a symbol for America.
I have students respond to the The New Colossus exit ticket question sheet defining the meaning of the imagery of motherhood, light and immigrants in the poem. I will use this to assess their independent levels of understanding which will determine which lesson I teach next in the unit. |
Bit operations for beginners
by Cine (Friar)
on Aug 08, 2003 at 13:51 UTC
This is not really a Perl specific subject, but nonetheless a problem for people who are beginning to program.
There are 4 bit operations: and, or, xor and inverse, which in Perl are accessible as the character operators &, |, ^ and ~. These are not to be confused with the actual and-, or- and not-operators, which we will return to later. Because these operations are bit operations, they work on the individual bits in a number, but as we shall see, in Perl they also magically work on all the bits in strings.
Except for inverse, these operations require two arguments, also called operands: a left operand and a right operand. For none of these operations does it matter which operand is on the left and which is on the right; they would yield the same result reversed. inverse only requires a single operand.
The & operator

This is the operator you use when you need to know if two things are both true. In bits, true and false are commonly represented as 1 and 0, or on and off.
The truth schema for and is:

    0 and 0 = 0
    0 and 1 = 0
    1 and 0 = 0
    1 and 1 = 1
Thus if we have the numbers 234 and 15, in bit representation 11101010 and 00001111, and do 234 & 15, we take each bit in the left operand, and it with the corresponding bit in the right operand, and check the schema every time. Thus the result is 00001010, or 10 in decimal.
Using a char as an example instead, we have an "A" and an "a", respectively 01000001 and 01100001 in ASCII. If we and them together, we get 01000001, or "A".
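A quick way to try these examples yourself (a minimal sketch, using the same values as above):

    printf "%08b\n", 234 & 15;   # prints 00001010 (10 in decimal)
    print  "A" & "a", "\n";      # prints "A": the strings are and-ed byte by byte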
The | operator

This operator checks if either the left operand or the right operand is true, or if they are both true. Thus it is false only when both operands are false.
The truth schema for or is:

    0 or 0 = 0
    0 or 1 = 1
    1 or 0 = 1
    1 or 1 = 1
Again, our examples with 234 and 15, or 11101010 and 00001111: the result is now 11101111, or 239. And in the example with "A" and "a", or 01000001 and 01100001 in ASCII, the result is 01100001, or "a".
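Again, a small sketch to check these in Perl:

    printf "%08b\n", 234 | 15;   # prints 11101111 (239 in decimal)
    print  "A" | "a", "\n";      # prints "a": all bits set in either character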
The ~ operator

The inverse operator only takes a single argument and simply reverses all bits. Thus all true values become false and all false values become true.
This operator is actually also called the not operator, but most people associate that name with the other operator of the same name, the ! operator. The distinction is that ~ is the "bitwise" not operator. The ! operator is used to reverse a true value to false (and vice versa) without actually looking at the specific bits. This is also how Perl does it; anything else would surprise people. Imagine what would happen if !"1" worked bitwise: "1" is 00110001, so !"1" would be 11001110, which in iso8859-1 is "LATIN CAPITAL LETTER I WITH CIRCUMFLEX". But in Perl !"1" is 0, which makes it false. Just to confuse the matter some more, ~ is also called the (1's) complement operator, or the bitwise negation operator.

Unlike what you may imagine, ~234 is not 00010101, but 11111111111111111111111100010101... This is simply because Perl's representation of the number is much longer than a simple 8 bits. Also, this is on my machine and my compiled version of Perl; on other machines or builds it may be longer or shorter.

Also notice that this does not work on a list. With @a=(1,0), print ~@a would not print "01", but will evaluate @a in scalar context (its length) and invert that number.
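A small sketch showing both behaviours described above:

    printf "%b\n",   ~234;         # many leading 1s, ending in ...00010101
    printf "%08b\n", ~234 & 0xFF;  # 00010101: masking back to 8 bits gives 21
    my @a = (1, 0);
    print ~@a, "\n";               # inverts the length of @a (2), not its elements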
The ^ operator

The XOR operator is not like the other operators known from common language, mostly because it is a composition of several operations. XOR means eXclusive OR. Expressed in other bit operations it is (a&~b) | (~a&b). In more human terms, it is this or that, but not both.
You may wonder what this operator is used for, but if you think a little about it, what it actually tells you is "are the operands identical?". In many cases we could have used == or eq to tell us that, but other times we actually need to know where and what the difference is. For example, we have two very long strings "aaaa" and "aaba", and need to find where the difference is. The result of "aaaa"^"aaba" is "\x00\x00\x03\x00"; we can then run tr/\x00/1/c to get the number of bytes that are different, and we can use index to search for 1 in the resulting string to find the actual place where the original strings differ.
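A sketch of that technique, assuming two equal-length strings:

    my ($x, $y) = ("aaaa", "aaba");
    my $diff  = $x ^ $y;                               # "\x00\x00\x03\x00"
    my $count = ( my $flags = $diff ) =~ tr/\x00/1/c;  # differing bytes become "1"; returns how many
    my $where = index( $flags, '1' );                  # offset of the first difference (2 here)
    print "$count byte(s) differ, first at offset $where\n";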
The boolean operators

Boolean operators are much like their bitwise counterparts. We already looked at the ! operator, which turns a true value into a false one and false values into true ones. Thus it is actually as though we converted the entire expression into a single bit and reversed that bit.
We also have && and ||. && works by evaluating the left operand, and if and only if that returns a true value, the right operand is evaluated and that result is returned. || works similarly, but evaluates the right operand only if the left operand was false.
Thus, unlike their bitwise cousins, the ordering of the operands is important here; just think of $a && $b/$a if $a is 0 (the division is never attempted).
At the start I wrote that you should not confuse the and operator with the & operator; this is because and is the same as &&. The same goes for or, which is the same as ||, and for not, which is the same as !. The difference is that the named versions have lower precedence, which means that the implicit parentheses are put differently when Perl is looking at your code.
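A common illustration of that precedence difference (a small sketch, not from the original node):

    my $x = 0 || 1;    # $x is 1: parsed as  my $x = (0 || 1)
    my $y = 0 or 1;    # $y is 0: parsed as  (my $y = 0) or 1, since "or" binds looser than "="
    print "$x $y\n";   # prints "1 0"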
Conclusion

I now hope you have a better understanding of how bitwise operations work, and are able to understand why people sometimes do things like "onestring" ^ "anotherstring".
Here is a small tip, if you ever find yourself with a long complex |
Learn all about how low-temperature physics affects our daily lives – from refrigeration to cell phones. You’ll also find great demonstrations, and thought-provoking questions for further discussion. Here are the different topics:
Topic 1: Measuring the Cold - Thermometers
Topic 2: Understanding Heat and Energy
Topic 3: States of Matter
Topic 4: Refrigeration
Topic 5: Cryogenics
Topic 6: The Quest for Absolute Zero
Topic 7: How Animals Survive the Cold
Topic 8: Superconductivity
Topic 9: Astronomy
Topic 10: Spaceflight
Topic 11: Agriculture
Topic 12: Cold Medicine
The Fountain Effect
John F. Allen and Harry Jones discovered the "fountain effect", in which superfluid Helium flows up a tube and shoots into the air upon being exposed to a small heat source (the heat source in the original experiment was a flashlight that they were using to look at the apparatus). Allen also used a movie camera to film his experiments, such as the superfluid helium fountain (seen here). His was an early use of moving images to document experiments and inform students and the general public.
|
Spherical trigonometry is the branch of spherical geometry that deals with the relationships between trigonometric functions of the sides and angles of the spherical polygons (especially spherical triangles) defined by a number of intersecting great circles on the sphere. Spherical trigonometry is of great importance for calculations in astronomy, geodesy and navigation.
The origins of spherical trigonometry in Greek mathematics and the major developments in Islamic mathematics are discussed fully in History of trigonometry and Mathematics in medieval Islam. The subject came to fruition in Early Modern times with important developments by John Napier, Delambre and others, and attained an essentially complete form by the end of the nineteenth century with the publication of Todhunter's text book Spherical trigonometry for the use of colleges and Schools. This book is now readily available on the web. The only significant developments since then have been the application of vector methods for the derivation of the theorems and the use of computers to carry through lengthy calculations.
A spherical polygon on the surface of the sphere is defined by a number of great circle arcs that are the intersection of the surface with planes through the centre of the sphere. Such polygons may have any number of sides. Two planes define a lune, also called a "digon" or bi-angle, the two-sided analogue of the triangle: a familiar example is the curved surface of a segment of an orange. Three planes define a spherical triangle, the principal subject of this article. Four planes define a spherical quadrilateral: such a figure, and higher sided polygons, can always be treated as a number of spherical triangles.
From this point the article will be restricted to spherical triangles, denoted simply as triangles.
- Both vertices and angles at the vertices are denoted by the same upper case letters A, B and C.
- The angles A, B, C of the triangle are equal to the angles between the planes that intersect the surface of the sphere or, equivalently, the angles between the tangent vectors of the great circle arcs where they meet at the vertices. Angles are in radians. The angles of proper spherical triangles are (by convention) less than π so that π < A + B + C < 3π. (Todhunter, Art.22,32).
- The sides are denoted by lower-case letters a, b, c. On the unit sphere their lengths are numerically equal to the radian measure of the angles that the great circle arcs subtend at the centre. The sides of proper spherical triangles are (by convention) less than π so that 0 < a + b + c < 3π. (Todhunter, Art.22,32).
- The radius of the sphere is taken as unity. For specific practical problems on a sphere of radius R the measured lengths of the sides must be divided by R before using the identities given below. Likewise, after a calculation on the unit sphere the sides a, b, c must be multiplied by R.
The polar triangle associated with a triangle ABC is defined as follows. Consider the great circle that contains the side BC. This great circle is defined by the intersection of a diametral plane with the surface. Draw the normal to that plane at the centre: it intersects the surface at two points and the point that is on the same side of the plane as A is (conventionally) termed the pole of A and it is denoted by A'. The points B' and C' are defined similarly.
The triangle A'B'C' is the polar triangle corresponding to triangle ABC. A very important theorem (Todhunter, Art.27) proves that the angles and sides of the polar triangle are given by
A' = π − a,  B' = π − b,  C' = π − c,  a' = π − A,  b' = π − B,  c' = π − C.
Therefore, if any identity is proved for the triangle ABC then we can immediately derive a second identity by applying the first identity to the polar triangle by making the above substitutions. This is how the supplemental cosine equations are derived from the cosine equations. Similarly, the identities for a quadrantal triangle can be derived from those for a right-angled triangle. The polar triangle of a polar triangle is the original triangle.
Cosine rules and sine rules
The cosine rule is the fundamental identity of spherical trigonometry: all other identities, including the sine rule, may be derived from the cosine rule:
cos a = cos b cos c + sin b sin c cos A,
cos b = cos c cos a + sin c sin a cos B,
cos c = cos a cos b + sin a sin b cos C.
These identities reduce to the cosine rule of plane trigonometry in the limit of sides much smaller than the radius of the sphere. (On the unit sphere a, b, c << 1: set sin a ≈ a and cos a ≈ 1 − a²/2, etc.; see Spherical law of cosines.)
The sine rule is
sin A / sin a = sin B / sin b = sin C / sin c.
These identities reduce to the sine rule of plane trigonometry in the limit of small sides.
Derivation of the cosine rule
The spherical cosine formulae were originally proved by elementary geometry and the planar cosine rule (Todhunter, Art.37). He also gives a derivation using simple coordinate geometry and the planar cosine rule (Art.60). The approach outlined here uses simpler vector methods. (These methods are also discussed at Spherical law of cosines.)
Consider three unit vectors OA, OB and OC drawn from the origin to the vertices of the triangle (on the unit sphere). The arc BC subtends an angle of magnitude a at the centre and therefore OB·OC=cos a. Introduce a Cartesian basis with OA along the z-axis and OB in the xz-plane making an angle c with the z-axis. The vector OC projects to ON in the xy-plane and the angle between ON and the x-axis is A. Therefore the three vectors have components:
- OA = (0, 0, 1),   OB = (sin c, 0, cos c),   OC = (sin b cos A, sin b sin A, cos b).
The scalar product OB·OC in terms of the components is
- OB·OC = sin b sin c cos A + cos b cos c.
Equating the two expressions for the scalar product gives
cos a = cos b cos c + sin b sin c cos A.
This equation can be re-arranged to give explicit expressions for the angle in terms of the sides:
cos A = (cos a − cos b cos c) / (sin b sin c).
The other cosine rules are obtained by cyclic permutations.
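As a quick numerical illustration (a minimal Python sketch of our own, not part of Todhunter's treatment; the function name is chosen only for this example), the rearranged rule recovers an angle from the three sides:

```python
import math

def angle_from_sides(a, b, c):
    """Angle A (radians) opposite side a, from cos a = cos b cos c + sin b sin c cos A."""
    cos_A = (math.cos(a) - math.cos(b) * math.cos(c)) / (math.sin(b) * math.sin(c))
    return math.acos(cos_A)

# Example: the octant triangle (three quarter-circle sides) has three right angles.
print(math.degrees(angle_from_sides(math.pi/2, math.pi/2, math.pi/2)))  # 90.0
```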
Derivation of the sine rule
This derivation is given in Todhunter (Art.40). From the identity sin²A = 1 − cos²A and the explicit expression for cos A given immediately above,
sin²A / sin²a = (1 − cos²a − cos²b − cos²c + 2 cos a cos b cos c) / (sin²a sin²b sin²c).
Since the right-hand side is invariant under a cyclic permutation of a, b and c, the spherical sine rule follows immediately.
Supplemental cosine rules
Applying the cosine rules to the polar triangle gives (Todhunter, Art.47), i.e. replacing A by π–a, a by π–A etc.,
cos A = −cos B cos C + sin B sin C cos a,
cos B = −cos C cos A + sin C sin A cos b,
cos C = −cos A cos B + sin A sin B cos c.
Cotangent four-part formulae
The six parts of a triangle may be written in cyclic order as (aCbAcB). The cotangent, or four-part, formulae relate two sides and two angles forming four consecutive parts around the triangle, for example (aCbA) or (BaCb). In such a set there are inner and outer parts: for example in the set (BaCb) the inner angle is C, the inner side is a, the outer angle is B, the outer side is b. The cotangent rule may be written as (Todhunter, Art.48)
cos(inner side) cos(inner angle) = cot(outer side) sin(inner side) − cot(outer angle) sin(inner angle),
and the six possible equations are (with the relevant set shown for each):
(CT1) cos b cos C = cot a sin b − cot A sin C   (set aCbA)
(CT2) cos b cos A = cot c sin b − cot C sin A   (set CbAc)
(CT3) cos c cos A = cot b sin c − cot B sin A   (set bAcB)
(CT4) cos c cos B = cot a sin c − cot A sin B   (set AcBa)
(CT5) cos a cos B = cot c sin a − cot C sin B   (set cBaC)
(CT6) cos a cos C = cot b sin a − cot B sin C   (set BaCb)
To prove the first formula start from the first cosine rule and on the right-hand side substitute for cos c from the third cosine rule (using the sine rule to replace sin c by sin a sin C / sin A):
cos a sin²b = sin a sin b (cos b cos C + cot A sin C).
The result follows on dividing by sin a sin b. Similar techniques with the other two cosine rules give CT3 and CT5. The other three equations follow by applying rules 1, 3 and 5 to the polar triangle.
Half-angle and half-side formulae
With 2s = (a + b + c) and 2S = (A + B + C),
sin(½A) = sqrt[ sin(s − b) sin(s − c) / (sin b sin c) ]
cos(½A) = sqrt[ sin s sin(s − a) / (sin b sin c) ]
tan(½A) = sqrt[ sin(s − b) sin(s − c) / (sin s sin(s − a)) ]
sin(½a) = sqrt[ −cos S cos(S − A) / (sin B sin C) ]
cos(½a) = sqrt[ cos(S − B) cos(S − C) / (sin B sin C) ]
tan(½a) = sqrt[ −cos S cos(S − A) / (cos(S − B) cos(S − C)) ]
Another twelve identities follow by cyclic permutation.
The proof (Todhunter, Art.49) of the first formula starts from the identity 2sin2(A/2) = 1–cosA, using the cosine rule to express A in terms of the sides and replacing the sum of two cosines by a product. (See sum-to-product identities.) The second formula starts from the identity 2cos2(A/2) = 1+cosA, the third is a quotient and the remainder follow by applying the results to the polar triangle.
Delambre (or Gauss) analogies
sin(½(A + B)) / cos(½C) = cos(½(a − b)) / cos(½c)
sin(½(A − B)) / cos(½C) = sin(½(a − b)) / sin(½c)
cos(½(A + B)) / sin(½C) = cos(½(a + b)) / cos(½c)
cos(½(A − B)) / sin(½C) = sin(½(a + b)) / sin(½c)
Another eight identities follow by cyclic permutation.
Napier's analogies
tan(½(A + B)) = [ cos(½(a − b)) / cos(½(a + b)) ] cot(½C)
tan(½(A − B)) = [ sin(½(a − b)) / sin(½(a + b)) ] cot(½C)
tan(½(a + b)) = [ cos(½(A − B)) / cos(½(A + B)) ] tan(½c)
tan(½(a − b)) = [ sin(½(A − B)) / sin(½(A + B)) ] tan(½c)
Another eight identities follow by cyclic permutation. These identities follow by division of the Delambre formulae. (Todhunter, Art.52)
Napier's rules for right spherical triangles
When one of the angles, say C, of a spherical triangle is equal to π/2 the various identities given above are considerably simplified. There are ten identities relating three elements chosen from the set a, b, c, A, B.
Napier provided an elegant mnemonic aid for the ten independent equations: the mnemonic is called Napier's circle or Napier's pentagon (when the circle in the above figure, right, is replaced by a pentagon).
First write in a circle the six parts of the triangle (three vertex angles, three arc angles for the sides): for the triangle shown above left this gives aCbAcB. Next replace the parts that are not adjacent to C (that is A, c, B) by their complements and then delete the angle C from the list. The remaining parts are as shown in the above figure (right). For any choice of three contiguous parts, one (the middle part) will be adjacent to two parts and opposite the other two parts. The ten Napier's Rules are given by
- sine of the middle part = the product of the tangents of the adjacent parts
- sine of the middle part = the product of the cosines of the opposite parts
For an example, starting with the sector containing a we have:
sin a = tan(π/2 − B) tan b = cos(π/2 − c) cos(π/2 − A), i.e. sin a = cot B tan b = sin c sin A.
The full set of rules for the right spherical triangle is (Todhunter, Art.62):
(R1) cos c = cos a cos b          (R6) tan b = cos A tan c
(R2) sin a = sin A sin c          (R7) tan a = cos B tan c
(R3) sin b = sin B sin c          (R8) cos A = sin B cos a
(R4) tan a = tan A sin b          (R9) cos B = sin A cos b
(R5) tan b = tan B sin a          (R10) cos c = cot A cot B
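A short Python sketch (ours; the function name is illustrative and the legs are assumed to be less than π/2) showing how three of these rules determine the remaining parts of a right triangle from its two legs:

```python
import math

def solve_right_triangle(a, b):
    """Right angle at C; legs a, b (radians, each less than pi/2) -> (c, A, B).
    Uses cos c = cos a cos b, tan a = tan A sin b, tan b = tan B sin a."""
    c = math.acos(math.cos(a) * math.cos(b))
    A = math.atan2(math.tan(a), math.sin(b))
    B = math.atan2(math.tan(b), math.sin(a))
    return c, A, B

print([round(math.degrees(x), 2) for x in solve_right_triangle(math.pi/3, math.pi/4)])
# approximately [69.3, 67.79, 49.11]
```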
Napier's rules for quadrantal triangles
When one of the sides, say c, of a spherical triangle is equal to π/2 the corresponding equations are obtained by applying the above rules to the polar triangle A'B'C' with sides a',b',c' such that A' = π–a, a' = π–A etc. This gives the following equations:
Five-part rules
Substituting the second cosine rule into the first and simplifying gives:
cos a sin²c = sin c (sin a cos c cos B + sin b cos A).
Cancelling the factor of sin c gives
cos a sin c = sin a cos c cos B + sin b cos A.
Similar substitutions in the other cosine and supplementary cosine formulae give a large variety of 5-part rules. They are rarely used.
Solution of triangles
Main article: Solution of triangles § Solving spherical triangles
The solution of triangles is the principal purpose of spherical trigonometry: given three, four or five elements of the triangle determine the remainder. The case of five given elements is trivial, requiring only a single application of the sine rule. For four given elements there is one non-trivial case, which is discussed below. For three given elements there are six cases: three sides, two sides and an included or opposite angle, two angles and an included or opposite side, or three angles. (The last case has no analogue in planar trigonometry.) No single method solves all cases. The figure below shows the seven non-trivial cases: in each case the given sides are marked with a cross-bar and the given angles with an arc. (The given elements are also listed below the triangle). There is a full discussion of the solution of oblique triangles in Todhunter (ChapterVI).
- Case 1: three sides given. The cosine rule gives A, B, and C.
- Case 2: two sides and an included angle given. The cosine rule gives a and then we are back to Case 1.
- Case 3: two sides and an opposite angle given. The sine rule gives C and then we have Case 7. There are either one or two solutions.
- Case 4: two angles and an included side given. The four-part cotangent formulae for sets (cBaC) and (BaCb) give c and b, then A follows from the sine rule.
- Case 5: two angles and an opposite side given. The sine rule gives b and then we have Case 7 (rotated). There are either one or two solutions.
- Case 6: three angles given. The supplemental cosine rule gives a, b, and c.
- Case 7: two angles and sides as shown. Use Napier's analogies for a and A.
The solution methods listed here are not the only possible choices: many others are possible. In general it is better to choose methods that avoid taking an inverse sine because of the possible ambiguity between an angle and its supplement. The use of half-angle formulae is often advisable because half-angles will be less than π/2 and therefore free from ambiguity. There is a full discussion in Todhunter. The solution of triangles article presents variants on these methods with a slightly different notation.
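For instance, Case 2 can be coded directly from the cosine rule and its rearranged form; the following minimal Python sketch (ours, not Todhunter's) is one way to do it:

```python
import math

def solve_sas(b, c, A):
    """Case 2: sides b, c and the included angle A (all in radians) -> (a, B, C)."""
    a = math.acos(math.cos(b) * math.cos(c) + math.sin(b) * math.sin(c) * math.cos(A))
    B = math.acos((math.cos(b) - math.cos(c) * math.cos(a)) / (math.sin(c) * math.sin(a)))
    C = math.acos((math.cos(c) - math.cos(a) * math.cos(b)) / (math.sin(a) * math.sin(b)))
    return a, B, C

# sides of 60 and 45 degrees with a 90 degree included angle
print([round(math.degrees(x), 2)
       for x in solve_sas(math.radians(60), math.radians(45), math.radians(90))])
# approximately [69.3, 67.79, 49.11]
```

Note that acos avoids the sine-rule ambiguity mentioned above, in keeping with the advice to prefer methods that do not require an inverse sine.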
Solution by right-angled triangles
Another approach is to split the triangle into two right-angled triangles. For example take the Case 3 example where b, c, B are given. Construct the great circle from A that is normal to the side BC at the point D. Use Napier's rules to solve the triangle ABD: use c and B to find the sides AD, BD and the angle BAD. Then use Napier's rules to solve the triangle ACD: that is use AD and b to find the side DC and the angles C and DAC. The angle A and side a follow by addition.
Not all of the rules obtained are numerically robust in extreme examples, for example when an angle approaches zero or π. Problems and solutions may have to be examined carefully, particularly when writing code to solve an arbitrary triangle.
Area and spherical excess
Consider an n-sided spherical polygon as well as spherical triangles. Let Σ denote the sum of the interior angles of such a polygon on the unit sphere. Then the area of the polygon is given by (Todhunter, Art.99)
Area of polygon (on the unit sphere) = Σ − (n − 2)π.
For the case of a triangle this reduces to
Area of triangle (on the unit sphere) = E = A + B + C − π,
where E is the amount by which the sum of the angles exceeds π radians. The quantity E is called the spherical excess. This theorem is named after its author, Albert Girard. An earlier proof was derived, but not published, by the English mathematician Thomas Harriot. On a sphere of radius R both of the above area expressions are multiplied by R². The definition of the excess is independent of the radius of the sphere.
The converse result may be written as
E = (Area of triangle) / R².
Since the area of a triangle cannot be negative the spherical excess is always positive. Note that it is not necessarily small, since the sum of the angles may approach 3π. For example, an octant of a sphere is a spherical triangle with three right angles, so that the excess is π/2. In practical applications it is often small: for example the triangles of geodetic survey typically have a spherical excess much less than 1' of arc (Rapp; Clarke; see also Legendre's theorem on spherical triangles). On the Earth the excess of an equilateral triangle with sides 21.3 km (and area 393 km²) is approximately 1 arc second.
There are many formulae for the excess. For example, Todhunter (Art.101-103) gives ten examples, including that of L'Huilier:
tan(¼E) = sqrt[ tan(½s) tan(½(s − a)) tan(½(s − b)) tan(½(s − c)) ],
where s = (a + b + c)/2. Because some triangles are badly characterized by their edges (for example, very long thin triangles), it is often better to use the formula for the excess in terms of two edges and their included angle:
tan(½E) = tan(½a) tan(½b) sin C / (1 + tan(½a) tan(½b) cos C).
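These excess formulas are easy to check against the direct definition E = A + B + C − π; the following is a minimal Python sketch (ours, with function names chosen only for illustration):

```python
import math

def excess_lhuilier(a, b, c):
    """Spherical excess from the three sides (L'Huilier's formula), in radians."""
    s = 0.5 * (a + b + c)
    t = (math.tan(0.5 * s) * math.tan(0.5 * (s - a))
         * math.tan(0.5 * (s - b)) * math.tan(0.5 * (s - c)))
    return 4.0 * math.atan(math.sqrt(t))

def excess_two_sides_angle(a, b, C):
    """Spherical excess from two sides and their included angle, in radians."""
    num = math.tan(0.5 * a) * math.tan(0.5 * b) * math.sin(C)
    den = 1.0 + math.tan(0.5 * a) * math.tan(0.5 * b) * math.cos(C)
    return 2.0 * math.atan2(num, den)

a, b, C = math.radians(60), math.radians(45), math.radians(90)
c = math.acos(math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b) * math.cos(C))
print(excess_lhuilier(a, b, c), excess_two_sides_angle(a, b, C))  # both about 0.4695
```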
An example for a spherical quadrangle bounded by a segment of a great circle, two meridians, and the equator is
tan(½E4) = [ sin(½(φ1 + φ2)) / cos(½(φ1 − φ2)) ] tan(½(λ2 − λ1)),
where φ and λ denote latitude and longitude. This result is obtained from one of Napier's analogies. In the limit where φ1, φ2 and λ2 − λ1 are all small, this reduces to the familiar trapezoidal area, E4 ≈ ½(φ1 + φ2)(λ2 − λ1).
Angle deficit is defined similarly for hyperbolic geometry.
- Air navigation
- Spherical geometry
- Spherical distance
- Schwarz triangle
- Spherical polyhedron
- Celestial navigation
- Lenart sphere
- Todhunter, I. (1886). Spherical Trigonometry (5th ed.). MacMillan. This fifth edition is the cleanest available free version on the web; the Gutenberg sources also include a LaTeX version of the text. The latest (posthumous) and most complete version was published in 1911, co-authored with J. G. Leathem. The third edition has been issued by Amazon in paperback and Kindle versions. The text has been typeset, but the formulae and diagrams have been pasted in as somewhat unsatisfactory images.
- Delambre, J. B. J. (1807). Connaissance des Tems 1809. p. 445.
- Napier, J (1614). Mirifici Logarithmorum Canonis Constructio. p. 50. An 1889 translation, The Construction of the Wonderful Canon of Logarithms, is available as an e-book from Abe Books.
- Another proof of Girard's theorem may be found at .
- Rapp, Richard H. (1991). Geometric Geodesy Part I. p. 89 (pdf page 99).
- Clarke, Alexander Ross (1880). Geodesy. Clarendon Press. (Chapters 2 and 9). Recently republished at Forgotten Books
- Wolfram's mathworld: Spherical Trigonometry a more thorough list of identities, with some derivation
- Wolfram's mathworld: Spherical Triangle nice applet
- TriSph A free software to solve the spherical triangles, configurable to different practical applications and configured for gnomonic
- A Visual Proof of Girard's Theorem by Okay Arik, the Wolfram Demonstrations Project.
- "The Book of Instruction on Deviant Planes and Simple Planes" is a manuscript in Arabic that dates back to 1740 and talks about spherical trigonometry, with diagrams.
- Some Algorithms for Polygons on a Sphere Robert G. Chamberlain, William H. Duquette, Jet Propulsion Laboratory. The paper develops and explains many useful formulae, perhaps with a focus on navigation and cartography.
- Online computation of spherical triangles |
What Are Canine Whipworms?
Whipworms are parasites which live in a dog's large intestine. They can cause diarrhea, weight loss and anemia.
The whipworm gets its name from the adult's characteristic whip-like shape, with the front end narrower than the back end.
Unlike many other internal parasites, whipworms (genus Trichuris) display a relatively high degree of host specificity. The canine whipworm (Trichuris vulpis) very rarely occurs in other species, so it poses little risk to human health. Cats are not affected by the canine whipworm, either.
"A fertile female worm can lay about 2000 eggs a day, although the shedding of eggs
is not consistent on a day to
The canine whipworm lives in the large intestine of dogs, causing severe irritation to the lining of that organ. Adult worms are three inches or less in length, with males significantly smaller than the egg-bearing females. Adult whipworms embed their thin head end deeply into the intestinal lining and feed on tissue secretions, not blood. However, they are greedy and wasteful feeders, and there is often a leakage of blood as a result of their feeding and burrowing activities.
Infection of a dog occurs only when it accidentally ingests whipworm eggs containing infective larvae or juvenile worms. The eggs are thick walled and very resistant to drying and heat, so they can remain viable in the environment for years, even through the heat of summer.
When swallowed, the eggs hatch in the small intestine and the larvae migrate into the gut wall for two to 10 days. They then migrate back into the gut and pass down with the passage of food into the large intestine and mature into adult worms.
It takes at least 10 weeks from the time of infection until there will be any worm eggs, a length of time known as the pre-patent period of whipworm. Due to this long pre-patent period, whipworm infection is often not recognized in young pups, so treatments for pups are not directed against whipworms, only roundworms and hookworms.
The life span of the canine whipworm ranges from three to 18 months. Because whipworms live for an extended period of time, an untreated dog can be constantly re-infected by new eggs in feces. A fertile female worm can lay about 2000 eggs a day, although the shedding of eggs is not consistent on a day to day basis.
Veterinarians will attempt to diagnose the disease by looking for worm eggs in stool samples. It is not uncommon to find no eggs in the feces, despite a dog being infected with whipworm. For this reason, many veterinarians will simply worm the dog with a product effective against whipworm if they suspect this may be a problem.
Compared to many other parasites, whipworms cause less harm to their hosts. Adult worms burrowing into the large intestinal wall cause inflammation and bleeding from the gut. Occasionally, heavy infestations occur and affected dogs may show signs of recurrent bouts of abdominal pain and diarrhea with stools that have blood or mucus on them. Young dogs, or dogs with chronic infection, can suffer weight loss, dehydration and anemia due to the blood loss.
Treating whipworm infection can be a difficult task since re-infection from contaminated environments is such a problem. Eggs, because of their thick shell, are very resistant. They can remain in the environment for up to five years and are resistant to most cleaning methods and to freezing, but they can be dried out with strong drying agents such as agricultural lime. However, it is extremely difficult to eliminate eggs in soil, so the preferred alternative is to replace the contaminated soil with new soil, gravel and pavement.
There are several ingredients on the market which are effective in killing whipworms such as febantel-pyrantel combination, milbemycin oxime and others. Dogs should be treated for whipworm at least every three months throughout life. Many wormers will also treat other kinds of worms. Your veterinarian can advise you on the most appropriate product for your pet.
Humidity is the amount of water vapor in the air. Water vapor is the gaseous state of water and is invisible. Humidity indicates the likelihood of precipitation, dew, or fog. Higher humidity reduces the effectiveness of sweating in cooling the body by reducing the rate of evaporation of moisture from the skin.
DHT11 Humidity Temperature Sensor
The DHT11 humidity and temperature sensor measures relative humidity (RH) and temperature. Relative humidity is the ratio of water vapor in air vs. the saturation point of water vapor in air. The saturation point of water vapor in air changes with temperature. Cold air can hold less water vapor before it is saturated, and hot air can hold more water vapor before it is saturated. The formula for relative humidity is as follows:
Relative Humidity = (density of water vapor / density of water vapor at saturation) x 100%
Basically, relative humidity is the amount of water in the air compared to the amount of water that air can hold before condensation occurs. It’s expressed as a percentage. For example, at 100% RH condensation (or rain) occurs, and at 0% RH, the air is completely dry.
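As a simple illustration of this ratio (a small Python sketch, not part of the original tutorial; the saturation figure is an approximate, illustrative value):

```python
def relative_humidity(vapor_density, saturation_density):
    """Relative humidity (%) = water-vapor density / saturation vapor density x 100."""
    return 100.0 * vapor_density / saturation_density

# Air at 25 C holds very roughly 23 g/m^3 of water vapor at saturation (approximate
# figure, for illustration only); holding 11.5 g/m^3 it would be at 50% RH.
print(relative_humidity(11.5, 23.0))  # 50.0
```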
- Arduino Uno
- Connecting Wires
DHT11 Sensor Pinout
DHT11 Arduino Circuit Diagram
Arduino Code for DHT11 Humidity Temperature Measurement
This code requires a DHT11 library. Download from: https://github.com/adafruit/DHT-sensor-library
//Humidity and Temperature Measurement (DHTlib-style calls, matching the original snippet;
//note the linked Adafruit library uses a different API)
#include <dht.h>
#define DHT11_PIN 2
dht DHT;
void setup() { Serial.begin(9600); }
void loop() {
  int chk = DHT.read11(DHT11_PIN);   // read the sensor; returns a status code
  Serial.print("Temperature = "); Serial.println(DHT.temperature);
  Serial.print("Humidity = "); Serial.println(DHT.humidity);
  delay(2000);                       // the DHT11 needs about 2 s between reads
}
DHT11 Humidity Temperature Measurement Result
Open the serial monitor and observe the readings. Blowing air on the sensor will change the humidity readings.
For measurement of other weather-related parameters, see the links below.
This August 24 and 25 mark the 200th anniversary of the British burning of Washington DC during the War of 1812.
Prior to the burning, 4,500 British soldiers went up against 5,000 Americans (mostly militiamen) in a battle at Bladensburg, Maryland, just 4 miles northeast of Washington. Though the Americans had the advantage of numbers and artillery, the untried and poorly led militiamen didn’t stand much of a chance against the better trained and disciplined British soldiers. Three hours of battle had the Americans fleeing as fast as they could, while the British commanding officers, General Ross and Admiral Cockburn, led a portion of their men into Washington, which was now undefended.
Leaving private homes and property alone for the most part, the British began burning government buildings, starting with Capitol building, which at the time also housed the Supreme Court and Library of Congress. They then proceeded to the White House, which had been abandoned by President Madison and his wife shortly before. (Mrs. Madison is famous for staying at the White House as long as possible and directing the rescue of a portrait of George Washington, among other valuables.)
The following day, Cockburn and Ross organized the burning of other buildings, like the State and War departments and the Treasury, which had started to burn the night before but had been doused by a rainstorm. Cockburn ordered the destruction of the printing presses of a newspaper that had been particularly critical of him, but the U.S. Patent Office was saved from destruction by the pleas of its superintendent. The British went to the Navy Yard, but it had already been burned the previous day by the Americans to keep it from falling into British hands. A contingent of soldiers also went to Greenleaf Point Federal Arsenal to destroy the gunpowder and cannons there but ended up causing an explosion that killed or maimed many of them.
Later that day, a huge storm blew in that wreaked havoc on the city, downing trees and ripping roofs off buildings. After the storm had died down somewhat, the British officers ordered a retreat of their men during the night, before the American forces could regroup.
Discover more about the burning of Washington DC, and other events and people of the war, in Fold3's War of 1812 collection.
Advice and Information
Bullying can take many different forms, such as name calling and cyber bullying through technology such as texting, online instant messaging, and sites like Facebook and Twitter. Bullying can be physical as well as verbal.
Bullying can make you feel lonely, depressed, anxious and isolated. It’s important to tell someone you trust.
I’m being bullied, what can I do?
It is important that you tell someone you trust. Just talking about it can help you to recognise a solution to the problem. A teacher, parent, GP, friends, family, or your young carer support worker can talk through some of the issues you might be facing because of bullying.
If bullying takes the form of cyber bullying it is important that you keep copies of the text messages and emails etc. that are being sent to you, that way when you do tell someone you can trust such as a teacher, they can see what the bullies are doing and can act on it accordingly.
Websites such as Childline can offer good support and guidance on bullying.
LA: Read and comprehend literature, including stories, dramas, and poetry, in the grade level text complexity band proficiently, with scaffolding as needed at the high end of the range.
LA: Write narratives to develop real or imagined experiences or events using effective technique, descriptive details, and clear event sequences.
LA: With guidance and support from peers and adults, develop and strengthen writing as needed by planning, revising, and editing.
LA: Report on a topic or text, tell a story, or recount an experience in an organized manner, using appropriate facts and relevant, descriptive details to support main ideas or themes; speak clearly at an understandable pace.
VA: Select and use the qualities of structures and functions of art to improve communication of ideas. |
The story is all too familiar: A middle school student is tripped while walking down the aisle of a school bus, and the entire busload of children erupts in laughter. In the ensuing days and weeks, the same young student is shoved in the stairwell, harassed in the lunch room, and ridiculed online. Classmates are vicious and unyielding in their attacks, often recruiting others to join in the torment, and targeting anyone who attempts to thwart their assault. The victim becomes withdrawn, anxious, and depressed, often avoiding social interaction. Grades often plummet. In some cases the victim may lash out, seeking retribution against the bullies or even bullying other innocent students in an attempt to regain some social control and status. In the worst cases, the victim may become so despondent that the aggression turns inward and results in suicide.
It’s not terribly surprising that scientific research confirms the widespread costs experienced by people who are bullied. The initial experience of social exclusion appears to be much like that of physical pain, as the same brain region (an area known as the anterior cingulate cortex) is activated when people experience social ostracism and physical pain; moreover, the level of brain activation during an ostracizing experience correlates with self-reported feelings of distress. Other studies have demonstrated that participants who are excluded from a social conversation or an interactive game for only a few minutes experience heightened sadness and anger, as well as reductions in self-esteem and feelings of control. The distress associated with exclusion is significant even when people know that the ostracizing players are members of a despised outgroup (e.g., KKK), or simply computer simulations. Indeed, just watching someone else get ignored is enough to put us in a bad mood.
What is surprising, however, is the recent finding that social exclusion hurts the perpetrators as well as the victims. Bullying, it seems, cuts both ways. The consequences of isolating or ostracizing another person may include heightened feelings of anger, shame, and guilt, as well as a sense of social disconnection. In a series of studies by Nicole Legate and colleagues, for example, individuals who complied with instructions to shun others suffered socially and emotionally as a result of the experience. For these studies, participants engaged in Cyberball, a computerized ball-tossing game in which participants believed they were engaging with two other players. In reality, however, the two other players were computers programmed to respond in specific ways. Despite the fact that participants never met the other "players" in person, they nonetheless suffered when they intentionally excluded one of those players from the game.
In one study, some participants were directly instructed to exclude another player from the game (ostracizing condition), while other participants were given no such instructions (neutral condition). The experimenters hypothesized that participants who engaged in willful social exclusion would experience diminished autonomy, a reduced connection with other players, and an increase in negative affect. To be sure that these consequences resulted from engaging in ostracism, per se, and not merely from following directive play instructions, a third group of participants was told to throw the ball equally to all players (inclusion condition). Data showed that players in both the ostracizing and the inclusion conditions reported reduced autonomy relative to players in the neutral condition, but the ostracizing condition experienced the most severe autonomy reduction. In addition, only players in the ostracizing condition reported an increase in negative affect and a degraded sense of connection with others.
In a second study, participants were again tested in ostracizing and neutral conditions, as well as in an ostracized condition (here, participants were intentionally left out of the game). Those who actively shunned others felt more guilt, shame, and anger than those in the neutral or even the ostracized condition. They were also the only group to report diminished autonomy.
Of course these studies examined immediate or short-term effects of social exclusion, and may not be reflective of long-term consequences for victims or bullies. Unfortunately, emerging longitudinal work out of Duke University suggests that the repercussions of bullying may persist long after the event, even into adulthood. In a sample of over 1200 children and adolescents, for example, roughly 25 percent reported being bullied at least once before the age of 16, and those who were bullied had higher levels of anxiety disorders as young adults. A number of other studies indicate that children who are ostracized may in turn become aggressive toward others, and in the Duke study 20 percent of those who were bullied were also aggressors. Those who were both victims and bullies experienced the most significant long-term consequences, with the highest rates of depressive disorders, generalized anxiety, panic disorder, and suicidality.
It seems then that bullies may have as much to lose as their victims. The good news is that in recent years a number of anti-bullying campaigns have emerged, including school programs, support websites, and social media efforts (e.g., Not in Our School, Love is Louder, It's My Life, Stop Bullying). In 2011, Lee Hirsch produced the documentary Bully, which highlighted five different cases of abusive, destructive bullying and spawned The Bully Project, an initiative lauded by mainstream media and endorsed by numerous celebrities, including Katie Couric, Martha Stewart, Naya Rivera, Cory Monteith and others. Even President Obama has joined the fray, supporting public policy and legislation aimed at extinguishing bullying in schools. If these anti-bullying initiatives and policies prove effective, maybe, just this once, everybody wins.
or Build Your Own Stonehenge
Sunrise and sunset positions move around the horizon each year. They go from a northernmost position on the summer solstice, move south to cross an east-west line on the equinoxes and continue south to a southernmost position at winter solstice. The solstice sunrises can be used to anchor a calendar to the seasons of the year. The angle between the solstice sunrises make with the east-west equinox sunrise line depends on the latitude of the observer. Thus the geometry of an observatory with sightlines that point to these sunrises will be different at different latitudes. The rising points of the full moon make a more complicated pattern than that of the sun.
Sunrise sunset swiftly flow the days
To Do and Notice
Observe sunrises and sunset during the year. Notice that in the winter the sun rises to the south of an east-west line and in the summer the sun rises to the north of an east-west line.
On the equinoxes the sun rises due east and sets due west. Equinoxes are within a day of March 21 and September 21.
The most northerly sunrise occurs on the summer solstice, about June 21.
The most southerly sunrise occurs on the winter solstice, about December 21.
Perhaps some of the streets in your neighborhood are aligned in an east-west direction.
What's Going On?
The axis of rotation of the earth is tipped 23.5 degrees with respect to a perpendicular line to its orbital plane.
Use a glowing 40 watt lightbulb to represent the sun and an earth globe with its axis tipped by 23.5 degrees to model the earth's motion around the sun each year and its daily rotation.
At all latitudes on the equinoxes the sun rises due east and sets due west.
(The earth's axis will be tipped in a plane perpendicular to the sun-earth line.)
At the equator on the winter solstice the sun rises and sets 23.5 degrees south of the east-west line. On the summer solstice it rises and sets 23.5 degrees north of the east-west line.
At other latitudes the angle between east and the position of sunrise is greater than 23.5 degrees. For example at San Francisco with a latitude near 38 °N the sun rises 30 degrees south of east on the winter solstice.
There is an equation to calculate D, the angle from east to the solstice sunrise.
If the angle of inclination of the earth's axis is i = 23.5° and the latitude is L, then
Sin(D) = Sin(i)/Cos(L).
Check this out: at the equator L = 0 and cos(0) = 1, so the angle from the solstice sunrise to east is 23.5 degrees.
At the pole L = 90 and cos(90) = 0, so the equation is undefined, as it should be, because the sun does not rise and set on the summer or winter solstices at either pole.
Click here if you would like to see a paper plate cut-out derivation of this equation.
To Do and Notice
Find the latitude of your school, or of some ancient culture.
For example a school in San Francisco has a latitude of L = 38 degrees.
Find the angle between east and sunrise on the summer and winter solstice.
From the above equation, we find that at L = 38, D = 30 degrees from east, and the angle between summer and winter solstice sunrises is 60 degrees.
Design a structure with angles incorporated into the structure that point to solstice sunrises and sunsets in your chosen location
For example at Latitude 56 degrees the sun rises 45 degrees north of east on summer solstice and 45 degrees south of east on winter solstice, this means that two lines pointing toward the solstices make a 90 degree angle at this latitude. A square structure could have a door in one wall facing the winter solstice sunrise and a window on an adjacent wall facing the summer solstice sunrise.
The teachers in San Francisco designed several structures. One was a hexagon where one face points toward equinox sunrises, one toward summer solstice sunrise, one toward winter solstice sunrise, one toward summer solstice sunset, and the last toward winter solstice sunset. Another teacher made a star of David.
All the major moons in the solar system orbit above the equators of their parent planet except for the earth's moon.
When you look at the night sky, the sun, the planets, and the moon move across the sky in a path known as the ecliptic. The ecliptic is the plane of the earth's orbit around the sun.
The full moon is on the opposite side of the earth from the sun.
Taken together the above two observations mean that in the winter when the sun is low in the sky at noon, the full moon is high in the sky at midnight.
The earth's moon does not orbit exactly in the plane of the ecliptic but is in a plane tilted 5.1 degrees from the ecliptic.
The direction of the tilt of the moon's orbital plane goes through a complete cycle every 18.6 years. This means that the full moon rise position for the full moon nearest the winter solstice can be 5.1 degrees further from east than the summer solstice sunrise position or 5.1 degrees less from east than the summer solstice sunrise position. These maximum positions of the rising moon are called "Lunar Standstills."
Viewed from the equator the maximum angle from east of the lunar standstill is 28.6°.
In San Francisco the solstice sunrises are 30 degrees north and south of east. This means that the full moon nearest the winter solstice will rise about 36 degrees north of east once in every 18.6-year cycle, and about 24 degrees north of east once in every cycle.
Choose a culture or a latitude and design a structure that has walls oriented in directions that point to the positions of the sun or the moon at solstice or equinox.
So What? Stonehenge
For example, at the latitude of Stonehenge, 51°N, the summer solstice sun rises 40 degrees north of east and the winter solstice sun rises 40 degrees south of east. At a major standstill the full moon nearest the winter solstice rises as much as 50 degrees north of east, and the full moon nearest the summer solstice rises as much as 50 degrees south of east. A structure built to point at the winter solstice sunrise (40 degrees south of east) and the northernmost standstill moonrise (50 degrees north of east) would have two directions differing by 90 degrees! It has been suggested that Stonehenge was built at the precise latitude that allows such a rectangular construction.
Earthworks of the Hopewell Culture
The culture known as the mound builders of Ohio built many giant earthworks between 200 BC and 500 AD. Two of these earthworks were octagons connected to a circle. One of these is shown in the 1848 map below. The circle is 3000 feet in diameter, about a kilometer across.
A line from the center of the circle through the center of the octagon of the structure points 52 degrees east of north. The latitude of the site is 40.0 degrees. Can you find any solar or lunar alignments?
Reading from the Ohio Historical Society
Click here to see other archeoastronomy sites with alignments and their latitudes:
To calculate the angle, D, between the east-west line and solstice sunrise or sunset at latitude L use the equation
Sin(D) = Sin(i)/Cos(L)
sin (i) = sin (23.5) = 0.399
use the table below,
or use google's calculator
(for example for L = 45 type "arcsin(sin(23.5 degrees)/cos(45 degrees)) in degrees")
|L (degrees)|Cos(L)|Angle D = arcsin(Sin(i)/Cos(L)) (degrees)|
|5|0.996|23.6|
|10|0.985|23.9|
|15|0.966|24.4|
|20|0.940|25.1|
|25|0.906|26.1|
|30|0.866|27.4|
|35|0.819|29.2|
|40|0.766|31.4|
|45|0.707|34.4|
|50|0.643|38.4|
|55|0.574|44.0|
|60|0.5|52.9|
|65|0.423|70.6|
Lunar Standstill Calculator
To calculate the angle, M, between the east-west line and lunar standstill moonrise or moonset at a latitude L use the equation
Sin(M) = Sin(A)/Cos(L)
A = i + 5.1° and sin (A) = sin (28.6) = 0.477
|L (degrees)|Cos(L)|Angle M = arcsin(Sin(A)/Cos(L)) (degrees)|
|5|0.996|28.7|
|10|0.985|29.1|
|15|0.966|29.7|
|20|0.940|30.6|
|25|0.906|31.9|
|30|0.866|33.6|
|35|0.819|35.8|
|40|0.766|38.7|
|45|0.707|42.6|
|50|0.643|48.1|
|55|0.574|56.6|
|60|0.5|73.2|
|65|0.423| |
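Both calculators can be reproduced in a few lines of code; here is a minimal Python sketch (ours, with an illustrative function name), using i = 23.5° for the sun and A = 28.6° for the major lunar standstill:

```python
import math

def rise_angle_from_east(declination_deg, latitude_deg):
    """Angle D (degrees) between due east and the rising point: sin(D) = sin(dec)/cos(lat).
    Returns None when the ratio exceeds 1 (the body does not rise and set there)."""
    ratio = math.sin(math.radians(declination_deg)) / math.cos(math.radians(latitude_deg))
    return math.degrees(math.asin(ratio)) if abs(ratio) <= 1 else None

latitude = 38  # roughly San Francisco
print(rise_angle_from_east(23.5, latitude))  # solstice sunrise, about 30 degrees
print(rise_angle_from_east(28.6, latitude))  # major lunar standstill, about 37 degrees
```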
You can use the following polar plot to cut out the angles for your chosen latitude.
Website on sunrise azimuth
For Stonehenge, the original book Stonehenge Decoded is still the best reference.
Scientific Explorations with Paul Doherty
14 May 2005 |
Ten years have passed since the United Nations Framework Convention on Climate Change (UNFCCC) entered into force on 21 March 1994. It is thus most appropriate to review what has happened since then in what is an enormously complex field. One thing has become very clear, namely that climate change touches upon virtually every sphere of life, and almost every human activity either contributes to climate change or is affected by its impacts.
Certain impacts of climate change may already be observed, and much more is expected if the rise in greenhouse gas concentrations cannot be slowed down. The growing number of extreme weather events in recent years is one example of the kind of impacts that may be in store.
Worldwide economic losses due to natural disasters increased from about US$40 billion per year in the 1990s to US$60 billion in 2003, according to estimates by the United Nations Environment Programme (UNEP) Finance Initiative. Numerous weather and climate-related disasters occurred in 2003, some unprecedented in intensity. Thousands of people died in Europe and North America as a result of the impacts of heat waves, and significant damage was caused by widespread forest fires. In Korea, a typhoon caused over 100 deaths, 25,000 homeless and an estimated US$4.1 million in property damage. Developing countries were seriously hit. In Pakistan, floods killed 162 people, displaced 900,000 and destroyed nearly 48,000 homes. Drought affected the livelihood of 23 million people in eastern and southern Africa.
In general, it is developing countries that are most vulnerable. They rely heavily on climate-sensitive sectors, such as agriculture and forestry, and their lack of resources, infrastructure and health systems leaves them at greater risk to the adverse impacts of climate change. Particularly at risk are low-lying areas and deltas, large coastal cities, squatter camps located on flood plains and on steep hillsides, settlements in forested areas where seasonal wildfires may increase, as well as those stressed by population growth, poverty and environmental degradation. Helping countries to adapt to climate change has become a key component of overall climate change policy, but much remains to be done to implement it, in such areas as infrastructure development and land management.
One of the main goals of the Convention was to demonstrate that in about 10 years developed countries could return their emissions to 1990 levels. Indeed, industrialised countries, referred to as Annex I Parties, have cut greenhouse gas emissions by almost 7% between 1990 and 2001. But this is primarily due to a 40% decline in emissions in countries whose economies are in transition. Greenhouse gas emissions in the highly industrialised countries (Annex II Parties) increased by about 7.5% during that period (see graph above).
Many countries will have to do a lot more to get their emissions down. Under the Kyoto Protocol, reduction targets will force some Annex I Parties to put in place stringent measures to cut carbon dioxide emissions. While the overall emission reduction of 5% under the Protocol might not appear to be very ambitious, the instrument, even though it is not yet in force, has already set in motion the vital process of decoupling the pace of increase in CO2 emissions from economic growth.
Carbon intensity, which describes the relationship of carbon emissions and world economic output, has decreased continuously since the industrial revolution and this trend accelerated in the 1990s. But there seems to be a level towards which different carbon intensities globally converge. The world has been moving together in the last 30 years with, for instance, China having brought its energy intensity down to US levels, at the same time as the US has approached levels of developing and European countries. The challenge is to move this level of convergence further down, and the efforts that countries make to meet their Kyoto commitments are an important step in that direction.
But we all know that the Kyoto Protocol, important as it is, is only a first step in meeting the long-term objective of the Convention “to achieve […] stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. […]” (Article 2 of the United Nations Framework Convention on Climate Change).
I must admit to being surprised at some experts and leaders – including at the OECD – who argue that we should focus more on adaptation, because the Kyoto Protocol would not solve the climate change problem. Yet, no one has ever claimed that the Kyoto Protocol would achieve that.
It is important, though, that research and development focus on technologies that will bring down the carbon-intensity of the economy and contribute to reducing greenhouse gas concentrations in the atmosphere to levels that can be considered “safe”. These efforts are already being pursued with vigour, alongside the implementation of the Kyoto Protocol provisions.
Renewable energy sources, hydrogen-based fuel and carbon sequestration are today’s catchwords. But we should not forget that major policy decisions will be needed to allow these and possibly other breakthrough technologies to penetrate the market. Today’s experience shows that this remains a challenge for many governments.
The Kyoto Protocol has set in motion new policy instruments that allow combining economic efficiency with environmental effectiveness. Emission trading has now become a reality. 2004 will see the first projects under the Protocol’s Clean Development Mechanism (CDM), which is gaining wider interest in the developing world. This innovative financial mechanism promotes sustainable development in developing countries by channelling private-sector investment into emission reduction projects, while offering industrialised countries credits against their Kyoto Protocol targets. Work to prepare for joint implementation under the Kyoto Protocol is also under way.
The EU is in the process of implementing an agreed emission trading scheme and the difficulties associated with this process show the effort it requires on the part of the affected industries. The scheme is developed so that it can be linked to the Kyoto mechanisms. Similarly, there are many national and corporate CO2 emission trading schemes, including in the US, where the Chicago Climate Exchange provides an interesting tool for the development of a new market.
The first 10 years of the Climate Change Convention have prepared the ground for a major policy breakthrough in the coming years. It has become clear that there is no quick fix. Action will be required on all fronts. Much needs to be done to help the most vulnerable societies to be able to cope with imminent climate change. At the same time, a more intense effort is needed to curb greenhouse gas emissions, in particular from fossil fuels. Emission trading schemes being developed in conjunction with the Kyoto Protocol flexibility mechanisms will help to focus on the most cost-effective measures and provide incentives for much-needed innovations. There are clear indications that the second decade of the Convention will see us make major strides towards meeting this global challenge.
For more information, check the Convention web site at http://unfccc.int:
Data and further information on national policies can be found in the following documents (available on the web site): Report on the national GHG inventory data from Annex I Parties (FCCC/SBSTA/2003/14)
National communications from parties included in Annex I to the Convention – compilation and synthesis of third national communications (FCCC/SBI/2003/7)
Find out more about the UN Environment Programme’s Financial Initiative at http://unepfi.net/fii/
More information on weather and climate-related disasters at the World Meteorological Organisation’s web site at www.wmo.ch
©OECD Observer No 242, March 2004 |
Diabetic Nephropathy
What is diabetic nephropathy?
Nephropathy means kidney disease or damage. Diabetic nephropathy is damage to your kidneys caused by diabetes. In severe cases it can lead to kidney failure. But not everyone with diabetes has kidney damage.
What causes diabetic nephropathy?
The kidneys have many tiny blood vessels that filter waste from your blood. High blood sugar from diabetes can destroy these blood vessels. Over time, the kidney isn't able to do its job as well. Later it may stop working completely. This is called kidney failure.
Certain things make you more likely to get diabetic nephropathy. If you also have high blood pressure or high cholesterol, or if you smoke, your risk is higher. Also, Native Americans, African Americans, and Hispanics (especially Mexican Americans) have a higher risk.
What are the symptoms?
There are no symptoms in the early stages. So it's important to have regular urine tests to find kidney damage early. Sometimes early kidney damage can be reversed.
As your kidneys are less able to do their job, you may notice swelling in your body, most often in your feet and legs.
How is diabetic nephropathy diagnosed?
The problem is diagnosed using simple tests that check for a protein called albumin in the urine. Urine doesn't usually contain protein. But in the early stages of kidney damage—before you have any symptoms—some protein may be found in your urine, because your kidneys aren't able to filter it out the way they should.
Finding kidney damage early can keep it from getting worse. So it's important for people with diabetes to have regular testing, usually every year.
How is it treated?
The main treatment is medicine to lower your blood pressure and prevent or slow the damage to your kidneys. These medicines include:
- Angiotensin-converting enzyme inhibitors, also called ACE inhibitors.
- Angiotensin II receptor blockers, also called ARBs.
As damage to the kidneys gets worse, your blood pressure rises. Your cholesterol and triglyceride levels rise too. You may need to take more than one medicine to treat these complications.
And there are other steps you can take. For example:
- Keep your blood sugar levels within your target range. This can help slow the damage to the small blood vessels in the kidneys.
- Work with your doctor to keep your blood pressure under control. Your doctor will give you a goal for your blood pressure. Your goal will be based on your health and your age. An example of a goal is to keep your blood pressure below 140/90.
- Keep your heart healthy by eating healthy foods and exercising regularly. Preventing heart disease is important, because people with diabetes are more likely to have heart and blood vessel diseases. And people with kidney disease are at an even higher risk for heart disease.
- Watch how much protein you eat. Eating too much is hard on your kidneys. If diabetes has affected your kidneys, limiting how much protein you eat may help you preserve kidney function. Talk to your doctor or dietitian about how much protein is best for you.
- Watch how much salt you eat. Eating less salt helps keep high blood pressure from getting worse.
- Don't smoke or use other tobacco products.
How can diabetic nephropathy be prevented?
The best way to prevent kidney damage is to keep your blood sugar in your target range and your blood pressure under control. You do this by eating healthy foods, staying at a healthy weight, exercising regularly, and taking your medicines as directed.
At the first sign of protein in your urine, you can take high blood pressure medicines to keep kidney damage from getting worse.
There are no symptoms in the early stages of diabetic nephropathy. If you have kidney damage, you may have small amounts of protein leaking into your urine (albuminuria). Normally, protein is not found in urine except during periods of high fever, strenuous exercise, pregnancy, or infection.
Not everyone with diabetes will develop diabetic nephropathy. In people with type 1 diabetes, diabetic nephropathy is more likely to develop 5 to 10 years or more after the onset of diabetes. People with type 2 diabetes may find out that they already have a small amount of protein in the urine at the time diabetes is diagnosed, because they may have had diabetes for several years.
As diabetic nephropathy progresses, your kidneys cannot do their job as well. They cannot clear toxins or drugs from your body as well. And they cannot balance the chemicals in your blood very well. You may:
- Lose more protein in your urine.
- Have higher blood pressure.
- Have higher cholesterol and triglyceride levels.
You may have symptoms if your nephropathy gets worse. These symptoms include:
- Swelling (edema), first in the feet and legs and later throughout your body.
- Poor appetite.
- Weight loss.
- Feeling tired or worn out.
- Nausea or vomiting.
- Trouble sleeping.
If the kidneys are severely damaged, blood sugar levels may drop because the kidneys cannot remove excess insulin or filter oral medicines that increase insulin production.
Exams and Tests
Diabetic nephropathy is diagnosed using tests that check for a protein (albumin) in the urine, which points to kidney damage. Your urine will be checked for protein (urinalysis) when you are diagnosed with diabetes.
An albumin urine test can detect very small amounts of protein in the urine that cannot be detected by a routine urine test, allowing early detection of nephropathy. Early detection is important, to prevent further damage to the kidneys. The results of two tests, done within a 3- to 6-month period, are needed to diagnose nephropathy.
When to begin checking for protein in the urine depends on the type of diabetes you have. After testing begins, it should be done every year (footnote 1).
|Type of diabetes||When to begin yearly testing|
|Type 1 diabetes||After you have had diabetes for 5 years|
|Type 2 diabetes||When you are diagnosed with diabetes|
|Diabetes present during childhood||After age 10 and after the child has had diabetes for 5 years|
An albuminuria dipstick test is a simple test that can detect small amounts of protein in the urine. The strip changes color if protein is present, providing an estimate of the amount of protein. A spot urine test for albuminuria is a more precise lab test that can measure the exact amount of protein in a urine sample. Either of these tests may be used to test your urine for protein.
You will also have a creatinine test done every year. The creatinine test is a blood test that shows how well your kidneys are working.
If your doctor suspects that the protein in your urine may be caused by a disease other than diabetes, other blood and urine tests may be done. You may have a small sample of kidney tissue removed and examined (kidney biopsy).
It is important to check your blood pressure regularly, both at home and in your doctor's office, because blood pressure rises as kidney damage progresses. Keeping your blood pressure at or below your target can prevent or slow kidney damage.
Your doctor might suggest a cholesterol and triglyceride test based on your age or your risk for heart disease. Talk to your doctor about when a cholesterol test is right for you.
For more information, see When to Have a Cholesterol Test.
Diabetic nephropathy is treated with medicines that lower blood pressure and protect the kidneys. These medicines may slow down kidney damage and are started as soon as any amount of protein is found in the urine. The use of these medicines before nephropathy occurs may also help prevent nephropathy in people who have normal blood pressure.
If you have high blood pressure, two or more medicines may be needed to lower your blood pressure enough to protect the kidneys. Medicines are added one at a time as needed.
If you take other medicines, avoid ones that damage or stress the kidneys, especially nonsteroidal anti-inflammatory drugs (NSAIDs). NSAIDs include ibuprofen and naproxen.
It is also important to keep your blood sugar within your target range. Maintaining blood sugar levels within your target range prevents damage to the small blood vessels in the kidneys.
Limiting the amount of salt in your diet can help keep your high blood pressure from getting worse. You may also want to restrict the amount of protein in your diet. If diabetes has affected your kidneys, limiting how much protein you eat may help you preserve kidney function. Talk to your doctor or dietitian about how much protein is best for you.
Medicines that are used to treat diabetic nephropathy are also used to control blood pressure. If you have a very small amount of protein in your urine, these medicines may reverse the kidney damage. Medicines used for initial treatment of diabetic nephropathy include:
- Angiotensin-converting enzyme (ACE) inhibitors, such as captopril, enalapril, lisinopril, and ramipril. ACE inhibitors can lower the amount of protein being lost in the urine. Also, they may reduce your risk of heart and blood vessel (cardiovascular) disease.
- Angiotensin II receptor blockers (ARBs), such as candesartan cilexetil, irbesartan, losartan potassium, and telmisartan.
If you also have high blood pressure, two or more medicines may be needed to lower your blood pressure enough to protect your kidneys. Medicines are added one at a time as needed.
If you take other medicines, avoid ones that damage or stress the kidneys, especially nonsteroidal anti-inflammatory drugs (NSAIDs).
It is also important to keep your blood sugar within your target range to prevent damage to the small blood vessels in the kidneys.
As diabetic nephropathy progresses, blood pressure usually rises, making it necessary to add more medicine to control blood pressure.
Your doctor may advise you to take the following medicines that lower blood pressure. You may need to take different combinations of these medicines to best control your blood pressure. By lowering your blood pressure, you may reduce your risk of kidney damage. Medicines include:
- Angiotensin-converting enzyme (ACE) inhibitors or angiotensin II receptor blockers (ARBs).
- Calcium channel blockers, which lower blood pressure by making it easier for blood to flow through the vessels. Examples include amlodipine, diltiazem, or verapamil.
- Diuretics. Medicines such as chlorthalidone, hydrochlorothiazide, or spironolactone help lower blood pressure by removing sodium and water from the body.
- Beta-blockers lower blood pressure by slowing down your heartbeat and reducing the amount of blood pumped with each heartbeat. Examples include atenolol, carvedilol, or metoprolol.
Continue to avoid other medicines that may damage or stress the kidneys, especially nonsteroidal anti-inflammatory drugs (NSAIDs). And it is still important to keep your blood sugar within your target range, eat healthy foods, get regular exercise, and not smoke.
Treatment if the condition gets worse
If damage to the blood vessels in the kidneys continues, kidney failure may eventually develop. When that occurs, it is likely that you will need dialysis treatment (renal replacement therapy)—an artificial method of filtering the blood—or a kidney transplant to survive. To learn more, see the topic Chronic Kidney Disease.
What to think about
Diabetic nephropathy can get worse during pregnancy and can affect the growth and development of the fetus. If your nephropathy is not severe, your kidney function may return to its prepregnancy level after the baby is born. If you have severe nephropathy, pregnancy may lead to permanent worsening of your kidney function.
If you have nephropathy and are pregnant or are planning to become pregnant, talk with your doctor about which medicines you can take. You may not be able to take some medicines (for example, angiotensin-converting enzyme [ACE] inhibitors or angiotensin II receptor blockers [ARBs]) during pregnancy, because they may harm your developing baby.
Prevention is the best way to avoid kidney damage from diabetic nephropathy.
- Keep your blood sugar levels within your target range. Manage your blood sugar by eating healthy foods, taking your medicine, and getting regular exercise. Your doctor may want you to check your blood sugar several times each day.
- Have yearly testing for protein in your urine.
- If you have type 1 diabetes, begin urine tests for protein after you have had diabetes for 5 years.
- Children with type 1 diabetes should begin yearly urine protein screening when they are 10 years of age and have had diabetes for 5 years.
- If you have type 2 diabetes, begin screening at the time diabetes is diagnosed.
- Keep your blood pressure under control with medicine, diet, and exercise. Learn to check your blood pressure at home.
- Stay at a healthy weight. This can help you prevent other diseases, such as high blood pressure and heart disease.
- Follow the nutrition guidelines for hypertension (including the Dietary Approaches to Stop Hypertension, or DASH, diet).
- Do not smoke or use other tobacco products.
If you already have diabetic nephropathy, you may be able to slow the progression of kidney damage by:
- Avoiding dehydration by promptly treating other conditions—such as diarrhea, vomiting, or fever—that can cause it. Be especially careful during hot weather or when you exercise.
- Reducing your risk of heart disease. Lifestyle changes such as eating a heart-healthy diet, quitting smoking, and getting regular exercise can help reduce your overall risk of developing heart disease and stroke.
- Treating other conditions that may block the normal flow of urine out of the kidneys, such as kidney stones, an enlarged prostate, or bladder problems.
- Not using medicines that may be harmful to your kidneys, especially nonsteroidal anti-inflammatory drugs (NSAIDs). Be sure that your doctor knows about all prescription, nonprescription, and herbal medicines you are taking.
- Avoiding X-ray tests that require IV contrast material, such as angiograms, intravenous pyelography (IVP), and some CT scans. IV contrast can cause further kidney damage. If you do need to have these types of tests, make sure your doctor knows that you have diabetic nephropathy.
- Avoiding situations where you risk losing large amounts of blood, such as unnecessary surgeries. Do not donate blood or plasma.
- Lowering your blood pressure, because high blood pressure can make kidney damage even worse.
- Checking with your doctor to find out if it is safe for you to drink alcohol. Limiting alcohol can lower your blood pressure and lower your risk of kidney damage.
Primary Medical Reviewer E. Gregory Thompson, MD - Internal Medicine
Kathleen Romito, MD - Family Medicine
Specialist Medical Reviewer Tushar J. Vachharajani, MD, FASN, FACP - Nephrology
Current as of: April 7, 2015
Almost anything can be recycled, but just because something is recyclable does not necessarily mean that it will be recycled. This depends less on what something is made out of, and more on the policies of local recycling agencies. Small agencies and garbage collection companies tend to recycle less, because they don't have the facilities for processing numerous recyclables, and they don't collect enough to justify the expense of contracting some recycling services out. For people who are really concerned about recycling, it may be necessary to drop recyclables off at several locations. For example, plastic grocery bags may not be accepted in curbside recycling, but the grocery store might have a collection point for them.
In terms of plastics, most plastics are in fact recyclable, although some are harder to recycle than others. Plastics are marked with numerical codes which indicate what kind of plastic was used in the manufacturing of the product. Recycling companies usually list the codes they will accept for recycling, and plastics marked with other codes will not be recycled by the collecting agency. However, some communities have collection points for plastics not handled by the recycling company, and it is also possible to mail them to a central location.
Glass is all fully recyclable, although recyclers do need to sort out different kinds of glass. Again, a recycling company may dictate the types of glass it will accept. If a recycling company excludes a particular glass type, there may be a local resource which will handle it. For example, a junkyard or auto body shop might take auto glass.
Paper is also highly recyclable. Many recycling companies today take all paper and cardboard, and do not require separation. Others may ask for glossy materials to be recycled separately. Electronics and appliances like computers, cell phones, fax machines, ovens, and so forth can be recycled as well, although they cannot be put in curbside pickup. Technically considered “electronic waste,” electronics can be processed at a special facility to break them down into component recyclable parts, while appliances need to be processed by specialized scrap yards.
Metals can be recycled, although some specialty products may need to be taken to a scrap yard. Some metals actually have monetary value; copper, for example, can be sold by weight. Products like tires and motor oil can be recycled too, although many people are not aware of this. They may need to be picked up by a specialty company. Many gas stations and auto shops will accept motor oil and tires for recycling, sometimes for a small fee.
Fabrics can be recycled, although, again, they may need to be processed by a special company. Biodegradables like yard waste and food waste are not recyclable, but they can be composted. People who lack the space or inclination for composting may be able to arrange for pickup by a company which composts commercially.
It is always good to ask a recycling company directly if there are questions about a recyclable. If the company will not accept it, it may have suggestions about potential recycling options. |
Crickets are a variety of insects with more than 900 species under the order Orthoptera. They are either brown or black, and they have four wings, with their front wings covering their hind wings when standing. Their antennae run almost the entire length of their body. They are omnivorous, eating mostly decaying fungi and plant material.
To avoid predators, crickets are primarily nocturnal and prefer dark spaces such as beneath rocks and inside logs. Different species of crickets are found all over the globe, with more than 120 species in the United States alone. They live in just about every conceivable biome, from swamp and marshlands to rain forests, mountains and deserts.
If crickets live in a climate that is too moist, a fungus can begin to spread over their bodies. They tend to lay their eggs in moist areas, but they cannot live there for long. In experiments, they prefer an environment with grass and soil over one with pebbles and sand.
Crickets thrive at temperatures from 82 to 86 degrees Fahrenheit. They can live in climates with highs in the 70s, but their biological functions, such as laying eggs and reproducing, take longer. At temperatures above 96 degrees, they start to die.
Male crickets sing to attract females as part of the courtship ritual. Studies have shown, though, that females tend to prefer the songs of younger males, which are distinguished by their higher volume and pitch.
There are two reasons to compare two columns in Excel: either we have to find the matches or we have to find the differences. Matches are the data points that are present in both of the compared columns. Differences are the data points that appear in one column but are not present in the other column being compared.
There are several ways to compare two columns in an Excel sheet. In this section, we will discuss some built-in features and formulas for comparing two columns to discover the matches or the differences. The methods that we will discuss here are given below.
Compare Two Columns in Excel:
Steps to Compare Two Columns in Excel 2016:
For Exact Row Match:
Look at the picture below: we have Column 1 and Column 2 to be compared. We compare these columns by typing the simple formula =A2=B2 in a cell beside the first row of the compared columns and dragging it down to the last row of data.
This formula returns TRUE if a row has exactly the same data in both columns and FALSE if a row has different data in the two columns.
Compare Two Columns Using IF()
There is one more variation of this: we can find the exact match using the IF() function, which allows us to get a more organized result. The formula to compare columns for an exact row match using IF() is:
=IF(A2=B2, "Match", "Different") // press Enter and drag it down to the last row of the compared columns.
This formula returns "Match" if an exact match is found and "Different" if the row has different data in the two columns.
Highlight the Rows with Exact Match
Now, we will compare two columns for an exact row match using an inbuilt function.
Step 1: First select the columns to be compared. On the Home tab, under the Styles group, open the Conditional Formatting drop-down menu and select New Rule.
Step 2: The New Formatting Rule dialog box pops up. In the "Select a Rule Type:" box, click "Use a formula to determine which cells to format".
In the "Edit the Rule Description" box, type the formula that identifies the cells to be formatted.
Here, we type the formula that finds an exact row match (an example is given below) and choose the format for the matching cells.
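One formula that may be used here is =$A2=$B2; the exact cell references depend on where your data starts, and the $ signs anchor the column references so that the rule is evaluated row by row across the selected range.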
If the formula evaluates to TRUE, the cells are formatted as you specified. Observe the picture below, where the rows that have an exact match in both columns have been formatted as described in the New Formatting Rule box.
Highlight for Matches or Differences
Highlight the Matches
Till now we have compared the columns row by row. Now, we compare columns to find if a data point in column 1 is present anywhere in column 2. This will be a column-wise comparison.
Step 1: Select the columns that you want to compare. On the Home tab, in the Styles group, open the Conditional Formatting drop-down menu and click Highlight Cells Rules > Duplicate Values.
Step 2: The Duplicate Values dialog box pops up. In the first drop-down menu, make sure Duplicate is selected, and in the second drop-down menu select how you want to format the cells containing the duplicates.
Step 3: As shown in the picture below, the data points in Column 1 that are also present in Column 2 are formatted with the red background we specified in the Duplicate Values dialog box.
Highlight the Differences
To highlight the differences, i.e., the data points that are present in Column 1 but not in Column 2 and the data points that are in Column 2 but not in Column 1, follow the steps below:
Step 1: Select the columns to be compared. On the Home tab, in the Styles group, open the Conditional Formatting drop-down menu, then click Highlight Cells Rules > Duplicate Values.
Step 2: The Duplicate Values dialog box pops up. Make sure Unique is selected in the first drop-down menu, and select the format for the cells that hold the differing data points.
Step 3: As shown in the picture below, the data points in Column 1 that are not present in Column 2 are highlighted with the selected format. Similarly, the data points in Column 2 that are not present in Column 1 are also highlighted with the selected format.
Compare Two Columns Using VLOOKUP
Step 1: Beside the first row of the columns to be compared, type the VLOOKUP formula.
=VLOOKUP(A2, $B$2:$B$12, 1, 0) // Drag the cell containing the formula down to the last row of the compared columns.
This formula compares the lookup value with each data point in the range B2:B12 and returns the value from the first column of the range if an exact match is found.
Observe the picture above: VLOOKUP returns #N/A for the lookup values that are not present in the range.
Step 2: To present the result in a cleaner way, we modify the formula:
=IFERROR(VLOOKUP(A2, $B$2:$B$12, 1, 0), "-")
This modified formula displays "-" in place of "#N/A".
Compare Two Columns Using VLOOKUP (Pull Up the Related Data if Match Found)
We know that the VLOOKUP formula returns the data point from the column number specified in the formula. So the formula to pull up the related data for the lookup value with VLOOKUP is:
=VLOOKUP(C2, $A$2:$B$12, 2, 0)
This formula looks for the data point in cell C2 within the range A2:B12 and returns the data point from the second column of the range if an exact match is found.
However, this formula returns the data point from the second column of the range only if an exact match is found. For example, when the formula checks the data point in C3, i.e., Davis, the lookup range does contain Davis, but together with Davis's last name, Clawson, so the formula returns #N/A. The same happens with Annie: the lookup range contains Annie along with the last name Rotus, but the formula still returns #N/A for Annie as well.
We can also get a result for it by modifying the formula above:
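Assuming the same ranges as above (the exact cell references depend on your sheet), the modified formula takes a form like the following, with a wildcard appended to the lookup value:
=VLOOKUP(C2&"*", $A$2:$B$12, 2, 0)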
Because we have appended "*" to the lookup value, the formula also matches the text that follows the lookup value and returns the result shown in the picture above.
So, this is all about how you can compare two columns in an Excel sheet to find matches or differences. You can explore further and find other methods for doing the same.
The Future Is... Carbon Capture
Carbon capture – you may have heard about it, but how does it work? Mei Chia, global business leader for CO2 storage and hydrogen solutions at Honeywell, shares what you need to know in this podcast episode.
When industrial plants operate, they produce carbon dioxide (CO2).
What if instead of becoming a greenhouse gas in the air, the carbon dioxide were stored permanently underground?
That process is known as carbon capture.
Mei Chia joined "The Future Is..." podcast to explain how carbon capture works and why it's crucial for sustainability efforts in a warming world, especially for organizations with industrial plants and operations.
She used the analogy of a washing machine to help describe the carbon capture process.
"It's like putting a great big washing machine on the back of those [industrial] plants and capturing the carbon dioxide so that it can be sent to pipelines or sequestration, or sent for utilization," Chia said on the podcast.
Listen to the full episode to learn more about carbon capture technology.
What is Parkinson's disease?
Parkinson's disease is a chronic, progressive disease, which means it is a long-term condition in which the symptoms worsen over time. It is a motor system disorder that causes several characteristic symptoms, including tremors, slowness of movement (bradykinesia) and limb and trunk rigidity. As symptoms progress, patients with Parkinson's disease may have difficulty performing basic tasks, including talking and walking.
The condition was identified in 1817 by James Parkinson, a British physician who first described the principal symptoms of the disease.
Parkinson's disease is the second most common neurodegenerative disease after Alzheimer's disease, according to the National Parkinson Foundation (NPF). Some estimates suggest that roughly 1 million people in the United States have Parkinson's disease. However, the disease is often not diagnosed or misdiagnosed, so this figure may not be accurate. The NPF estimates that Parkinson's disease affects one in every 100 people over the age of 60, and the average age of onset is 60. The National Institute of Neurological Disorders and Stroke (NINDS) estimates that “early-onset” (in which symptoms develop before the age of 50) Parkinson's disease occurs in about 5 to 10 percent of people with Parkinson's disease. These cases are often inherited, and scientists have linked several cases to identified gene mutations. In rare cases, symptoms may appear in people under 20, a condition known as juvenile parkinsonism.
For unknown reasons, more men than women are affected. The NINDS estimates that 50 percent more men than women have Parkinson's disease.
In healthy people, cells within a small region of the brain stem known as the substantia nigra produce and release a neurotransmitter called dopamine, which controls movement and balance in the body. Dopamine is vital to proper central nervous system functioning and helps electrochemical signals move from one neuron to another.
However, patients with Parkinson's disease experience a destruction of the cells in the substantia nigra. By the time 60 to 80 percent of these cells are destroyed, there is a sharp decline in dopamine production triggering the symptoms of Parkinson's disease.
Recent studies also have found that destruction of the nerve endings that produce the neurotransmitter norepinephrine may contribute to many of the nonmotor-related symptoms of Parkinson's disease. These include fatigue and blood pressure problems. Norepinephrine is the chief chemical messenger within the sympathetic nervous system (the part of the autonomic nervous system that controls automatic functions in the body).
Some patients who appear to have Parkinson's disease may actually have a separate disorder that mimics Parkinson's disease. These disorders may be grouped with Parkinson's disease and are collectively known as parkinsonism. Examples of non-Parkinson's disease parkinsonism include:
- Drug-induced parkinsonism. Caused by taking certain medications, including antipsychotics, calcium-channel blockers, the dopamine blocker metoclopramide and the blood pressure medication reserpine.
- Vascular parkinsonism. Results from blockage of small blood vessels to the brain.
- Essential tremor. A progressive condition that tends to run in families and usually affects both hands, particularly when the hands are moving. It may affect the head as well. Patients with this condition usually do not have other parkinsonian features.
- Lewy body dementia. Condition marked by early dementia, hallucinations and fluctuations in cognitive status.
- Multiple system atrophy. An illness that causes symptoms similar to Parkinson's disease, but it is much less common and tends to develop more rapidly. The cause of this neurodegenerative disease is unknown.
- Other conditions. Parkinsonian symptoms may appear in patients with other neurological disorders, including Wilson's disease, Huntington's disease, Alzheimer's disease, spinocerebellar ataxias and Creutzfeldt-Jakob disease.
How is it diagnosed?
In diagnosing Parkinson's disease, a physician will compile a thorough medical history and perform a complete physical examination. To date, no blood tests or other laboratory tests have been shown to accurately diagnose Parkinson's disease.
While diagnosis may be difficult, a neurological examination may provide clues to the presence of the illness. The physician will also look for at least two of the following three signs: tremor when the patient is at rest, slowness of movement (bradykinesia) or rigidity. Various tests can be used to reveal the presence of these symptoms. For example, patients may be asked to tap a finger and thumb together or tap their foot to look for signs of bradykinesia. Meanwhile, postural instability is tested by asking patients to retain their balance while they are pulled backwards by the physician.
Physicians may recommend a brain scan, using imaging technology such as magnetic resonance imaging (MRI), to rule out other conditions that can cause similar symptoms.
Parkinson's disease is diagnosed when a combination of a patient's medical history, symptoms and test results indicate illness that cannot be explained by other factors or diseases.
How is Parkinson's disease treated?
There is no cure for Parkinson's disease. Medications are usually recommended and can substantially reduce a patient's symptoms. Most medications are aimed at increasing the levels or reducing the breakdown of dopamine in the brain. However, it is not possible to take dopamine in pill form because the chemical is broken down before it can reach the brain.
Most patients receive a combination of the drugs levodopa and carbidopa to treat the symptoms of Parkinson's disease. Neurons inside the body convert levodopa to dopamine, and carbidopa helps to ensure that this process does not happen until the levodopa reaches the brain. Carbidopa also helps reduce side effects that are associated with levodopa, such as nausea and vomiting. Other side effects of levodopa include dyskinesia (uncontrollable movements), sudden sleep onset, hallucinations and psychosis.
Most patients who take this medication gain some benefit from it, according to the National Institute of Neurological Disorders and Stroke (NINDS). Usually, these drugs have the most significant impact on rigidity and slowness of movement. They are also effective in reducing tremors. However, they may have little or no effect on other symptoms, such as postural instability or nonmotor-related symptoms. They also do not repair or replace damaged nerve cells in the brain, nor do they stop progression of the disease.
The benefits of levodopa therapy are usually experienced soon after beginning the medication, although the dosage may be gradually increased over time in order to be most effective. Levodopa can dramatically increase the quality of life for people with Parkinson's disease. However, it is not a cure for Parkinson's disease and cannot slow the progression of the disease. Eventually, the effectiveness of levodopa therapy may decrease, in which case patients usually experience a gradual worsening of symptoms (called the “wearing off effect”) or periodic attacks of more severe symptoms (called the “on-off effect”).
Other drugs may be used to treat the symptoms of Parkinson's disease or increase the effectiveness of levodopa. These may include:
Dopamine agonists: These drugs treat Parkinson's disease by mimicking the effects of dopamine on target cells. They are sometimes used in conjunction with levodopa. Although they are generally considered safe, the side effects from this type of medication can include sleep disorders, hallucinations, confusion and nightmares. They have also been linked to compulsive behavior (e.g., gambling, hypersexuality, excessive shopping) in some patients.
COMT inhibitors and MAO-B inhibitors: These drugs inhibit the breakdown of dopamine caused by the enzymes catechol-O-methyltransferase (COMT) and monoamine oxidase B (MAO-B). They are often used to extend the effectiveness of levodopa. The most common side effect from these medications is diarrhea. People who are taking certain other drugs (e.g., the antidepressant fluoxetine or the pain medication meperidine) should consult their physician before taking certain MAO-B inhibitors, as they may have harmful interactions.
Anticholinergics: These drugs are primarily used to treat tremors and muscle rigidity. They work by reducing the effects of the neurotransmitter acetylcholine. It is thought that excess levels of acetylcholine increase neuron activity in the brain and that, by reducing these levels, anticholinergics may be effective in controlling tremors and muscle rigidity. However, anticholinergics are only effective in roughly half of the patients who use them, according to the NINDS. Even for people who respond well to anticholinergics, the medication usually offers limited benefits for a short period of time. Side effects may include dry mouth, constipation, urinary retention, hallucinations and delirium.
Amantadine: This drug may be used to treat the symptoms of Parkinson's disease, as well as some of the side effects of levodopa, such as dyskinesia. Amantadine is an antiviral drug that is sometimes used to treat certain types of influenza. It is not fully understood how the drug works to control symptoms of Parkinson's disease, although it is thought to increase the effects of dopamine. Side effects may include insomnia, mottled skin and hallucinations.
Surgery may be an option for some patients whose symptoms are not adequately controlled with medication. One such procedure, deep brain stimulation, uses a surgically implanted device to deliver electrical impulses to targeted areas of the brain. It is important to note that not all patients are good candidates for surgery. Deep brain stimulation has been shown to be more effective in younger patients with Parkinson's disease and is generally not suitable for patients who are in poor overall health. However, some older patients in good health have benefited from deep brain stimulation.
Another form of surgery, known as a pallidotomy or thalamotomy (depending on the area of the brain targeted), involves the destruction of parts of the brain that are associated with the symptoms of Parkinson's disease. During this procedure, surgeons use radio frequency energy to heat and destroy the globus pallidus (pallidotomy) or the thalamus (thalamotomy). Abnormal activity in these two small areas in the brain is believed to contribute to movement problems in people with Parkinson's disease.
Because of the risks with radiofrequency destruction of the globus pallidus or thalamus, deep brain stimulation is usually a preferred treatment method over pallidotomy and thalamotomy for patients with advanced Parkinson's disease.
Surgery may improve many of the symptoms of Parkinson's disease. However, patients may find that they still have to rely on medication to effectively control symptoms. They may also still have to take medication to treat symptoms that are not affected by surgery, such as nonmotor-related symptoms.
Physical therapy and occupational therapy can help patients learn new ways to cope with their symptoms. Physical therapy includes a mixture of exercise, massage and other treatments to maintain strength and flexibility. A physical therapist is likely to teach the patient exercises that can be performed at home to strengthen and retrain muscles. This helps patients improve balance and reduce fatigue. Occupational therapy is similar to physical therapy, but it focuses on improving patients' fine motor skills so they can better accomplish daily activities, such as dressing and bathing.
Patients are encouraged to engage in complementary or supportive therapies, such as eating a well-balanced diet and exercising regularly. Eating a diet that is high in fiber and consuming plenty of fluids can help reduce constipation, which can be associated with Parkinson's disease. Additionally, patients who remain active may better cope with the disabling nature of the disease.
Finally, some patients and their caregivers may find that support groups or individual counseling are helpful when coping with Parkinson's disease. This type of therapy may be a valuable outlet for both the patient and the patient's caregiver to discuss feelings of frustration or depression.
Anticholinergics; antihistamines; antidyskinetics; antitremor drugs, such as amantadine; or antiparkinson medications (dopamine stimulators, dopamine precursors), including bromocriptine, levodopa and carbidopa. Selegiline is prescribed to maintain maximal effectiveness of levodopa and carbidopa. All these decrease tremors and reduce muscle rigidity, but they often have significant side effects.
Symmetrel (Amantadine), Seroquel (Quetiapine), Ambien (Zolpidem)
Remain as active as possible, and rest often. Physical abilities vary greatly between persons with this disease. The only restrictions are those imposed by muscle rigidity.
What might complicate it?
Complications of the disease include difficulty moving and speaking, confusion, depression, aspiration pneumonia, weight loss, and falls and injury.
The symptoms of Parkinson's disease can be relieved or controlled. There is, however, no cure, and all individuals will continue to deteriorate. Life expectancy is not significantly reduced unless onset is under 50 years of age.
Differential diagnoses may include Wilson's disease, striatonigral degeneration, essential tremor, Creutzfeldt-Jakob disease, Huntington's disease, Shy-Drager syndrome, progressive supranuclear palsy, cortical basal ganglionic degeneration, stroke, and hydrocephalus.
Type of rehabilitation
Physical therapy, including range of motion exercises (ROM), occupational therapy, and speech therapy may be useful in special cases, one to five times a week, for limited periods.
Neurologist, physical therapist, occupational therapist and speech therapist.
Last updated 2 July 2015
Along with a heart attack, stroke is one of the most common causes of death in industrialized countries.
Root cause: A stroke is caused by a lack of blood supply to the brain. Deposits in the vessel walls (arteriosclerosis), a blood clot (thrombus) or a blockage (embolism) often result in narrowing or occlusion of one or more blood vessels. More rarely, a tear in a vessel (from injury or calcification) can trigger a stroke. As a result, blood flow to the brain is reduced, leading to a lack of oxygen in the brain cells. If the condition persists for more than a few minutes, the affected brain cells begin to die. A stroke is an acute, life-threatening illness and must be treated immediately in the stroke unit of a hospital.
Risk factors include nicotine consumption, chronic high blood pressure, obesity, diabetes mellitus, and chronically high blood cholesterol. Increasing age and genetic predisposition also play an important role.
The main symptoms of a stroke are sudden, very severe headache, paralysis, and speech and vision problems.
Stroke units are specialized hospital wards providing maximum care. They have the necessary equipment and specially trained staff. Immediate computed tomography (CT) or magnetic resonance imaging (MRI) confirms whether or not it is a stroke. Early treatment in a stroke unit reduces the likelihood of permanent damage and lowers mortality.
A stroke caused by a vascular occlusion requires immediate thrombolysis with medication, which attempts to reopen the blocked vessel. A drug containing enzymes that break down the thrombus (the clot blocking the vessel) is introduced into the bloodstream. If the blood clot is above a certain size, an attempt is made to remove it with a catheter (thrombectomy).
If the stroke is caused by bleeding in the brain, the first step is to stop the bleeding with medication. If this is not possible or if the pressure in the skull is too high, the skullcap is opened with an emergency operation and the pressure is relieved.
Once the causes of the stroke have been addressed, the person affected is continuously monitored in the stroke unit, which has a whole team of doctors from various specialties at its disposal.
Depending on how quickly and how well the initial measures are taken and how quickly the person is transferred to a stroke unit, the consequences can range from minor impairments to serious physical and mental health problems. The earlier a stroke is treated properly, the greater the chance that the extent of the lasting damage can be kept low.
Many Indian parents hope to mold their child into an IAS or IPS officer, yet the abilities and duties of these officials are only dimly understood. This article covers the distinction between constitutional law and administrative law, how the two are connected, and the potential benefits of choosing Administrative Law as an optional subject in the UPSC exam.
What is Administrative Law?
The law that governs administrative action is known as administrative law; according to Ivor Jennings, administrative law is the law relating to administration. It establishes the structure, authority, and responsibilities of administrative authorities. It covers the legal obligations of public authorities, the authority of regular courts to oversee administrative authorities, the rule-making authority of administrative bodies, and the quasi-judicial functions of administrative agencies. It controls the executive branch and ensures that it deals fairly with the people.
Administrative law is a subset of public law. It deals with how individuals interact with the government. It establishes the organizational framework and power structure of the administrative and quasi-judicial agencies that implement the law. It sets up a control system through which administrative agencies stay within their boundaries, and it is primarily concerned with official acts and procedures.
Reasons why it was developed
- Concept of a welfare state – Government activities expanded as the States transitioned from being laissez-faire to a welfare state, necessitating more regulation of those activities. Consequently, this area of law evolved.
- Inadequacy of Legislature – The daily, ever-changing requirements of society cannot be addressed by the legislature in its limited time. Even if it does, the drawn-out and laborious legislative process would make the regulation useless since the needs would have changed by the time it was put into effect.
- The inefficiency of the Judiciary – The court process for making decisions is extremely slow, expensive, complicated, and formal. Furthermore, it is impossible to dispose of suits quickly because of the overwhelming number of cases already scheduled. As a result, there was a demand for tribunals.
- Scope of Experiment – Administrative law can be modified to meet the needs of the State apparatus because it is not a codified law. It is hence more adaptable. The strict legislative processes do not need to be followed again.
Indian Administrative Law
In India, it makes an effort to restrict administrative acts by limiting delegated legislation and judicially reviewing administrative discretionary decisions. It also outlines the composition and structure of tribunals.
- Delegated Legislation – Legislation created by an organ other than the legislature, to which the legislature has delegated its functions, is known as delegated legislation. Executives and administrators are given this power so that they can address the day-to-day practical problems they encounter. Although the practice of delegated legislation is not in itself objectionable, there is a risk that the power may be abused, hence safeguards are required.
- Judicial Review – Judicial review of administrative action is a crucial component of administrative law. An administrative authority must possess the freedom to make decisions on the spot. The choices made while using these discretionary powers must, however, be sound. The ‘Rule of Law’ answer to the problem of discretion is reasonableness. It brings ideals of openness, uniformity, and predictability associated with the “rule of law” closer to discretionary authorities. Administrative activity and discretion are monitored and regulated through the judicial review procedure.
The validity of the administrative action is ensured by judicial review, which also helps to maintain the administrative authority within reasonable limits. The Court questions whether the administrative body followed the law when acting. However, the courts are unable to and do not replace the administrative authority’s judgment with their own.
Courts will therefore consider, in a case challenging administrative acts, whether there was a failure to exercise judgment, whether there was an abuse of discretion, whether there was any illegality, and/or whether there was any procedural irregularity.
- Tribunals – Tribunals are established to settle complaints and handle conflicts more quickly. In a tribunal, cases are decided by a Bench made up of members who are both judicial and non-judicial. But tribunals don’t take the place of courts. Numerous tribunals in India were established using Central Acts.
Scope of Administrative Law
It is currently the most notable phenomenon in the welfare state and is expanding in relevance and attention. For the individuals in charge of carrying out administration as well as for law students, knowledge of administrative law is crucial.
- Unlike the IPC or the law of contracts, administrative law is not codified. The foundation is the constitution.
- It is essentially judge-made law and a subset of public law that addresses the constitution and the delegation of authority.
- It is concerned with the structure and authority of administrative and quasi-administrative authorities.
- The official action and the process by which it is obtained are the main topics of administrative law. Examples include creating rules, implementing rules, monitoring actions, and simple administration.
- It has a judicial review control structure that ensures the effectiveness and confinement of administrative powers.
- Administrative law derives its authority from statute and constitutional law.
- Administrative law fosters transparency, openness, and honest government that is more populist and related to both individual rights and public interests.
- The study of administrative law is a tool, not a goal in itself.
- Everywhere and whenever a person is the victim of the arbitrary use of governmental authority, administrative law begins and develops. Administrative law is a subfield of the sociology of law rather than the philosophy of law.
- It is the body of legislation that controls how the government’s administrative authorities operate. Rule-making, rule adjudication, the enforcement of certain regulations, and the associated agenda are all examples of government agency activity.
Administrative law is the body of law that controls how the Executive functions and guards against any abuse of authority on the part of the Executive or any of its agents. It is a relatively young branch of law that will keep developing in line with the shifting demands of society. The goal of administrative law is to bring the Executive’s discretionary powers into compliance with the “Rule of Law”, not to eliminate them.
This unit provides an in-depth look at the interplay between losses in privacy and gains in convenience that accompany the ever-expanding use of and reliance on digital media and technology in our lives. The aim of the unit is not to convince students of a specific stance; rather, it is to provide an opportunity for students to look critically at the ways in which privacy’s role in our modern lives has changed and to think about taking intentional action regarding their own use of digital media. Adolescents (and people in general) often engage in activities without fully understanding the consequences or repercussions. By analyzing ways in which the use of technology may have a lasting impact on their privacy, students might choose to change their practices, or they may realize there is no conflict with their current usage.
The majority of class time during this unit will be committed to discussions of topics that arise from a wide variety of informational and literary texts that delve into the topics of privacy and digital technology in our lives. While there is much current information being published on these topics, the unit also pulls from some historical sources and asks students to consider changes to privacy over previous eras. As the core text, George Orwell’s 1984 provides a solid foundation on issues concerning privacy. This novel elucidates two interrelated aspects of privacy that this unit seeks to develop: first, the internal thoughts that we develop and contemplate without outside influence; and second, the freedom from being observed, accessed, and controlled by outsiders.
Throughout this unit, students will produce short argumentative pieces drawing evidence from the texts read for and discussed in class. Classroom instruction on producing a claim, drawing and citing evidence from a text, and using reasoning to develop a stance will be addressed through lessons in the unit. The short pieces of writing students produce throughout the class will culminate in a final argumentative essay weighing the interplay and value of privacy and convenience in our digital lives.
Law and mechanism in the cosmos
We have reached an interesting moment in our journey to understand the origin and development of inequality and prejudice. If we look backwards, we find Platonic solids and a mathematical and geometrical structure of the universe, tied to the model of the Great Chain of Being and to plenitude and continuity from the Timaeus. If we look forward, we see the idea of the bounded universe of concentric circles dissolving, and the discovery of laws of nature in the universe that will replace emanation as the explanation of nature. Looking both backwards and forwards is the famous astronomer and mathematician Johannes Kepler (1571-1630).
In looking forwards, he discovered three laws, known as Kepler’s three laws.
The first two were published in his New Astronomy of 1609.
The first law says that the orbit of a planet is not a circle but an ellipse, with the sun at one of its two foci. This avoids the need for epicycles, and it places the sun itself, rather than the center of the earth’s orbit, at the center of the system. It was not clear to Kepler what force would steer a planet at the correct speed along an elliptical curve.
The second law says that planets do not orbit at a uniform speed: a planet sweeps out equal areas in equal times, so it moves more quickly when it is nearer the sun (perihelion) and more slowly when it is further from the sun (aphelion).
The third law, perhaps the most important, states that there is a mathematical law relating the distance of a planet from the sun to the time it takes to complete an orbit. It is often called the harmonic law, and it is this: the square of a planet’s orbital period is proportional to the cube of its distance from the sun. So, for example, a planet that is four times (2²) as far from the sun as another planet takes eight times (2³) as long to complete an orbit.
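In modern notation (not Kepler’s own), the third law can be written for any two planets as

$$\frac{T_1^2}{T_2^2} = \frac{a_1^3}{a_2^3}$$

where $T$ is the orbital period and $a$ is the distance from the sun. If one planet is four times as far from the sun as another ($a_1 = 4a_2$), then $T_1^2 = 64\,T_2^2$, and so $T_1 = 8\,T_2$, i.e. eight times as long to complete an orbit.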
Remember that in the past such things had been explained by emanation, angels, and gods. Now Kepler was explaining the motion of the planets by means of mathematical laws. And for Kepler, this meant that God was a mathematician, and that the universe was ordered or systematic.
To employ this new-found order, Kepler looked backwards to Plato and to the Platonic solids, to provide us with perhaps the most ingenious and beautiful structure of the old cosmos using modern mathematical sensibilities. He devised a model of the Harmonices mundi, or Harmony of the World (1619), the work in which his third law also appeared. It was the beauty of the mathematics of his new cosmos that now gave the cosmos its harmony, its harmonious order, and its ratio and proportion. He found the ratios between the planets’ motions to correspond to musical ratios. Earlier, in his Mysterium Cosmographicum (1596), he had set out to answer the question of why there were six planets. He speculated that the beauty of the mathematics in the universe was carried by the five Platonic solids, which held each planet in relation to the others. Each Platonic solid, or regular shape, was therefore placed within the next one until he achieved the famous diagram of the harmony of the universe.
(see Kepler’s solar system from his Mysterium Cosmographicum, 1596 via Wikimedia Commons and Kepler’s Platonic solid model of the Solar System from Mysterium Cosmographicum.)
With Kepler then we get the beginning of laws of motion tied to the ancient idea of Platonic solids. It was to the laws that science would turn its attention.
Next, we turn to Galileo Galilei (1564-1642). His achievements are remarkable, and here is a list that mentions them all too briefly. I have divided them into three groups; those that support acentrism; those that contribute new laws; and the famous story of his trial.
With regard to the growing awareness of acentrism that was undermining the Aristotelian universe, in 1604 Galileo proved that a new star was much further from the earth than the model of concentric circles allowed. And a new star proved that the heavens were after all changeable. This offended the logic of mastery and identity, which held that what was absolutely true was also absolutely unchangeable. Tycho Brahe (1546-1601) had also discovered a new star, in 1572.
By 1609 Galileo had heard reports of a new instrument called the telescope. He built his own refracting telescope, which made objects appear roughly 1,000 times larger and more than 30 times nearer than they appear to natural vision. He described what he saw in his book Sidereus nuncius, or The Sidereal [Starry] Messenger (1610). It changed forever the view of the universe.
UNFOLDING GREAT AND MARVELLOUS SIGHTS,
AND PROPOSING THEM TO THE ATTENTION OF EVERY ONE,
BUT ESPECIALLY PHILOSOPHERS AND ASTRONOMERS,
BEING SUCH AS HAVE BEEN OBSERVED BY GALILEO GALILEI, A GENTLEMAN OF FLORENCE, PROFESSOR OF MATHEMATICS IN THE UNIVERSITY OF PADUA, WITH THE AID OF A TELESCOPE LATELY INVENTED BY HIM.
Respecting the Moon’s Surface, an innumerable Number of Fixed Stars, the Milky Way, and Nebulous Stars, but especially respecting Four Planets which revolve around the Planet Jupiter at different distances and in different periodic times, with amazing velocity, and which, after remaining unknown to every one up to this day, the Author recently discovered, and determined to name the MEDICEAN STARS.
He showed that the moon’s surface was not smooth (it had been thought to be smooth like a pearl);
Of the moon he stated that
‘I feel sure that the surface of the Moon is not perfectly smooth, free from inequalities and exactly spherical, as a large school of philosophers considers with regard to the Moon and the other heavenly bodies, but that, on the contrary, it is full of inequalities, uneven, full of hollows and protuberances, just like the surface of the Earth itself, which is varied everywhere by lofty mountains and deep valleys.’
He also measured the height of some of the moon’s mountains, measurements that are still broadly agreed with today.
Cohen says that in order to see how the moon was thought about up to this point, one could look at Dante’s Divine Comedy: in Book 2 of the Paradiso, Dante ascends with Beatrice towards Paradise, first reaching the moon, the first of the heavenly bodies. Dante the pilgrim asks Beatrice to explain why, from the earth, the moon appears to have dark spots when, now that they have arrived at it, it is smooth and flawless, an ‘eternal celestial pearl … as brilliant, hard, and polished as a diamond struck by a ray of sunlight.’ Beatrice answers that it is another example of how, even when he uses his own senses, man frequently demonstrates the weaknesses of mortal or human reasoning.
Dorothy L Sayers notes that the term ‘eternal celestial pearl’, meaning the moon, is in the original Italian l’eterna margarita (not the pizza). Margarita, meaning pearl, comes from Persian and Sanskrit, and Sayers says that the translation of margarita as onion―which she uses in her translation―is very appropriate because the word onion, according to Pliny, means an exceptionally large pearl which is single and undivided.
Shakespeare uses onion in this same sense in Hamlet:
And in the cup an onion shall he throw
Richer than that which four successive kings
In Denmark ‘s crown have worn (Hamlet v.ii.)
He measured the heights of some of the moon’s mountains―if the moon resembled the Earth, then again this lent weight to the idea that the earth was not unique in the heavens (remember that in the Revolutions Copernicus had suggested that the earth was just another of the ‘wandering stars’). Galileo also refuted the idea that the earth was at the bottom of the cosmos. He conceived of ‘earthshine’ in contrast to moonshine, arguing that the faint light on the dark part of the moon was not due to an internal luminosity of the moon, as had been believed, but to light from the sun reflected by the earth. If all the planets reflected sunlight in this way, then, he said, perhaps they all orbit the sun.
Based on his observations of stars in his telescope he argued that the stars must be much further away than the Aristotelian system allowed for. The Milky Way, he said, ‘is nothing else but a mass of innumerable stars planted together in clusters’ (Galileo, 1880, 42).
But the really big news in the book, the one that most contributed to acentrism, was the discovery of four planets never seen before: the moons of Jupiter. Galileo called them the Medicean stars in honor of Grand Duke Cosimo of the House of Medici, but history now shows that two other people, Thomas Harriot in England and Simon Marius in Germany, may well have seen these before Galileo. Marius may have built his own telescope and seen the moons of Jupiter in 1609, but he did not begin making his notes until the day after Galileo first described them in a letter. Galileo accused Marius of plagiarism, but Marius has left his mark here: he later named the moons Io, Europa, Ganymede and Callisto, and these are the names by which they are still known today.
The Sidereal Messenger proved that the universe had no one single center, and since Jupiter was the center of the orbit of its four moons, perhaps there were many such centers. This struck a fatal blow to the Aristotelian system. It questioned the Great Chain of Being in regard to the hierarchy of continuity and the process of its implementation by emanation. It also questioned the logic of mastery and identity by suggesting that the unchangeable heavens are changeable, or, as we might say today, fluid. Might we go as far as to say that Galileo’s discoveries ‘queered’ the cosmos? New worlds gave additional incentive to believe not just that there are other worlds, but that there might also be other kinds of life on other worlds. John Milton, who visited Galileo while the latter was under house arrest, said in Paradise Lost
Of amplitude almost immense, with stars
Numerous, and every star perhaps a world
Of destined habitation
(Book VII. 620-3)
There is another way in which Galileo undermined the picture of the universe with an unmoving earth at its center. Remember that Aristotle had argued that the earth must be at rest since an object thrown straight into the air returns back to the hand that threw it. If the earth moved, then the object would get left behind. But something was in play that Aristotle did not recognize. Objects behave the same if they are travelling at a constant speed or if they are at rest. You can’t tell them apart. Think of being on a train and throwing a ball in the air. If the train is traveling at a constant speed, then the ball will fall back to you, just as it would do if the train were sitting in the station. Throwing an object in the air does not show whether the train, or the earth, is moving at constant speed or at rest.
On its own, then, this kind of Galilean relativity undermined the centrism of the old universe and the immobility of the Earth. But the logic of truth had not changed. What had changed was that the picture of the cosmos no longer conformed to the logic of truth. Remember we saw that truth had to be something ‘in-itself’, resistant to infinite regression, simple, and not dependent upon anything else, irreducible to anything else, and maintained by its own necessity. Into the gap left by the demise of the Aristotelian universe stepped something that still met these same requirements of the logic of truth, something that was its own mastery and identity: mechanics, or the laws of motion in nature.
The universe may have changed, but the logic of truth had not. The structure of truth was the same, but something new was needed that would now replace the old truths. Truth needed a new version of identity and mastery for its old logic. The new universe would not be Aristotelian, but the logic of its truth would remain so. In our next lectures we will look at how this developed in natural philosophy, and in future lectures we will look at how it played out in terms of inequality in the social and political world.
Galileo himself made dramatic contributions to mechanics with his three laws of motion.
- The law of uniform motion―the distance travelled equals the speed multiplied by the time taken, for motion at a constant speed
- The law of acceleration of falling bodies. Gravity accelerates a falling object according to a mathematical rule: the distance travelled is proportional to the square of the time taken (e.g., an object that falls 1 unit of distance in the first second will have fallen 4 units after 2 seconds and 9 units after 3 seconds). As such, a feather and a hammer, with no air resistance, would fall at the same rate and hit the ground together if dropped together from the same height. Aristotle had said that the speed of the descent of an object would depend on its weight (we now know that the acceleration of a freely falling body is about 9.8 m/s², the acceleration due to gravity).
So, to sum up these two laws,
- Natural horizontal motion is motion at a constant velocity
- Natural vertical motion is falling at a constant acceleration
- The third law of motion concerns the motion of a projectile. The ball thrown into the air on the moving train travels in a parabola back to the hand. Things don’t drop straight down because horizontal motion is combined with vertical motion to form a curved path. (Bombs dropped from a plane don’t fall straight down: with the vertical acceleration due to gravity combined with the horizontal speed of the plane, they fall in a curved path.)
Why is this important? Because when a projectile lands, say, on a table, its vertical motion (under gravity) is stopped, but its horizontal motion is unaffected. Thus Galileo showed Aristotle to be wrong again. Aristotle believed the natural state of an object was to come to rest. Galileo argued that an object would keep moving, possibly eternally. With this he paves the way for Newton’s first law of motion later in the same century.
So far, then ...
Galileo has undermined Aristotle’s concentric universe and his physics. Mechanics is replacing the emanationist universe of the Great Chain of Being. So, will mechanics now become the new shape of the logic of mastery?
But before we look at this question, it is worth looking at Galileo’s trial as an example of how hard the old shape of mastery was prepared to defend itself against the new shapes.
In 1633 Galileo stood trial in Rome before the Inquisition for heresy in his all-too Copernican book Dialogue Concerning the Two Chief World Systems (1632). Despite previous warnings he was found guilty of heresy for supporting the idea that the earth moves and is not the center of the universe. The logic of mastery sentenced him to life imprisonment.
Galileo is reported to have dropped to his knees, holding the Bible, and offered his abjuration (his renunciation of these heresies) in Latin. He said, ‘I also swear and promise to adopt and preserve entirely all the penances which have been or may be by this Holy Office imposed on me.’ Yet, legend has it, as he rose to his feet he uttered under his breath ‘Eppur si muove’ (And yet, it moves). His life sentence was served as house arrest in Siena where, despite what he had said to the Inquisition, he worked on another publication, Dialogues Concerning Two New Sciences (1638). Finally, he moved to the hills above Florence. He died, blind, in 1642.
Only in 1992 did the Pope agree that Galileo should not have been condemned.
Copernicus, N. (2002) On The Revolutions of the Heavenly Spheres, Philadelphia, Running Press.
Galileo Galilei, (1880) The Sidereal Messenger, trans. Edward Stafford Carlos, London, Rivingtons.
Galileo Galilei, (1880) The Sidereal Messenger, trans. Edward Stafford Carlos, London, Rivingtons, p. 15.
Paradiso, II. 31-4.
The root of onion in Latin is unionem, a single pearl or onion.
Book 1.9., Copernicus, 2002, p. 19. |
Grading can be a time of dread or joy for both teachers and elementary students. However one feels about it, grading elementary students on their progress is an essential step in helping guide future instruction as well as a way to keep students and their parents informed of their achievements and areas of need. There are two methods of calculating elementary grades, each offering advantages and disadvantages. When used appropriately, both methods can help students grow as learners.
Traditional Method: Averaging
Within each subject area, add up the total number of points that the assignments, tests or quizzes were worth within the grading period. This will give you the total number of points possible for the grading period. The grading period usually occurs in quarters, trimesters or semesters. For example, a grading period for math may have five different grades that are worth 20 points, 10 points, 20 points, 15 points and 50 points each. These assignments add up to a total of 115 points for the math grading period.
Add up the total number of points that the student earned for the assignments within the grading period. As an example, a student may have earned 11 points, 9 points, 20 points, 15 points and 48 points for the five math assignments during the grading period. These points add up to a total of 103 points earned.
Divide the total number of points earned by the total points possible in the grading period to get the final grade. For example, 103 (total points earned) divided by 115 (total points possible) equals 0.896. This can then be rounded to 0.90, or a 90% in math for the grading period. This method can be used in all subject areas.
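For teachers who keep their grade book in a spreadsheet or script, the same calculation can be automated. The following is a minimal sketch using the example point values above; the lists and the rounding rule are illustrative assumptions, not a prescribed grade-book format.

```python
# Sketch of the averaging (total-points) method using the example values above.
points_possible = [20, 10, 20, 15, 50]   # what each assignment was worth
points_earned = [11, 9, 20, 15, 48]      # what the student earned on each

total_possible = sum(points_possible)    # 115 points possible
total_earned = sum(points_earned)        # 103 points earned

final = total_earned / total_possible    # 0.896
print(f"{total_earned}/{total_possible} = {final:.3f}, i.e. {round(final * 100)}% for the grading period")
```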
Standards Based Grading
Identify a specific skill that will be graded that relates to state standards. In standards based grading, there is not just one grade for each subject, but rather a grade for every skill that is learned within that subject. For example, instead of awarding one grade for math using the averaging method, students can be given three separate grades on multiplying large numbers, long division and addition.
Analyze the grades that were awarded during the grading period for each skill. Grades will not be given in points but rather with the letters E, M, A and FFB. These letters relate to how well a student has mastered the specific skills. E = Exceeds, M = Meets, A = Approaches, and FFB = Falls Far Below. For example, a student may receive five grades for long division: FFB, A, A, M and M.
Identify the last two grades that were given in each specific skill. Based on these last grades, you can make a decision on what grade the student deserves for each skill. If the grades comprise FFB, A, A, M and M, the student deserves an M for long division. The student started off struggling with the skill but demonstrated growth and mastery in the skill by the end of the grading period.
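A grade book script can encode the same "most recent evidence" idea. The sketch below assumes the E/M/A/FFB scale described above and a simple rule for the last two marks (use them if they agree, otherwise take the lower one); the tie-breaking rule is an illustrative assumption rather than a fixed policy.

```python
# Sketch of a standards-based decision rule that weights the most recent marks.
SCALE = ["FFB", "A", "M", "E"]   # Falls Far Below < Approaches < Meets < Exceeds

def skill_grade(marks):
    """Return a final grade for one skill from a chronological list of marks."""
    last_two = marks[-2:]
    if last_two[0] == last_two[1]:
        return last_two[0]                      # recent marks agree
    return min(last_two, key=SCALE.index)       # otherwise, the more conservative of the two

print(skill_grade(["FFB", "A", "A", "M", "M"]))  # -> "M", matching the long-division example
```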
- "How to Grade for Learning"; Ken O'Connor; 2009
- Both grading systems can be used much more easily with good organizational skills. This can be accomplished by setting up a grade book in a program such as Microsoft Excel. Many schools also have grade book systems that work quite well for the averaging method.
About the Author
Stephen Hart has been writing for eHow since 2011. He received his Bachelor of Arts in elementary education from Washington State University in 2009 and is currently pursuing his special education certification. |
In a move worthy of a sci-fi blockbuster, NASA's Double Asteroid Redirection Test (DART) successfully slammed into its target asteroid, demonstrating a way to deflect objects that could one day threaten Earth. After 10 months of flying through space, DART finally made its move against an asteroid.
The Johns Hopkins Applied Physics Laboratory (APL), DART's mission control center based in Laurel, Maryland, announced the successful impact between DART and the asteroid at 7:14 p.m. EDT on Monday, September 26.
Unprecedented Success For NASA's Planetary Defense
DART comprises an integral part of NASA's planetary defense strategy, which has a primary mission of protecting the Earth from large, dangerous asteroids that could impact the planet.
"As NASA studies the cosmos and our home planet, we're also working to protect that home, and this international collaboration turned science fiction into science fact, demonstrating one way to protect Earth," said NASA Administrator Bill Nelson.
The asteroid in question was dubbed Dimorphos and measured 530 feet in diameter. Dimorphos is actually a moonlet orbiting a larger asteroid named Didymos. Neither asteroid posed a direct threat to Earth, but both were close enough to our planet to act as targets for viable testing of DART's capabilities. The DART mission confirmed that NASA could deliberately crash a spacecraft into an asteroid, using kinetic impact to deflect it.
Investigating the Success of DART
NASA's observation team is now investigating Dimorphos to confirm its altered orbital pattern. It's estimated that the impact shortened the orbit by 10 minutes, about 1% of its cycle. Precise measurement of the asteroid's deflection from its initial orbit was one of the prime purposes of the test.
The one-way mission to Dimorphos was observed by the spacecraft's Didymos Reconnaissance and Asteroid Camera for Optical navigation (DRACO), working with an onboard autonomous navigation and control system. It uses NASA's Small-body Maneuvering Autonomous Real-Time Navigation (SMART Nav) algorithms to identify the target asteroid, differentiate it from the body it orbits, and precisely target it.
DART's spacecraft traveled approximately 56,000 miles on its one-way trip to reach Dimorphos and crashed into the asteroid at 14,000 mph, slightly reducing its orbital speed. And as a bonus, one-of-a-kind close-up photographs of the surface of Dimorphos were relayed back to Johns Hopkins mission control.
Dimorphos is Part of a Larger Asteroid System
The asteroid pair of Dimorphos and Didymos is about 7 million miles away from Earth and is viewed through a worldwide network of telescopes monitoring their progress. This observation network determined that the pair is not alone but rather part of a larger asteroid system. Over the coming weeks, the pair and their greater system will be observed to determine if the changed orbit is permanent and whether there is any other change in the asteroid's behavior after the DART impact.
"Beyond the truly exciting success of the technology demonstration, capabilities based on DART could one day be used to change the course of an asteroid to protect our planet and preserve life on Earth as we know it," said APL Director Ralph Semmel. Planetary defense unifies the globe and fosters a sense of cooperation among each country's space programs. Johns Hopkins APL manages the DART mission for NASA's Planetary Defense Coordination Office and serves as mission control for all DART-related projects. The European Space Agency's Hera Project plans to conduct a follow-up observation of the asteroid pair in four years to determine the long-term success of DART's mission.
Blight facts for kids
Blight refers to the way plants wither when infected. It is a rapid and complete chlorosis, browning, then death of plant tissues such as leaves, branches, twigs, or flowers. Various diseases which cause the symptom are known as blights. Several notable examples are:
- Late blight of potato, caused by the water mould Phytophthora infestans, the disease which led to the Irish potato famine
- Southern corn leaf blight, caused by the fungus Cochliobolus heterostrophus, caused a severe loss of corn in the United States in 1970.
- Chestnut blight, caused by the fungus Cryphonectria parasitica, has nearly completely eradicated mature American chestnuts in North America.
- Bacterial leaf blight of rice is caused by the bacterium Xanthomonas oryzae.
- Early blight of potato and tomato, caused by species of the common fungal genus Alternaria
- Leaf blight of grasses. On leaf tissue, symptoms of blight are the initial appearance of lesions which rapidly engulf surrounding tissue.
Blight Facts for Kids. Kiddle Encyclopedia. |
What denotes learned and shared beliefs and behaviours?
background of a person in terms of family or nationality = Descent
the fact or state of belonging to a social group that has a common national or cultural tradition = Ethnicity
the attitudes and behaviour characteristic of a particular social group = Culture
What does great perspective mean?
In drawing, perspective gives your drawing the appearance of depth or distance. If we say someone “has perspective,” we mean she has a sensible outlook on life.
What does it mean to give perspective?
To “put something in perspective” means to compare with something similar to give a clearer, more accurate idea.
What are the three types of perspective view?
But there are actually three types of perspective you should know about. Those are atmospheric, color, and linear. Most great artworks will show all three of these types of perspective.
What kind of word is perspective?
noun. a way of regarding situations, facts, etc, and judging their relative importance. the proper or accurate point of view or the ability to see it; objectivity: try to get some perspective on your troubles.
What is an example of perspective?
Perspective is the way that one looks at something. It is also an art technique that changes the distance or depth of an object on paper. An example of perspective is a farmer’s opinion about a lack of rain. An example of perspective is a painting where the railroad tracks appear to be curving into the distance.
Which type of perspective is the most realistic?
A perspective drawing offers the most realistic three-dimensional view of all the pictorial methods, because it portrays the object in a manner that is most similar to how the human eye perceives the visual world. A horizontal line represents the horizon.
What is an example of a point of view?
The point of view in a story refers to the position of the narrator in relation to the story. For example, if the narrator is a participant in the story, it is more likely that the point of view would be first person, as the narrator is witnessing and interacting with the events and other characters firsthand.
Which is the best definition of point of view in fiction?
Point of view (POV) is what the character or narrator telling the story can see (his or her perspective). Many stories have the protagonist telling the story, while in others, the narrator may be another character or an outside viewer, a narrator who is not in the story at all.
What words are related to perspective?
Synonyms of perspective
- eye view
- vantage point
When to use perspective in point of view?
You can use perspective in all points of view to help define your narrator’s attitude and personality. The character’s perspective affects how he feels about certain experiences or other characters. In the landscape of your novel (as in real life), everyone’s perspective should be different.
Why is it important to have a good perspective on life?
Having a good perspective on life gives you an advantage. First of all, you are a lot more open to seeing from other people’s perspective, thus making it easy for you to create meaningful relationships. It also gives you a lot more reasons to be grateful and happy.
What are the different types of perspective drawings?
Of the many types of perspective drawings, the most common categorizations of artificial perspective are one-, two- and three-point. The names of these categories refer to the number of vanishing points in the perspective drawing.
What do you mean by different perspectives on life?
Life perspective is the way people see life, including the way they approach life and all there is in their personal experience. In this life, few things are absolutely right or wrong. What we usually have are two different perspectives on one thing.
What does it mean to have a perspective on life?
Successful people have a certain perspective on life. It is this perspective or point of view that helps them reach the pinnacle of success. There are many moments to learn from in your daily life. Everything that happens around you is an opportunity. Your perspective on life will only teach you how to maximize the return from this opportunity.
Which is the best definition of perspective taking?
Perspective-taking is about being able to understand a situation from the point of view of another person. The nice thing about this skill is in how it allows us to better explore a situation that happened in the past — or it can support you in making an upcoming decision.
What is the difference between perspective and point of view?
Perspective can express a different approach to a well-known event or issue, and provides an opportunity for readers to see things in a new way. Perspective can be strengthened by the author’s choice for the narrator’s point of view, but the two are separate literary concepts.
Is the point of view of one person right or wrong?
A perspective is not right or wrong by default. It just is what it is: the point of view of a single person based on their life experiences and values, among other things. We each have one; sometimes we share it with others, and sometimes we do not. |
Circulating catecholamines, epinephrine and norepinephrine, originate from two sources. Epinephrine is released by the adrenal medulla upon activation of preganglionic sympathetic nerves innervating this tissue. This activation occurs during times of stress (e.g., exercise, heart failure, hemorrhage, emotional stress or excitement, pain). Norepinephrine is also released by the adrenal medulla (approximately 20% of its total catecholamine release is norepinephrine); however, the primary source of circulating norepinephrine is spillover from sympathetic nerves innervating blood vessels. Normally, most of the norepinephrine released by sympathetic nerves is taken back up by the nerves (a small fraction is also taken up by extra-neuronal tissues) where it is metabolized. The small amount of norepinephrine that escapes this reuptake and metabolism diffuses into the blood and circulates throughout the body. At times of high sympathetic nerve activation, the amount of norepinephrine entering the blood increases dramatically.
There is also a specific adrenal medullary disorder (chromaffin cell tumor; pheochromocytoma) that causes very high circulating levels of catecholamines. This can lead to a hypertensive crisis.
Circulating Epinephrine Causes:
- Increased heart rate and inotropy (β1-adrenoceptor mediated).
- Vasoconstriction of systemic arteries and veins via postjunctional α1- and α2-adrenoceptors.
- Vasodilation in muscle and liver vasculatures at low concentrations (β2-adrenoceptor); vasoconstriction at high concentrations (α-adrenoceptor mediated).
- The overall cardiovascular response to low-to-moderate circulating concentrations of epinephrine is increased cardiac output and a redistribution of the cardiac output to muscular and hepatic circulations, with only a small change in mean arterial pressure. Although cardiac output is increased, arterial pressure does not change much because the systemic vascular resistance falls due to the activation of vascular β2-adrenoceptors. At high plasma concentrations, epinephrine increases arterial pressure (not shown in figure) because of binding to α-adrenoceptors on blood vessels, which offsets the β2-adrenoceptor mediated vasodilation.
The effects of low, medium and high plasma concentrations of epinephrine on systemic vascular resistance are shown in the bar graph. At low plasma levels, epinephrine preferentially binds to high affinity vascular β2-adrenoceptors and causes vasodilation, which results in a fall in systemic vascular resistance. As the concentration of epinephrine increases, lower affinity α-adrenoceptors begin to bind epinephrine, which partially offsets the β2-adrenoceptor-mediated vasodilatory effects of epinephrine. At high circulating concentrations, more α-adrenoceptors are bound to epinephrine and the balance of vasodilatory and vasoconstrictor actions of epinephrine shifts to net vasoconstriction (increased systemic vascular resistance).
Circulating Norepinephrine Causes:
- Increased heart rate (although only transient) and increased inotropy (β1-adrenoceptor mediated) are the direct effects of norepinephrine on the heart.
- Vasoconstriction occurs in most systemic arteries and veins (postjunctional α1- and α2-adrenoceptors).
- The overall cardiovascular response is increased cardiac output and systemic vascular resistance, which results in an elevation in arterial blood pressure. Heart rate, although initially directly stimulated by norepinephrine, decreases because of baroreflex activation, causing vagal-mediated slowing of the heart in response to the elevation in arterial pressure.
Pharmacologic Blocking of the Actions of Circulating Catecholamines
Because catecholamines act on the heart and blood vessels through alpha and beta adrenoceptors, the cardiovascular actions of catecholamines can be blocked by treatment with alpha-blockers and beta-blockers. Blocking either the alpha or the beta adrenoceptor alone alters the response to the catecholamine because the other adrenoceptor will still bind the catecholamine. For example, if a moderate dose of epinephrine is administered in the presence of alpha-adrenoceptor blockade, vascular β2-adrenoceptor activation unopposed by vascular α-adrenoceptor activation will cause a large hypotensive response because of systemic vasodilation, despite the cardiac stimulation that occurs through β1-adrenoceptor activation.
Tensile Testing of Steel
Samples of steel are subjected to a wide variety of mechanical tests to measure their strength, elastic constants, and other material properties, as well as their performance under a variety of actual use conditions and environments. The tensile test is one of them. Other tests are the hardness test, impact test, fatigue test, and fracture test. These mechanical tests are used to measure how a sample of steel withstands an applied mechanical force. The results of such tests are used for two primary purposes, namely (i) engineering design (e.g. failure theories based on strength, or deflections based on elastic constants and component geometry), and (ii) quality control, either by the producer of steel to verify the process or by the end user to confirm the material specifications.
The uniaxial tensile test is known as a basic and universal engineering test to obtain material parameters such as ultimate tensile strength (UTS), yield strength (YS), % elongation, % reduction in area and Young’s modulus. Tensile testing is done for many reasons. The results of tensile tests are used in selecting materials for engineering applications. Tensile properties are often included in material specifications to ensure quality. Tensile properties are also normally measured during development of new materials and processes, so that different materials and processes can be compared. Also, tensile properties are generally used to predict the behaviour of a material under forms of loading other than uniaxial tension.
Safely withstanding the expected maximum load without permanent deformation (or to stay within the specified deflection) is a basic requirement for a steel product. The ‘resistance’ against the load is a function of the cross-section and the mechanical properties (or in other words the ‘strength’) of the steel material. Tensile testing is done to determine the mechanical properties of the yield strength, tensile strength, and elongation.
It is known from basic principles that a tensile stress tends to pull a member apart, a compressive stress tends to crush or collapse a body, a shear stress tends to cleave a structural member, and a bending stress tends to deflect a member. The allowable torsional stress which the steel material can tolerate is measured by shear strength, and the allowable bending stress which the steel material can tolerate is based on the tensile properties. This is because bending puts the outer fibers of steel member in tension.
Elastic and plastic deformation
A straight piece of steel wire or strip, rigidly held at one end and bent by a small load to a few degrees, normally ‘springs back’ to its original shape when the load is released. By placing a double load at the end of the steel sample, the deflection is then twice as large, but the sample still returns to its original shape when the load is taken off. In other words, the sample is loaded within its ‘elastic’ range.
After increasing the load and the deflection to a certain limit, the sample no longer returns to its original shape upon the removal of the load. At that load, the sample remains ‘permanently’ deformed since the stresses in the steel material exceeded the yield strength limit. Similar occurrences can be observed with springs. The linear relation between load and deflection is utilized in the ‘spring balance’ scale but the load is always kept safely within the elastic range of the spring. If a spring is stretched over its elastic range (over yield limit), then it will not spring back to its original shape. This permanent deformation is known as plastic deformation.
The response of a steel material to the three major forms of stress, namely (i) tension, (ii) compression, and (iii) shear, can be measured on a universal testing machine. This machine can pull axially on a test sample (tensile load) or push on a test sample to measure response to compression loading. Shear tests are run by loading a pin in a special fixture.
Tensile test sample (Fig 1) has enlarged ends or shoulders for gripping. The ‘dog-bone’ shape ensures that the sample breaks in the centre and not in the grip area. The important part of the sample is the gauge section. The cross-sectional area of the gauge section is reduced relative to that of the remainder of the sample so that deformation and failure is localized in this region. The gauge length is the region over which measurements are made and is centered within the reduced section. The distances between the ends of the gauge section and the shoulders are to be large enough so that the larger ends do not constrain deformation within the gauge section, and the gauge length is to be great relative to its diameter. Otherwise, the stress state is more complex than simple tension. The sample size and shape is to conform to a national or international standard.
Fig 1 Typical tensile test sample
There are several ways of gripping the sample. These are (i) the end may be screwed into a threaded grip, (ii) it may be pinned, (iii) butt ends may be used, or (iv) the grip section may be held between wedges. There can be other methods also. The most important aspect in the selection of a gripping method is to ensure that the sample can be held at the maximum load without slippage or failure in the grip section. Bending is required to be minimized.
Tensile testing is normally carried out in the universal testing machines (UTM) (Fig 2). These machines test materials in tension, compression, or bending. Their primary function is to create the stress-strain diagram. Universal testing machines are either electro-mechanical or hydraulic. The principal difference is the method by which the load is applied. Electro-mechanical testing machines are based on a variable-speed electric motor, a gear reduction system, and one, two, or four screws which move the crosshead up or down. This motion loads the sample in tension or compression. Crosshead speeds can be changed by changing the speed of the motor. A micro-processor-based closed-loop servo system can be implemented to accurately control the speed of the crosshead. Hydraulic testing machines are based on either a single or dual-acting piston which moves the crosshead up or down. However, most static hydraulic testing machines have a single acting piston or ram. In a manually operated machine, the operator adjusts the orifice of a pressure-compensated needle valve to control the rate of loading. In a closed-loop hydraulic servo system, the needle valve is replaced by an electrically operated servo valve for precise control.
Fig 2 Universal testing machines
Universal testing machine applies a tensile load when one end of the test sample is attached to the movable crosshead with the other end fixed to a stationary member. The crosshead is then driven in such a manner as to pull the sample apart. Compressive loading is achieved by driving the crosshead against short stubby cylinders placed on the stationary machine plate. Attachments are used to hold various shaped specimens, but tensile sample is usually made in a ‘dog-bone’ shape.
The sample is placed in the testing machine and a force F is applied. A mechanical or electrical device – strain gauge or extensometer is used to measure the amount that the sample stretches between the gauge marks when the force is applied until the specimen fails. The stretch, both elastic (recoverable) and plastic (permanent), is converted into strain by division of the change in length (extension or elongation) by the original length. Using the original cross-sectional area of the sample, the load F is converted into stress, and an engineering stress-strain diagram is obtained.
Engineering stress, or nominal stress, S, is defined as S = F/A0 where F is the tensile force and A0 is the initial cross-sectional area of the gauge section. Engineering strain, or nominal strain, e, is defined as e = (L-L0)/L0 where L0 is the initial gauge length (original distance between the gauge marks), and L is the distance between the gauge marks after force F is applied. When force-elongation data are converted to engineering stress and strain, a stress-strain diagram that is identical in shape to the force-elongation diagram (Fig 3) can be plotted. The advantage of dealing with stress versus strain rather than load versus elongation is that the stress-strain curve is essentially independent of the dimensions of the sample.
Fig 3 Typical force-elongation diagram and stress-strain diagram
During elastic deformation, the engineering stress-strain relationship follows Hooke’s law and the slope of the curve indicates the Young’s modulus (E). E = S/e
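As a minimal numerical sketch of these definitions (the sample dimensions and the force-elongation reading below are made-up values, not data from a real test):

```python
# Sketch: engineering stress, engineering strain and Young's modulus from one
# force-elongation reading in the elastic range. All numbers are assumptions.
import math

d0 = 12.5e-3                    # initial gauge diameter, m
L0 = 50.0e-3                    # initial gauge length, m
A0 = math.pi * d0 ** 2 / 4      # initial cross-sectional area, m^2

F = 24_000.0                    # applied tensile force, N
L = 50.05e-3                    # gauge length under that force, m

S = F / A0                      # engineering stress S = F/A0, Pa
e = (L - L0) / L0               # engineering strain e = (L - L0)/L0
E = S / e                       # Young's modulus from Hooke's law, Pa

print(f"S = {S/1e6:.0f} MPa, e = {e:.4f}, E = {E/1e9:.0f} GPa")
```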
Young’s modulus is of importance where deflection of materials is critical for the required engineering applications. For example, deflection in structural beams is considered to be crucial for the design of engineering components or structures such as bridges, buildings, ships, etc. Applications such as tennis rackets and golf clubs also require specific values of spring constants or Young’s modulus.
Yield strength and yield point
By considering the stress-strain diagram beyond the elastic portion, if the tensile loading continues, yielding occurs at the beginning of plastic deformation. The yield stress, Ys, can be obtained by dividing the load at yielding (Fy) by the original cross-sectional area (A0) of the sample (Ys=Fy/A0).
The yield point can be observed directly from the load-elongation diagram of BCC metals such as iron and steel, especially low carbon steels. The yield point elongation phenomenon shows the upper yield point followed by a sudden reduction in the stress or load till reaching the lower yield point. At the yield point elongation, the sample continues to elongate without a significant change in the stress level. Load increment is then followed with increasing strain. This yield point occurrence is associated with a small amount of interstitial or substitutional atoms. This is the case, for example, in low-carbon steels, which have small atoms of carbon and nitrogen present as impurities. When the dislocations are pinned by these solute atoms, the stress is raised in order to overcome the breakaway stress required to pull the dislocation line away from the solute atoms. This dislocation pinning is related to the upper yield point. If the dislocation line is freed from the solute atoms, the stress required to move the dislocations then suddenly drops, which is associated with the lower yield point. Furthermore, it has been found that the degree of the yield point effect is affected by the amounts of the solute atoms and is also influenced by the interaction energy between the solute atoms and the dislocations.
Material having a FCC crystal structure does not show the definite yield point seen in BCC structure materials, but shows a smooth engineering stress-strain diagram. The yield strength therefore has to be calculated from the load at 0.2 % strain divided by the original cross-sectional area [Y(0.2 %) = F(0.2 %)/A0].
The determination of the yield strength at 0.2 % offset or 0.2 % strain can be carried out by drawing a straight line parallel to the slope of the stress-strain curve in the linear section, having an intersection on the x-axis at a strain equal to 0.002. An interception between the 0.2 % offset line and the stress-strain diagram represents the yield strength at 0.2 % offset or 0.2 % strain.
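The same offset construction can be carried out numerically. The sketch below uses a synthetic elastic-plastic stress-strain curve (an assumed modulus and power-law hardening) purely to illustrate finding the intersection of the curve with a line of the same elastic slope shifted by 0.002 strain; it is an illustration, not a standard test procedure.

```python
# Sketch of the 0.2% offset yield strength: find where the stress-strain curve
# meets a line of elastic slope E shifted 0.002 along the strain axis.
# The synthetic curve (E, K, n below) is an illustrative assumption.
E = 200e9            # assumed elastic modulus, Pa
K, n = 900e6, 0.2    # assumed strength coefficient and hardening exponent

def stress(strain):
    # crude elastic / power-law plastic transition, for illustration only
    return min(E * strain, K * strain ** n)

strains = [i * 1e-5 for i in range(1, 3001)]            # strains up to 3%
yield_strength = next(stress(s) for s in strains
                      if stress(s) <= E * (s - 0.002))  # first point at or below the offset line
print(f"0.2% offset yield strength ~ {yield_strength/1e6:.0f} MPa")
```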
The yield strength of soft materials exhibiting no linear portion to their stress-strain diagram, such as soft gray cast iron, can be defined as the stress at the corresponding total strain. The yield strength, which indicates the onset of plastic deformation, is considered to be vital for engineering structural or component designs where safety factors are normally used. Safety factors are based on several considerations which include (i) the accuracy of the applied loads used in the structure or components, (ii) estimation of deterioration, and (iii) the consequences of failed structures (loss of life, financial or economic loss, etc.). Generally, buildings require a safety factor of 2, which is rather low since the load calculation has been well understood. Automobiles have a safety factor of 2, while pressure vessels utilize safety factors of 3 to 4.
Ultimate tensile strength
Beyond yielding, continuous loading leads to an increase in the stress required to permanently deform the sample as shown in the engineering stress-strain diagram. At this stage, the sample is strain hardened or work hardened. The degree of strain hardening depends on the nature of the deformed materials, crystal structure and chemical composition, which affects the dislocation motion. FCC structure materials having a high number of operating slip systems can easily slip and create a high density of dislocations. Tangling of these dislocations requires higher stress to uniformly and plastically deform the sample, hence resulting in strain hardening.
If the load is continuously applied, the stress-strain diagram reaches the maximum point, which is the ultimate tensile strength (UTS). At this point, the sample can withstand the highest stress before necking takes place. This can be observed by a local reduction in the cross-sectional area of the sample, generally observed in the centre of the gauge length. (UTS = Fmax/A0)
After necking, plastic deformation is not uniform and the stress decreases accordingly until fracture. The fracture strength (FS) can be calculated from the load at fracture divided by the original cross-sectional area. (FS=Ffracture/A0)
Tensile ductility of the sample is represented as % elongation or % reduction in area. % elongation = [(L-L0)/L0]*100 and % reduction in area = [(A0-A)/A0]*100, where A0 is the original cross-sectional area of the sample and A is the cross-sectional area at fracture.
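Continuing the same kind of sketch with assumed readings from the end of a test (the maximum force, force at fracture, and final gauge length and area below are illustrative numbers):

```python
# Sketch: ultimate tensile strength, fracture strength and ductility measures.
A0 = 1.227e-4          # original cross-sectional area, m^2 (12.5 mm diameter round bar)
A_final = 0.80e-4      # cross-sectional area at the fracture, m^2
F_max = 55_000.0       # maximum force reached in the test, N
F_fracture = 42_000.0  # force at fracture, N
L0, L_final = 50.0, 62.5   # gauge length before and after the test, mm

UTS = F_max / A0                               # ultimate tensile strength, Pa
FS = F_fracture / A0                           # fracture strength, Pa
elongation = (L_final - L0) / L0 * 100         # % elongation
reduction_in_area = (A0 - A_final) / A0 * 100  # % reduction in area

print(f"UTS = {UTS/1e6:.0f} MPa, FS = {FS/1e6:.0f} MPa, "
      f"elongation = {elongation:.0f}%, reduction in area = {reduction_in_area:.0f}%")
```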
The fracture strain of the specimen can be obtained by drawing a straight line starting at the fracture point of the stress-strain diagram parallel to the slope in the linear relation. The interception of the parallel line at the x axis indicates the fracture strain of the sample being tested.
Work hardening exponent
The material behaviour beyond the elastic region, where the stress-strain relationship is no longer linear (uniform plastic deformation), can be described by a power law expression (Ts = K*e^n) where Ts is the true stress, e is the true strain, n is the strain-hardening exponent, and K is the strength coefficient. The strain-hardening exponent values, n, of most metals range between 0.1 and 0.5, and n can be estimated from the slope of a log true stress-log true strain plot up to the maximum load: (log Ts = n log e + log K), or Y = m X + C
Here n is the slope (m) and K is the value of the true stress at a true strain equal to unity. A high value of the strain-hardening exponent indicates an ability of a metal to be readily plastically deformed under applied stresses. This also corresponds to a large area under the stress-strain diagram up to the maximum load. This power law expression has been modified variably according to materials of interest, especially for steels and stainless steels.
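A log-log straight-line fit is easy to sketch in code. The true stress-true strain points below are synthetic, generated from assumed values of K and n, so the fit simply recovers those values; with real test data the points would scatter about the fitted line.

```python
# Sketch: estimate the strain-hardening exponent n and strength coefficient K
# from true stress-true strain data via a least-squares fit in log-log space.
import math

K_true, n_true = 700e6, 0.22                     # assumed material parameters
strains = [0.02, 0.05, 0.10, 0.15, 0.20]         # true plastic strains up to maximum load
stresses = [K_true * e ** n_true for e in strains]

xs = [math.log10(e) for e in strains]            # log true strain
ys = [math.log10(s) for s in stresses]           # log true stress

N = len(xs)
x_bar, y_bar = sum(xs) / N, sum(ys) / N
n = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)            # slope of the log-log line
K = 10 ** (y_bar - n * x_bar)                    # intercept gives log10(K)

print(f"n ~ {n:.2f}, K ~ {K/1e6:.0f} MPa")
```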
Modulus of Resilience
Apart from the tensile parameters mentioned previously, analysis of the area under the stress-strain diagram can give informative material behaviour and properties. The area under the stress-strain diagram in the elastic region represents the stored elastic energy, or resilience. The latter is the ability of the material to store elastic energy, which is measured as the modulus of resilience (UR).
The significance of this parameter is seen in the application of mechanical springs, which requires a high yield stress and a low Young’s modulus. For example, high-carbon spring steel has a modulus of resilience of 23 kg/sq cm, while that of medium-carbon steel is only 2.4 kg/sq cm.
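The modulus of resilience is normally taken as the area under the elastic portion of the stress-strain diagram, UR = Ys^2/(2E). A short sketch with assumed values for a spring steel (the specific numbers are illustrative, not the ones quoted above):

```python
# Sketch: modulus of resilience as the elastic strain energy per unit volume.
Ys = 1_100e6   # assumed yield strength of a high-carbon spring steel, Pa
E = 200e9      # assumed Young's modulus, Pa

U_R = Ys ** 2 / (2 * E)   # J/m^3
print(f"modulus of resilience ~ {U_R/1e6:.1f} MJ/m^3")
```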
Tensile toughness (UT) can be considered as the area under the entire stress-strain diagram, which indicates the ability of the material to absorb energy in the plastic region. In other words, tensile toughness is the ability of the material to withstand external applied forces without experiencing failure. Engineering applications which require high tensile toughness include, for example, gears, chains and crane hooks.
In English lessons we explore the importance of Language and develop analytical skills. We develop a love of reading through the exploration of texts throughout time. In an ever-changing society we explore the changes in language and how it can be used to articulate our thoughts and beliefs in an appropriate way for the intended audience. Through doing this we develop and build our oracy skills and use this to explore how we communicate in different settings and contexts. Our study of both fiction and non-fiction texts exposes our students to a breadth of literature; both canonical and modern.
We ensure students have the opportunity to study a wide variety of texts and genres to develop reading, writing and speaking and listening skills. Throughout our curriculum we aim to develop a love of reading and learning and closely mirror the skills needed for their GCSE English Language and Literature exam.
Students are encouraged to, and should, read a book of their choice twenty minutes a day at home. Suggested reading lists can be found on the school homepage under ‘The Day and Library Noticeboard.’
We encourage students to use online resources to help them revise, practise their skills and develop their knowledge. ‘BBC Bitesize’ has a range of activities and resources to support learning at KS3.
Years 10 & 11 – English Language and English Literature
Our GCSE English curriculum is a rigorous and exciting course that enables students to excel and develop skills and knowledge attained at Key Stage Three.
Students will need to purchase a copy of Macbeth, The Sign of the Four and An Inspector Calls for annotation and to guide their learning and revision. Texts must be brought into school when the relevant text is being studied.
We suggest students continue reading texts beyond those studied for the exam including texts similar to the genres and time periods we study. To support revision students are also encouraged to purchase revision guides for each unit we study which will guide their independent revision at home.
There are several online resources that can be used alongside revision and learning, including: Seneca, GCSE Pod, BBC Bitesize and Sparknotes
GCSE English Language enables students of all abilities to develop the skills they need to read, understand and analyse a wide range of different texts covering the 19th, 20th and 21st century time periods as well as to write clearly, coherently and accurately using a range of vocabulary and sentence structures.
Each paper has a distinct identity to better support high quality provision and engaging teaching and learning.
- Paper 1 – Explorations in Creative Reading and Writing – looks at how writers use narrative and descriptive techniques to engage the interest of readers
- Paper 2 – Writers’ Viewpoints and Perspectives – looks at how different writers present a similar topic over time
GCSE English Literature encourages students to develop knowledge and skills in reading, writing and critical thinking. Through literature, students have a chance to develop culturally and acquire knowledge of the best that has been thought and written. Studying GCSE English Literature should encourage students to read widely for pleasure, and as a preparation for studying literature at a higher level. The texts we study for GCSE English Literature are:
- The Sign of the Four
- An Inspector Calls
- Power and Conflict Poetry
A Level English Literature
Our A Level English Literature curriculum is empowering, rigorous, demanding and inclusive.
English Literature enables students to:
- read widely and independently set texts and others that they have selected for themselves
- engage critically and creatively with a substantial body of texts and ways of responding to them
- develop and effectively apply their knowledge of literary analysis and evaluation
- explore the contexts of the texts they are reading and others’ interpretations of them
- undertake independent and sustained studies to deepen their appreciation and understanding of English literature, including its changing traditions
We follow the Edexcel A Level specification. This assesses students through three exam papers and a coursework-assessed essay.
- 1 Shakespeare play (Othello)
- 1 other play (A Streetcar Named Desire)
Leads to final exam Component 1 Drama – 2 hours 15 mins – 2 essays / ‘open book’ (this means you are given a clean copy of the text in the exam)
- Hard Times (pre 1900)
- Atonement (chosen theme: Childhood)
Leads to final exam Component 2 Prose – 1 hour 15 mins exam – 1 essay
- Coursework: 1 extended comparative essay on two texts
- Poetry: Poems of the Decade – An Anthology of Poetry comparison with an unseen poem
- Medieval Poetic Drama: The Wife of Bath – Geoffrey Chaucer
Leads to a final exam Component 3 Poetry – 2 hours 15 mins – 2 essays / ‘open book’
Component 4 – Course Work Essay – 1 extended comparative essay (2500-3000 words) on two texts (teacher/student choice) linked by theme, movement, author or period
Further Information: Edexcel A Level English Literature |
Virtual organization is a term used to describe a collection of people or organizations who are sharing resources without physically moving into the same space. Typically, virtual is used to describe computer-generated environments, where people with a common purpose or issue can meet, unrestricted by geography. This type of organization has grown substantially in the past 10 years as the cost of technology decreases, providing opportunities to remove these barriers at a lower cost.
All virtual organizations have the same requirement: the ability to communicate directly with a large group of people. In virtual organizations, there is often no single leader, but a collective group of people overseeing the operations of the organization. This structure is most common in organizations united by a common goal.
In order to support an Internet-based collaborative environment, there are specific hardware and software requirements. Typically, a powerful web server and large hard disk capacity are required to provide the processing power and memory needed for the virtual software program. These programs use the Internet to provide access to shared folders, provide communication tools, and manage document versioning. The resources required vary, depending on the size of the group and the type of documents being used.
Virtual computing removes the need for this type of expensive infrastructure. Instead, the processing power of a large number of smaller computers connected via a grid is used. The falling cost of personal computers, together with their increasing processing power and speed, has made this concept much more prevalent.
In research-focused virtual organizations, there are different requirements for accessing profiles and data sharing. These types of organizations typically require more computer processing power and data storage. Researchers need to access large amounts of data, reports from colleagues, and lengthy dialog and discussion. Most research institutes set up a virtual organization for their researchers to encourage collaboration and teamwork.
A virtual organization can be made up of multiple workstations within a specific area, such as a corporation or educational institute. Alternatively, they can be located all over the world. The excess processing power is channeled from the computers on the network to a larger supercomputer.
These projects are typically focused on the types of supercomputers found in universities or research institutions. The computers are processing massive, complex calculations. The additional processing speed provided through the virtual organizations keeps costs down, while providing the opportunity to expand the power at any time. |
Physiology: The Circulatory System Part 3 (Revisited)
The heart nodes - These special bundles of unique tissue are simply astounding. The first is embedded in the wall of the right atrium, and is called the sinoatrial node (S.A. node); it is the "pacemaker" of the heart. (Artificial pacemakers derive their name from it.) The other bundle is in the lower part of the septum, and is the atrioventricular node (A.V. node). The bundle of His is also in this area, and is called the coordinator.
Heartbeat - The heartbeat originates in the S.A. node, and immediately the entire auricle contracts. Then the A.V. node picks up the message and relays the signal to the muscle fibers of the ventricle, which contracts. Heart block is a condition which occurs when disease of the bundle of His interrupts communication between the auricle and the ventricles. A proper balance of calcium, sodium, and potassium is needed for the heart to function properly. If you eat only good food, you should have the proper nutritional balance.
Heart rate - The heart rate is controlled by the medulla in the brain and sensory nerve impulses to the heart. It is speeded up by emotional reaction, fever, or physical exertion. It is decreased by increased blood pressure, a lack of oxygen, or excess carbon dioxide.
Heart valves - The openings into the aorta and the pulmonary artery are fitted with flaps, called valves. There are similar valves at the openings between the auricles and ventricles. The valve between the right auricle and right ventricle is called the tricuspid valve. The valve between the left auricle and left ventricle is called the mitral, or bicuspid valve. After the blood is squeezed into the ventricles, these valves close, and those over the artery outlets open. This keeps the blood from flowing back when the heart relaxes.
Blood vessels - These are the tubes which carry blood throughout the body. Those that carry blood away from the heart are the arteries; those that take it to the heart are the veins. (One oddity in following this rule is the pulmonary circulation, which sends blood to the lungs and back to the heart. Arteries carry deoxygenated blood away from the heart and veins carry fresh, oxygenated blood to the heart. Elsewhere in the body, only arteries carry the fresh blood.)
Coronary arteries - These are especially important arteries because they are the ones supplying food and oxygen to the heart muscle itself. If these arteries become narrowed, or if a blood clot blocks part of them, the heart is not supplied with blood and serious trouble may occur. We call this coronary heart disease; it is the cause of most of the deaths from heart trouble after middle age.
Brain arteries - If a blockage occurs in a brain artery, a stroke can occur.
Capillaries - Blood travels out from the heart through the aorta, and then through smaller arteries, and finally through very small arteries (arterioles) into the capillaries (the smallest blood vessels). From there, the blood travels into the smallest veins (venules), then into veins, and finally to superior and inferior venae cava and back into the heart.
Special systems - Pulmonary circulation: takes blood from the heart to the lungs and back again - so the blood can pick up oxygen. Another important one is the portal system. All the veins from the stomach, intestines, spleen, and pancreas empty into the portal vein, which leads to the liver - so the blood can pick up food. Blood leaves the liver through the hepatic vein and goes to the heart.
Lymphatic system - Your body is filled with lymphatic vessels. They do not carry blood; they carry excess fluids and waste and empty into the thoracic duct and right lymphatic duct, from where the lymph goes into veins at the base of the neck. Body muscles keep the lymph flowing; valves in the vessels keep it from flowing backward.
Lymph nodes - There are lymph nodes at several places in your body; these filter out harmful substances such as bacteria and cancer cells. They also manufacture lymphocytes (one type of white blood cell). There are six places where these nodes are found: under the arms, on the right and left side of the groin, and the right and left side of the neck. If they become infected, the disease is called adenitis.
Spleen - The spleen lies directly below the diaphragm, above the left kidney, and behind the stomach. It destroys old red blood cells, makes one type of white blood cell, and filters toxins from the blood. It also produces antibodies, which give us immunity to certain diseases. It stores iron, bile pigments, and antibodies. It becomes enlarged in anemia, malaria, and leukemia.
Tonsils - The three tonsils in the pharyngeal wall at the back of your throat strain out toxins and make lymphocytes. It is significant that the tonsils guard the entrance to the gastro-intestinal tract and the appendix guards the outlet of the small intestine. |
Study: Music, Manipulatives Are Fun, But Basics Best for Struggling Math Students
First grade teachers facing a class full of students struggling with math were more likely to turn to music, movement, and manipulative toys to get their frustrated kids engaged, finds a new study in the journal Educational Evaluation and Policy Analysis. Yet researchers found these techniques did not help—and in some cases hindered—learning for the students having the most difficulty.
Pennsylvania State University researchers Paul L. Morgan and Steve Maczuga and George Farkas of the University of California, Irvine analyzed the use of different types of instruction by 1st grade mathematics teachers, including teacher-directed instruction, such as explicit explanations and practice drills; student-centered, such as small-group projects and open problem-solving; and strategies intended to ground math in real life, such as manipulative toys, calculators, music, and movement activities.
The researchers tracked the use of different strategies by 1st-grade teachers with both regular students and those with math difficulties, defined as students who had performed in the bottom 15 percent on their kindergarten math achievement tests. Educators taught an array of math skills, including ordering and sorting objects into groups, writing numbers up to 100, naming shapes, copying patterns, and single-digit addition and subtraction, among others. The researchers found that students of average math ability learned equally well using teacher-directed or student-centered instructional approaches, but struggling students improved only when teachers used directed instruction, and particularly extra practice with basic concepts.
"In general education there's been more focus on approaches that are student-centered: peers and small groups, cooperative learning activities," Morgan said. "What can happen with that for kids with learning difficulties is there are barriers that can interfere with their ability to take advantage of those learning activities. Children with learning disabilities tend to benefit from instruction that is explicit and teacher directed, guided and modeled and also has lots of opportunities for practice."
Moreover, neither struggling nor regularly achieving math students improved when using manipulatives, calculators, music, or movement strategies; these activities actually decreased student learning in some cases. Ironically, a regression analysis of the classes found teachers became more likely to use these strategies in classes with higher concentrations of students with math difficulties.
"If I was going to offer a conjecture, what might be happening, as the teacher gets more students in the classroom that are struggling, they might be using the manipulatives or music to work around the students' difficulties and make the math seem more real ... but our results don't indicate that those practices will lead to more student achievement gains," Morgan said.
Older students may still benefit from manipulatives and other math activities, and the findings don't argue for filling students' days with "drill and kill," Morgan said, but early elementary school—when students are learning basic math concepts for the first time and when few students have been officially identified as having math learning disabilities—can create a perfect environment for students to founder in math.
"I don't want kids to be bored, I don't want them to look at math as drudgery, I don't want my kids to go to school and do worksheets all day. I want them to be engaged by what they are being taught," Morgan said, "but I think sometimes we touch on concepts too briefly; we only give kids two or three opportunities to practice it."
Video Source: American Educational Research Association |
THE TERM Middle East came to modern use after World War II, and was applied to the lands around the eastern end of the MEDITERRANEAN SEA including TURKEY and GREECE, together with IRAN and, more recently, the greater part of North Africa. The old Middle East began at the river valleys of the Tigris and the Euphrates rivers or at the western borders of Iran and extended to Burma (MYANMAR) and Ceylon (SRI LANKA). Some geographers today say the Middle East region stretches 6,000 mi (9,656 km) eastward from the dry Atlantic shores of MAURITANIA to the high mountain core of AFGHANISTAN. Other geographers begin the Middle East with EGYPT. It includes numerous separate political states, most of which were created by colonial government cartographers in the 19th and 20th centuries. A good deal of the Middle East is too dry or rugged to sustain human life, and only 5 to 10 percent of the entire region is cultivated. As a result, a stark contrast exists between core areas of dense human settlement where water is plentiful, and the empty wastes of surrounding deserts and mountains.
Four regions can be identified in this vast, diverse, and distinct area: the Northern Highlands, a 3,000-mi- (4,828-km-) long zone of plateaus and mountains in Turkey, IRAN, and AFGHANISTAN, stretching from the Mediterranean Sea to Central Asia; the Arabian Peninsula, a million-square-mi- (2.6-million-square-km-) desert quadrilateral jutting southward into the INDIAN OCEAN and flanked on either side by the Persian (or Arabian) Gulf and the RED SEA; the Central Middle East, the rich valleys of the NILE in Egypt and of the Tigris and Euphrates in IRAQ, and the intervening fertile crescent countries of ISRAEL, JORDAN, LEBANON, and SYRIA; and North Africa, a band of watered mountains and plains set between the SAHARA DESERT and the Mediterranean Sea. Known by the Arabs as al Maghrib al Aqsa (“Land of the Setting Sun”), it includes the nations of TUNISIA, ALGERIA, MOROCCO, and LIBYA.
THE NORTHERN HIGHLANDS
The Taurus and ZAGROS MOUNTAINS of southern Turkey and western Iran form a physical and cultural divide between Arabic-speaking peoples to the south and the plateau-dwelling Central Asian people of Turkey, Iran, and Afghanistan. Around one-third of the people of the Middle East and North Africa live in the Northern Highlands, on the Anatolian and Iranian plateaus and the flanks of the HINDU KUSH range of Afghanistan.
Turkey is a large, rectangular peninsula plateau bounded on three sides by water: the BLACK SEA to the north, the AEGEAN SEA to the west, and the MEDITERRANEAN SEA to the south. The Turkish coast is rainy, densely settled, and intensively cultivated. About 40 percent of the population is clustered onto the narrow, wet Black Sea coast, on the lowlands around the Sea of Marmara in both European and Asiatic Turkey, along the shores of the Aegean, and on the fertile Adana Plain in the southeast.
By contrast, the center of Turkey—the dry, flat Anatolian Plateau—is sparsely settled. Cut off by the Pontic Mountains to the north and the Taurus to the south, the dead heart of the plateau is too dry to sustain dense agricultural settlement; in the east, the rugged terrain of the Armenian highlands limits agricultural development.
Although the environmental base of Iranian society is similar to Turkey's, the topography is more dramatic, and contrasts are more sharply drawn. High mountains ring the dry Iranian Plateau on all sides except the east.
In the west and south, the folded ranges of the Zagros Mountains curve southeastward for a distance of 1,400 mi (2,253 km) from the northwest Turkish frontier to the deserts of Sistan in the southeast. In the north, the steep volcano-studded ELBURZ range sharply divides the wet CASPIAN SEA coast from the dry Iranian interior. The encircled plateau covers over half the area of Iran, with large uninhabited stretches of salt waste in the Dashti Kavir to the north and of sand desert in the Dashti Lut to the south.
Along the Caspian Littoral, which receives up to 60 in (152 cm) of rainfall per year, the intensive cultivation of rice, tea, tobacco, and citrus fruits supports a dense rural population. Similarly, in AZERBAIJAN in the northwest and in the fertile valleys of the northern Zagros, rainfall is sufficient to support grain cultivation without irrigation. But in the rest of Iran, rainfall is inadequate and crops essentially require irrigation. Oasis settlement based on wells, springs, or underground horizontal water channels called qanats is common.
In the small remote country of Afghanistan, the easternmost nation of the Northern Highlands, the processes of population growth, agricultural expansion, and urbanization have barely begun. The country's center is occupied by the ranges of the Hindu Kush; a rugged, snowbound highland that is one of the least penetrable regions in the world. Deserts to the east and south are cut by two major rivers, the Hari Rud and the Helmand, both originating in the central mountains of Afghanistan and disappearing into the deserts of eastern Iran. In the north, the AMU DARYA (Oxus) flows into the Russian STEPPE. Settlements are found in scattered alluvial pockets on the perimeter of the Hindu Kush, where there is level land and reliable water supplies.
Over 70 percent of Afghanistan's population lives in the scattered villages as cultivators of wheat and barley and herders of small flocks of sheep and goats. An additional 15 percent are nomadic tribesmen, whose political power is still felt in this traditional society. In central and eastern Afghanistan, Pathans are dominant; in the north, the Turkish-speaking Uzbeks and Persian-speaking Tadzhiks predominate.
THE ARABIAN PENINSULA
The Arabian Peninsula may be described as a great plateau sloping gently eastward from a mountain range running along the whole length of its west side. It is a huge desert fault bounded on three sides by water and on the fourth by the deserts of Jordan and Iraq. In the west, the rugged slopes of the Hijaz and the highlands of YEMEN form the topographical spine of this platform. The remainder tilts eastward to the flat coasts of the Persian Gulf, rising only in the extreme southeast to the height of the Jabal al Akhdar (Green Mountains) of Oman.
Although the peninsula is the largest in the world and nearly four times the size of the state of TEXAS, it supports a population of less than 18 million people. The majority of these people live in two nations: SAUDI ARABIA (25.79 million, excluding 5.57 million non-nationals), which governs nine-tenths of this region, and Yemen (20 million), whose highlands trap sufficient moisture to support cultivation without irrigation. Smaller states on the eastern and southern perimeters of the peninsula include KUWAIT, QATAR, the UNITED ARAB EMIRATES, and OMAN.
Although the TROPIC OF CANCER bisects the Arabian Peninsula, passing south of Medina, Riyadh, and Muscat, most of the southern half of the peninsula is too high or isolated to be characteristically tropical, the main exception being the lowland coast. The principal historical determinant of human settlement in the peninsula has been the availability of water. Overall, the region receives less than 3 in (7.62 cm) of rainfall each year, with a bit more in the north. Only the highlands of Yemen and Oman at the southern corners of the peninsula receive more than 10 in (25.4 cm). Daily temperatures commonly rise above 100 degrees F (37.7 degree C).
Fully one-third of the central plateau is covered by a sea of shifting sand dunes and much of the rest lies under boulder-strewn rock pavement. In the southern desert, the forbidding Rub al Khali (Empty Quarter), wind-worked dunes 500 to 1,000 ft (152 to 305 m) high, cover an area of 250,000 square mi (402,336 square km) to form a bleak, rainless no-man's-land between Saudi Arabia and the states of the southern coast. Arching northward from the Rub al Khali, a 15- mi- (24-km-) wide river of sand, the Ad Dahna, connects the southern sands with the desert of Nafud 800 mi (1,287 km) to the north.
Given this harsh environment, the Arabian landscape has no permanent lakes or streams. Vegetation is sparse. Settlement is confined to oases, and only 1 percent of the region is under cultivation. Vast stretches of the peninsula are completely uninhabited, devoid of human presence except for the occasional passage of Bedouin camel herders.
Within this difficult physical setting, two-thirds of the people of the Arabian Peninsula are rural agriculturalists, seminomads, and nomads. Their lives focus on oasis settlements where wells and springs provide water for the cultivation of dates—the staple food of Arabia—and the maintenance of herds of camels, sheep, and goats. The distribution of these oases is determined by a network of dry river valleys (wadis) carved into the surface of the plateau in earlier and wetter geological periods. These wadis provide the most favored locations for commercial and agricultural settlement and the most convenient routes for caravan traffic.
In the western highlands, where population density is above average, the largest urban centers are Mecca, Medina, and Taif. In central Arabia, underground water percolates down from these uplands and surfaces through artesian (gravity-flow) wells, creating a string of agricultural oases both north and south of the Saudi Arabian capital of Riyadh. Farther east, on the shores of the Persian Gulf, this same water emerges as freshwater springs in Kuwait, eastern Saudi Arabia, and the United Arab Emirates. Similarly, in South Yemen, springs in the Wadi Hadramaut, a gash several hundred miles long running parallel to the coast of the Gulf of Aden, provide the basis for oasis settlement. Only in Yemen and Oman is this dryland oasis pattern broken.
Today, oil resources in Saudi Arabia, Kuwait, the island state of BAHRAIN, the United Arab Emirates, and, to a lesser extent, Qatar and Oman are providing the capital for rapid economic growth, leaving the southern states of Yemen and South Yemen in isolated poverty. In the gulf, cities like Dhahran, Dhammam, Ras Tannurah, Kuwait City, Manamah (the capital city of Bahrain), and emirate centers like Abu Dhabi, Dubai, and Sharjah are creations of the oil industry. Less directly but equally dramatically, the traditional centers of Riyadh (population: 3.5 million), Mecca (550,000) and Jeddah (2.8 million) are growing rapidly as farmers and Bedouins seek salaried employment in expanding urban industries.
THE CENTRAL MIDDLE EAST
The Central Middle East is flanked to the west and east by two great river valleys, the Nile of Egypt and the Tigris-Euphrates system in Iraq. Between these riverine states, the small nations of Israel, Lebanon, and Syria line the shores of the eastern Mediterranean; Jordan is LANDLOCKED.
The environments of these nations are as complex as their histories. Four millennia of human civilization have left an essentially denuded landscape—barren hills, steppes overgrazed by sheep and goats, and rivers choked by the erosional silt of human activity. In Egypt, the Nile Valley, a narrow trough 2 to 10 mi (3.2 to 16 km) wide, cuts northward across the dry plateau of northeastern Africa to the Mediterranean Sea. East of the Nile Valley, the heavily dissected Eastern Highlands border the coast of the Red Sea, continuing past the Gulf of Suez into the SINAI PENINSULA. Barren and dry, these highlands are occupied by nomadic herders.
The sources of the Nile River lie 2,000 mi (3,218 km) south of the Mediterranean in the wet plains of the Sudan and the equatorial highlands of East Africa. The Nile's largest tributary, the White Nile, originates in Lake Albert and Lake VICTORIA and flows sluggishly through a vast swamp, the Sudd, in southern Sudan, before entering Egypt. The other major tributaries, the Blue Nile and the Atbara River, flow out of the Ethiopian highlands, draining the heavy summer rains of this region northward toward the desert. This summer rainfall pours into the Nile system, causing the river to flood regularly from August to December, and raises its level some 21 ft (6.4 m).
For centuries, this flood formed the basis of Egyptian agriculture. Specially prepared earth basins were constructed along the banks of the Nile to trap and hold the floodwaters, providing Egyptian farmers with enough water to irrigate one and, in some areas, two crops of wheat and barley each year.
In the 20th century, British and Egyptian engineers constructed a series of barrages and dams on the Nile to hold and store the floodwater year-round. This transformation of Nile agriculture was largely completed in 1970 with the construction of the Aswan High Dam, a massive earthen barrier more than 2 mi (3.2 km) across, 0.5 mile (0.8 km) wide at the base, and 120 ft (37 m) high.
Behind it, Lake Nasser, the dammed Nile River, stretches 300 mi (483 km) southward to the Sudanese border. The dam added about one-third to the cultivated area of Egypt. Egyptian population has expanded from an estimated 10 million at the turn of the century to its current 76 million. During this same period, the urban population expanded eight times, and Cairo (7.76 million) and Alexandria (3.9 million) emerged as the two largest cities on the African continent.
In contrast to Egypt, the central problem in the other great river valley of the Middle East, the Tigris-Euphrates of Iraq, is not overpopulation but environmental management. Both these rivers rise in the mountains of eastern Turkey and course southward for more than 1,000 mi (1,609 km) before merging in the marshes of the Shatt al Arab. North of Baghdad, both rivers run swiftly in clearly defined channels. To the south, they meander across the flat alluvial plains of Mesopotamia. East of the valley, the Zagros Mountains rise as a steep rock wall separating Iraq from Iran.
To the west, a rocky desert plain occupied by nomadic herders stretches the borders of Saudi Arabia, Jordan, and Syria. Only in the northeast, in the Kurdish hills, does rainfall sustain non-irrigated cultivation. Elsewhere in Iraq, human existence depends on the water of the Tigris and Euphrates rivers. But unlike Egypt, where every available acre of farmland is intensively utilized, Iraq's agricultural resources are largely wasted.
The Tigris and Euphrates rivers have always proved less manageable than the Nile. Fed by melting snows in the Turkish highlands, spring floods 8 to 10 ft (244 to 305 cm) above normal pour down the river channels to Baghdad and then spread out over the vast plains of Mesopotamia, where the land is so flat that elevations change only 4 to 5 ft (122 to 152 cm) over distances of 50 mi (80 km).
In the Fertile Crescent countries of Syria, Lebanon, Jordan, and Israel, which lie along the eastern coast of the Mediterranean between the river valleys of Egypt and Iraq, environmental patterns are extremely complex. The coastal plain, narrow in the north but widening southward, is backed by dissected, rugged highlands that reach elevations of more than 10,000 ft (3,048 m) in Lebanon. Throughout their length, these uplands have been denuded of forests, notably the famous cedars of Lebanon, by centuries of overgrazing and cutting for economic gain. Winter rainfall is plentiful in the north but less in the south. In Syria, the highlands capture this ample rainfall in streams that support life in the oasis cities of Aleppo, Homs, Hama, and Damascus. In Lebanon and northern Israel, runoff from the highlands sustains important commercial and agricultural areas along the coast. Farther south, the highlands flatten out into the rainless wastes of the NEGEV DESERT.
Inland, a narrow belt of shallow, flat-bottomed intermontane valleys separates these western highlands from the dry uplands plateaus and mountains of the east. Between Israel and Jordan south of the Galilee, the Jordan River flows along one of these valleys 150 mi (241 km) southward to the Dead Sea, 1300 ft (396 m) below sea level. Farther north, a similar trough in Lebanon, the Beqaa Valley, is drained by the Litani and Orontes rivers. East of these lowlands, rugged highlands grade inland to the grass-covered steppes of Syria in the north and the dry stone pavement of the Jordanian desert in the south. In this varied terrain, the distribution of population is extremely uneven.
NORTH AFRICA
North Africa is the largest subregion of the modern Middle East, covering an area larger than the United States, but inhabited by only 50 million people grouped together on the southern shore of the Mediterranean Sea between water and sand. Much as Egypt is truly the gift of the Nile, cultural North Africa is the result of a physiographic event, the ATLAS MOUNTAINS, which separate the Sahara Desert from the Mediterranean Sea and Europe beyond.
Most of the territory of the modern nations of the Maghreb (Morocco, Algeria, Tunisia, and Libya) consists of Saharan wastelands that stretch 3,000 mi (4,828 km) across Africa from the Atlantic Ocean to the Red Sea. One-seventh of this area is sand dunes; the remainder is rock-strewn plains and plateaus. Aridity in the Sahara is not interrupted even by the jutting peaks of the AHAGGAR and TIBESTI massifs at 6,000 ft (1,829 m), which receive as little as 5 in (12.7 cm) of rainfall per year. Here, as well as in other scattered Saharan oasis environments, an estimated 3 million people wrest a living from what is Earth's most difficult cultural environment outside the polar regions. Only in the north, along the mountain-backed coast of the Mediterranean, is rainfall sufficient to sustain substantial concentrations of people. The Atlas Mountains form a diagonal barrier isolating the nomads of the deserts and steppes of the south and east from sedentary agriculturalists in the Mediterranean north.
The Spanish Sahara was the only North African territory that was totally desert. Before the recent discovery of extensive phosphate deposits in the north and the possibility of rich iron ore lodes, this territory was of little interest to anyone, but both Morocco and Mauritania have claimed sovereignty over it. With the departure of SPAIN in 1976, the territory has been divided between its two larger neighbors.
In Morocco, the Atlas Mountains form a succession of four mountain ranges dominating the landscape. In the north, the Rif, which is not geologically associated with the Atlas, is a concave arc of mountains rising steeply along the Mediterranean, reaching elevations of 7,000 ft (2,134 m) and orienting Morocco toward the Atlantic. In the center of Morocco, the limestone plateaus and volcanic craters of the Middle Atlas reach elevations of 10,000 ft (3,048 m); contact with Algeria is channeled through the Taza corridor, and this mountain barrier has isolated the Moroccan Sahara until modern times. Farther south, the snow-capped peaks of the High Atlas attain elevations of 13,400 ft (4,084 m) and separate the watered north from life in the Sahara. Finally, the Anti-Atlas, the lowest and southernmost of the Moroccan ranges, forms topographic barriers to the western Sahara. Historically, the Atlas range has provided a refuge for the original Berber-speaking inhabitants of Morocco, whose descendants today make up half the nation's population.
Throughout mountainous Morocco, Berber populations maintain an agrarian tradition of transhumance of goats and sheep wedded to cultivation of barley, centered around compact mountain fortresses. Density of settlement in the mountains depends on rainfall, which in general diminishes from west to east, and on altitude, which prohibits year-round settlement because of cold winter temperatures in areas much over 6,000 ft (1,829 m). Most of Morocco's 32.2 million people are Arabic-speaking farmers who till the fertile lowlands plains and plateau stretching from the Atlantic to the foothills of the Atlas.
Farther east along the Atlas complex, the primary environmental contrast in Algeria is once again the distinction between the fertile, well-watered, and densely settled coast and mountain ranges of the north and the dry reaches of the Sahara Desert in the interior. The Algerian coast is backed by the Tell Atlas, a string of massifs 3,000 to 7,000 ft (914 to 2,133 m) in elevation, which have formed an important historical refuge for Berber-speaking tribes. In the interior, a parallel mountain range, the Saharan Atlas, reaches comparable elevations in a progressively drier climate. Between these two ranges in western Algeria, the high plateaus of the Shatts, a series of flat interior basins, form an important grazing area. In eastern Algeria, the two ranges of the Atlas merge to form the rugged Aures Mountains. South of these ranges, Algeria extends 900 mi (1,448 km) into the heart of the Sahara.
In Tunisia and Libya, the topography is less dramatic than in Morocco and Algeria, but the same environmental sequence from northern coast to southern desert prevails. Two-thirds of Tunisia's 9.9 million people live in the humid northeast and in the eastern extension of the Aures Mountains. The central highlands and interior steppes, marginally important in the past, have become sites of innovative development projects. Tunisia remains an example of self-motivated and successful state planning. In Libya, the population (5.6 million) is concentrated on the coast in the hilly back country of Tripolitania and Cyrenaica.
Since time immemorial, man has put forward different theories to explain the origin of life and of man himself on earth, and how different kinds of life originated and changed over time. Written records of the views held by the people of each period, and of the subsequent changes in evolutionary thought, have come down to us since the time of the ancient Greeks. Three distinct periods can be identified based on this transformation in evolutionary thought, namely, Ancient Greek theories, Pre-modern theories and Modern theories, as given below.
Thales (639-544 BC): He lived in the Ionian colonies on the coast of Asia Minor. He left no writings but was a profound thinker who educated himself by travelling and studying in Egypt. He was a merchant, engineer, mathematician, astronomer and philosopher, and even coined the word “philosopher”. He believed that the whole universe was governed by natural laws, observed that water was the most abundant material on earth, and held that all life originated directly from water. According to him, the earth was a solid disc floating on seawater.
Anaximander (611-547 BC): He was a pupil of Thales and had broad views on the origin of nature. He believed that intermixing of earth, water, air and fire produced life. Every life has an ethereal substance, “apeiron” that is endless, unlimited and does not age or decay. He thought that stars were holes in the sky through which fire flowed. A total eclipse was due to closing of such a hole. He was the first one to describe earth as a sphere but believed that it was the centre of the universe. He designed the first map of the world and published the results of his researches in a poem, On Nature. He believed that life originated from primordial fluid spontaneously and that the first animals were fish, their descendants later reached land. Later they modified their mode of living according to the environment. Man was supposed to have come from lower species, perhaps an aquatic one. Man burst out from this fish-like animal as a butterfly comes out of the pupa.
Xenophanes (576-490 BC): He was a pupil of the mathematician Pythagoras. He identified fossils of aquatic animals on mountain lands and declared that the mountains were once covered with water. He correctly interpreted fossils as remains of past animals, but later workers, including Aristotle, did not understand him.
Empedocles (504-433 BC): According to his theory, the four basic elements, namely, fire, water, earth and air originated from a combination of four fundamental qualities, viz. hot, dry, cold and wet and then acted upon by love and hate. All animals and plants originated from different combinations of these elements. Man had these elements in more refined and evenly mixed state. He boasted of his supernatural powers to cure and heal, bring rain, change the direction of wind etc. He is known to have rid a town of malaria by arranging drainage of swampy districts. His writings bear an impression of his belief in the survival of fittest. The suggestion that earth once had greater creation power than existed during his era also reflected his evolutionary belief.
Democritus (470-380 BC): He was deeply interested in travelling and gaining knowledge and travelled extensively in Egypt. He wrote about 70 books. His greatest contribution is his atomic theory, according to which the universe is made of atoms, which move in space, and all physical changes are due to the union and separation of atoms. He attributed the spread of diseases to particles of atoms coming from other planets. He dissected animals, including human beings, and described the complexity of organs and their relationships among lower and higher organisms. He considered the brain to be the organ of thought and the centre of all activity, and in this he was more accurate than Aristotle, who thought the heart to be the centre of vital activity. He was the first Greek to attempt a classification of animals, on the basis of the presence or absence of red blood. He claimed that the spider's web was produced inside the spider's body, whereas Aristotle thought it was cast-off skin. He also explained sterility as due to a presumed contraction of the uterus; the correct explanation followed much later, after chromosomal studies with the microscope.
Aristotle (384-322 BC): He was one of the greatest biologists who ever lived and was interested in all the knowledge available in his day. He wrote 146 books, in which he included everything known at that time; many have since been lost. He learned medicine from his father, went to Plato's school in Athens at the age of 17 and lived there for 20 years. He studied marine biology on the island of Lesbos. He demonstrated the scientific method of observation of things in nature, but did not undertake experimentation. His observations and interpretations of marine animals were remarkably accurate. Without any instruments he made observations on small objects like the eggs and embryos of fish and molluscs. He traced the development of Octopus and Sepia from egg to adult stage. He also studied adaptations in sea animals and migration in fishes. He classified animals into Vertebrates (red blooded) and Invertebrates (without red blood). He classified dolphins and whales with the mammals, contrary to the existing belief. At the age of 42, he was called by King Philip to teach his son Alexander, who later became more interested in military pursuits. When Alexander died in 323 BC, revolutionary forces came to power, and Aristotle, having been close to him, had to flee Athens and live in exile. He died a year later at the age of 62. His thoughts dominated for over 1000 years. The essence of his theory was that living things are guided by a force of intelligence not found in non-living things, and that imperfect forms are gradually transformed into perfect forms. His classification gives a chain from lower to higher animals, which he published as the Scala Naturae (Ladder of Nature). He also believed in the spontaneous generation theory of the origin of life.
Epicurus (341-270 BC): He tried to combat superstitious beliefs. He thought the world as a natural phenomenon, governed by natural causes. He opposed Aristotelian argument of the grand design and purposefulness of events. He agreed with the atomic theory of Democritus but still believed in the spontaneous generation theory.
DECLINE OF SCIENCE
In the time to follow, Aristotle’s thoughts overshadowed every other thinking. Lucretius (99-55 BC) rejected much of Aristotle’s work and published his thoughts in his book, De Rerum Natura (On the nature of things). Pliny (23-79 AD) compiled a store of information in his “Natural History”, which was a good source of information. Galen (130-200 AD) was a physician who made investigations in anatomy and physiology.
Revival of classical learning in Romans and Greeks took place in 14-16th centuries. Leonardo da Vinci (1452-1519) realised that fossil shells on Apennine Mountains indicated that they must have been covered by sea in the past. He did not develop the idea further. Harvey (1578-1657) discovered circulation of blood. Little was accomplished in biology between Aristotle’s times to about 1500 AD. Aristotle’s thoughts prevailed during this period. By the end of second century AD, Greek science was virtually dead.
Francis Bacon (1561-1626): He was a philosopher who called upon men to seek knowledge by experiment and reasoning. He visualised a great plan of the origin and governing of the earth and its inhabitants. He was an effective writer and a popular lecturer who gathered many facts but did not organise and coordinate them. He started a movement for free discussion and established academies for it. He largely revived Aristotle's ideas and said that variations caused evolution. He recognised that transitional forms connect two groups, but gave wrong examples, suggesting that the flying fish is a transitional form between fish and birds and that bats connect birds and mammals.
Francesco Redi (1621-1697): He refuted the spontaneous generation theory by experimentation with cooked fish and dead snakes, showing that flies did not appear in closed jars. He presented his findings in a book, “Experiments on the Generation of Insects.”
De Maupertuis (1698-1759): He developed a theory of evolution based on mutation, selection and geographical isolation, but was not understood because he was far ahead of his time in scientific thinking. He also developed a theory of heredity based on animal breeding and human heredity.
De Maillet (1656-1738): He studied fossils and pointed similarities between the aquatic and terrestrial forms and proposed that terrestrial animals evolved from aquatic ones but gave wrong examples of mermaids and flying fish. He entangled facts with myths and was afraid of church. He therefore attributed his unorthodox views to an Indian philosopher, “Telliamed”, which was his own name spelled backwards.
Carolus Linnaeus (1707-1778): A Swedish naturalist, he made extensive field trips (about 4,600 miles) and collected a great many specimens of plants and animals. While in Holland he became a physician but continued to study nature and wrote 7 books from there. Later he became Professor of Botany at Uppsala. He promoted binomial nomenclature, in which he skilfully used his predecessors' works, and published Systema Naturae in 1751. His classification reflected evolutionary relationships, although he believed in the special creation theory.
Buffon (1707-1788): His actual name was Georges-Louis Leclerc. He was a French naturalist, politician and writer who discussed evolution at length and published 44 volumes on natural history. He believed in the inheritance of acquired characters and proposed that the factors influencing evolutionary change are: the direct effect of the environment, migration, geographical isolation, overcrowding and the struggle for existence. He emphasised that new forms of life gradually develop from old ones. But he compromised with the special creation theory and indulged in wild speculations: for example, the pig was described as a compound of other animals, the ass as a degenerated horse, and the ape as a degenerated man. He was a prolific writer and interpreter but not an original investigator, and his writings showed contradictions.
Erasmus Darwin (1731-1802): A British philosopher and grandfather of Charles Darwin, he gave a clearer statement on the inheritance of acquired characters than Buffon. He wrote a book, Zoonomia, in 1794, in which he proposed that life originated from a primordial protoplasmic mass and reckoned the age of the earth in millions of years. He concluded that species descended from common ancestors and that the struggle for existence causes evolution. Charles Darwin drew more from his grandfather than is usually supposed: for every volume written by Charles, there is a corresponding chapter in Erasmus's book. However, Charles found Zoonomia more speculative than scientific.
Louis Pasteur (1822-1895): He was more a biochemist than a biologist and did some work on fermentation. He rejected the spontaneous generation theory and disproved it by experimentation. He found that of 18 flasks left open outdoors, 16 developed organisms, while of 19 flasks kept closed in a lecture hall, only 4 developed organisms. Different results were obtained when air was introduced from different sources, which made him aware of the presence of spores in the air. He developed the theory of Biogenesis.
Spallanzani, Lazaro (1729-1799): An Italian physiologist, he also disproved the theory of spontaneous generation by experiments. He used different media and boiled them for one hour and sealed when hot, thus excluding all organisms. He was criticised by Needham for over-boiling the medium and destroying the vegetative force which was necessary for the growth of life. He was a proponent of Biogenesis, which means Life begets life.
There are lots of different ways you can get sick from food-borne toxins, but a few illnesses are so serious that they can cause long-term problems, or even death. The ugliest, most dangerous bugs that cause food-borne illness are the following.
Campylobacter is a bacterium that causes fever, diarrhea, and abdominal cramps. This pathogen is spread through eating raw or undercooked chicken, or other foods that have been cross-contaminated with raw chicken.
Salmonella is also a bacterium, and it causes a disease called salmonellosis, which causes fever, diarrhea, and abdominal cramps. In severe cases, it can also cause life-threatening infection, particularly among people with compromised immune systems.
E. coli. These bacteria are found in cow feces, and illness in people results from ingesting food or water contaminated with microscopic amounts of dung. Disgusting, right? This infection typically causes severe and bloody diarrhea and painful abdominal cramps, but without much fever. In a small percentage of people, the initial symptoms are followed by a severe disease called hemolytic uremic syndrome, which can cause anemia, profuse bleeding, and kidney failure.
Norovirus, or Norwalk-like virus, is extremely common. It causes most cases of what people call “stomach flu.” The virus causes acute gastrointestinal illness, with severe vomiting, that resolves within two days. This virus spreads easily from person to person, which is why outbreaks tend to occur in “closed population” settings, such as schools and child care facilities, nursing homes, dormitories, and cruise ships.
After you eat a microbe-infested meal, there is a delay, or “incubation period,” before the symptoms begin. This delay ranges from hours to days, depending on the organism and on how many microbes were swallowed. Many organisms cause similar symptoms, most commonly diarrhea, abdominal cramps, and nausea. There is so much overlap that it is rarely possible to say which microbe caused your illness unless laboratory tests are done, or unless the illness is part of a recognized outbreak.
Now that I’ve scared you, what should you do? First, make sure you get all the medical care you need. Second, report the incident to public health authorities so the illness can be investigated. Go to www.foodsafety.gov for state contact information. Third, collect and save evidence. That might mean packaging from the food you believe made you ill, receipts from a restaurant meal, names and contact information of witnesses, or anything else that supports your claim. Also, write down information about your illness while your memory is fresh. Include the date your symptoms began, what your symptoms felt like, days taken off work, and any other way the illness affected your life. Finally, if your injuries are severe or long-lasting, contact HensonFuerst Attorneys. Our experienced food-borne illness lawyers are here to listen, and to help determine what rights and obligations you have.
This look of wonder and amazement in our students is what we strive for in each of our classes.
A question was posed on the LEGO Foundation Ideas Conference Forum: How can we measure learning through play?
The background of the question was:
“Measurements of learning is currently driven by a discussion of standardized tests in schools, which comes with a risk of teaching to the test, and not focusing on the soft skills with children’s motivation for learning and lifelong outcomes. Who are measurements actually for? And how can we provide new ways of measuring the critical soft skills, like collaboration, creativity and critical thinking, at the same time as making them relevant for the everyday situations in the home, and practices in the classroom?”
We at Play-Well have been asking, answering and re-asking this question for the past 18 years. After teaching over 500,000 kids, we have come to a few truths about play:
- You can get kids excited about learning through play.
- Children absorb and remember information when they are fully engaged, especially through play.
- While it cannot replace scholastic practice in the classroom, play can be used to successfully explain and exemplify complicated academic concepts.
We know that play is powerful. We see it every day in our classes and hear it from parents. One parent relayed a story to us about a kindergartener who, after one of our classes, went to the playground, slid down the slide, and said to himself, “wow, this slide has a lot of friction!”
So, how do we measure this knowledge? That is where it gets tricky. Not only because the goals of each class are unique and difficult to pin down, but also because it forces us to confront an uncomfortable truth: we shouldn’t measure play.
Do we undermine the self-direction of play through measurement?
The entire premise of play is that it is self-directed and open-ended. Kids might play to explore the world, solve problems, or express who they are. These are only a few of the reasons children engage in play, and each has its merits in creating a well-rounded child.
Our friends in Montessori education have been strong advocates for self-direction in education: children may choose an activity and work on that activity until they feel they have completed it. They are done when they believe they have completed it, without the interference of their teacher. They understand that self-direction empowers children and that confidence helps create life-long learners.
Any attempt to measure organic play limits the open-ended nature of it; and in doing so, we may unintentionally saddle children with adult expectations or ideas of what “success” is.
How effective would measurements on play be?
We could come up with metrics to measure some aspects of play, but we must ask ourselves: would the data we received be worth the potential harm created in collecting it?
Our most satisfying times in the classroom, as instructors, are when our students have epiphany moments. We know that we can create the environment for those opportunities to happen, but it is out of our control as to when they happen.
Let’s revisit the kindergartener experiencing friction on the slide. We had reviewed that term numerous times through playing that week, so at some point it resonated with him. How do you measure that? When did the connection between our class and the slide occur? Did he have the epiphany on the slide or somewhere else? Does it matter? Furthermore, in peppering the child with questions attempting to solve the mystery, we ruin the positive association that child had with learning about friction. In the journey from qualitative play to quantitative measurement, the true magic of play will be lost in translation. It will fall short of what is possible if we just allow kids to explore the world for themselves, at their own pace, and trust that learning will happen.
A teacher in the U.S. recently wrote a resignation letter, stating that she needed to step down because she believed her profession no longer existed. With so much of her job being about standardized tests and constant measurement, her ability to actually be a teacher, allowed to play and experiment to get her kids excited by learning, was gone. By forcing common standards of teaching in the U.S., the powers that be had stifled this teacher’s ability to do her job in a way that spoke to her children.
In the play setting, who is the better teacher? The adult or the child?
So, given all the risk, why would we evaluate play or use play as a measurement tool? We love our children and we recognize play as nourishment for young minds. We want to support that in any way possible and we want that support to be based in peer-reviewed study. This is where we hit the crux of the questions posed: who are measurements actually for?
In simplistic terms, measurements are for adults, and play is for kids.
If you were to ask a child at play, “are you having fun?” She would say, “yes.” If you asked her to articulate why she is having fun, you’ll probably hear, “I don’t know, it just is.” She might not fully understand why she does what she does, or what she is learning when she plays, but it is happening. Children submit their bodies, their minds and their spirits to whatever creative world they are traveling through when they play and they do so without judgment or expectation. You can see it in the way their limbs hang when they are being carried to bed after a long day of play: that child gave all of himself to his adventure today. The fullness with which children embrace and indulge in their experiences is something from which we adults can learn. So let us take an opportunity to embrace the process of play without analysis of the results. Can play be a valuable learning tool or method of measurement? Yes. How can we prove it? We shouldn’t bother trying. Or as a child would say, “it just is.”
What will save us? Perhaps Play.
Ken Robinson, in his lecture about schools killing creativity, explains how the musical Cats almost didn’t happen. The most successful musical of all time only happened because the creator was pulled out of a regular classroom and identified by a teacher as being a dancer, instead of someone who just couldn’t sit still in class. Ken explained that world-changing potential is sitting in our classrooms, but we need to allow kids to play if they are going to understand who they want to be. We as adults must exercise some restraint and allow children to experience that process uninhibited by our desire to understand it. We must treat play as sacred and do all that we can to keep it whole. This is how we can advocate for children and also for ourselves. Because the next great solution, the life-changing invention, the cure for cancer–these things won’t come from a mind that can merely think outside the box; they will come from a mind that thinks the box doesn’t exist.
Contributors To This Article: Erik Olson, Maddy Gabor, & Jeff Harry
A circuit with two independent and two dependent sources is solved by the superposition method. Independent sources are turned off one at a time and the contribution of the on source is calculated. Dependent sources should not be turned off.
A circuit with two voltage sources and two current sources is solved by the superposition method. The contribution of each source is calculated individually and the response is found by adding the contributions.
Turning off a source, when solving circuits by superposition, means setting its value equal to zero. A voltage source becomes a short circuit when turned off, while a current source is replaced by an open circuit. Dependent sources cannot be turned off.
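As a minimal numerical sketch of the procedure, the Python snippet below applies superposition to an assumed two-voltage-source circuit (Vs1 through R1 and Vs2 through R2 feeding a common node, with R3 to ground). The circuit topology and all component values are illustrative, not one of the circuits referred to above. Because it contains only independent sources, each source is shorted in turn; a dependent source, as noted above, would be left on.

```python
# Illustrative only: superposition on an assumed two-source resistive circuit.
#
#   Vs1 --R1--+--R2-- Vs2
#             |
#             R3
#             |
#            GND
#
# Each independent voltage source is turned off (replaced by a short circuit)
# in turn, its contribution to the node voltage is computed, and the
# contributions are added.  A direct nodal solution is included as a check.

def parallel(ra, rb):
    """Equivalent resistance of two resistors in parallel."""
    return ra * rb / (ra + rb)

def node_voltage_by_superposition(vs1, vs2, r1, r2, r3):
    # Contribution of Vs1 alone (Vs2 shorted): divider of R1 against R2 || R3.
    v_from_vs1 = vs1 * parallel(r2, r3) / (r1 + parallel(r2, r3))
    # Contribution of Vs2 alone (Vs1 shorted): divider of R2 against R1 || R3.
    v_from_vs2 = vs2 * parallel(r1, r3) / (r2 + parallel(r1, r3))
    return v_from_vs1 + v_from_vs2

def node_voltage_direct(vs1, vs2, r1, r2, r3):
    # Both sources on: (V - Vs1)/R1 + (V - Vs2)/R2 + V/R3 = 0, solved for V.
    return (vs1 / r1 + vs2 / r2) / (1 / r1 + 1 / r2 + 1 / r3)

if __name__ == "__main__":
    vs1, vs2 = 10.0, 5.0          # volts (made-up values)
    r1, r2, r3 = 1e3, 2e3, 3e3    # ohms (made-up values)
    print(node_voltage_by_superposition(vs1, vs2, r1, r2, r3))  # ~6.8182 V
    print(node_voltage_direct(vs1, vs2, r1, r2, r3))            # same value
```

Both functions return the same node voltage, which is the point of the method: the full response equals the sum of the responses to each independent source acting alone.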
While we tend to plan for the future based on current climate inputs and observations, we should also look ahead and take into consideration the dramatic turn of events that could result from positive feedback mechanisms.
A brief overview of feedback mechanisms
Feedback mechanisms are processes which are the direct consequences of other events. Basically, there are two types of feedbacks: positive feedbacks which amplify the on-going trends and negative feedbacks which soften them. As an illustration, two examples of simple feedback mechanisms are given below:
Positive feedback: The warmer the climate (e.g. heat waves), the more energy we consume (e.g. air conditioning), the more GHGs are emitted and therefore the warmer the climate will get. We tend to forget that human behaviour in response to a changing world is in fact a powerful feedback mechanism.
Negative feedback: The warmer the climate, the more water vapour the atmosphere holds at any given time, the more clouds there are with white surfaces that reflect incoming solar radiation, and the cooler the temperatures become. Note however that water vapour is also a strong greenhouse gas and therefore also has a positive feedback effect.
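To make the distinction concrete, here is a deliberately simplified numerical sketch (not a climate model): a quantity is nudged each step by a feedback gain proportional to its current value. The gain values and starting anomaly are arbitrary, chosen only to show a positive gain amplifying a trend and a negative gain damping it.

```python
# Toy illustration of feedback, not a climate model: the gains and the
# starting value below are arbitrary.

def run(steps, initial_value, feedback_gain):
    """Each step the value changes by feedback_gain times its current value.

    A positive gain reinforces the trend (positive feedback); a negative
    gain opposes it (negative feedback).
    """
    value = initial_value
    history = [value]
    for _ in range(steps):
        value += feedback_gain * value
        history.append(value)
    return history

if __name__ == "__main__":
    print("positive feedback:", [round(v, 3) for v in run(5, 0.5, +0.10)])
    print("negative feedback:", [round(v, 3) for v in run(5, 0.5, -0.10)])
```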
Natural equilibrium versus destabilisation
Over billions of years of Earth history, the planet has been subject to a series of dramatic events. In between such events, systems on Earth have reached equilibriums which are governed by closed loop systems and a balance between positive and negative feedback mechanisms.
The current trend however is for human activities to modify some of these natural equilibriums by increasing disorder in the Earth systems.
Many signs of upcoming changes from positive feedbacks are already highly noticeable and measurable (e.g. release of methane gas from the melting permafrost; sea ice melt exposing darker ocean surfaces in the Arctic…). All seem to indicate that positive feedback mechanisms are on the rise and will intensify throughout the century, largely overtaking negative feedback effects which would normally act as a buffer in the systems.
Most climate models (which shape policy decisions) are based on predictions which account for a range of inputs such as emissions scenarios (global amount of GHGs emitted); however few if any seriously consider how systems will evolve when feedback mechanisms are added into the equation.
The main reason is that systems are incredibly complex. Modelling some of these impacts is proving very difficult, especially certain aspects such as atmospheric water vapour that have both positive and negative feedback effects.
However, reasoning on logical deductions and how systems are likely to react should lead the way to anticipation and a more cautious approach. Some of these positive feedbacks really have the potential to shift the situation from currently alarming to dramatic and irreversible in a short time.
Citing David Suzuki: “We are playing Russian roulette with features of the planet’s atmosphere that will profoundly impact generations to come. How long are we willing to gamble?”
An unwanted but possibly unavoidable solution
If positive feedbacks become a factor as predicted by many climate scientists, our only option may end up being deliberate human negative feedback on a massive scale. We may be forced to attempt to counter warming trends through technological inputs, the most extreme of these actions being Earth Engineering.
Earth Engineering, which is the process of humans voluntarily modifying natural systems on a global scale, is however not without serious consequences and is often considered a last resort.
It may seem like science fiction, but ideas such as injecting sulphur compounds into the upper atmosphere; producing artificial fine clouds on a large scale; fertilising oceans to absorb carbon dioxide; deploying a reflective membrane in the Arctic; or even sending millions of small mirrors into space to divert the sun's rays have been proposed and seriously considered as ways to cool down the Earth.
The business case
However, in addition to being extremely risky, all of these engineering measures are also astronomically expensive and far more costly than mitigation measures such as cutting current GHG emissions and shifting to a low carbon economy.
It is clear from an economic point of view that the cost of dealing with the consequences of climate change will far exceed the cost of mitigating it. This argument has been made over and over again by leading economists (e.g. the Stern report).
Furthermore we should act fast because we are running out of time. Indeed, because GHGs remain in the atmosphere for extended periods of time, they have been accumulating. We are reaching the point of no return after which the current focus on reducing GHG emissions will no longer make much difference on the outcome (i.e. even the smallest effects will already be dramatic).
Once this point is passed, the next step will probably be to try to remove certain GHGs from the atmosphere on a large scale (which may simply take too long) followed by the final option of earth engineering.
The more we wait, the more extreme and outlandishly expensive the solutions become.
We are truly racing against time and need to intensify our efforts of cutting down GHG emissions through the deployment of renewable energy; shifting away from fossil fuels and intensifying research to come up with new technological and society solutions.
These actions are the ingredients we need to fuel a new economic revolution based on a low carbon economy. The threat of positive feedbacks should be seen as yet another argument to renew our efforts.
Sylvain Richer de Forges is head of sustainability at Siloso Beach Resort.
A group of astrophysicists has created an unprecedentedly detailed map of the polarization of infrared radiation in the vicinity of the black hole at the center of the Milky Way.
A group of astronomers led by Pat Roche of Oxford University and Chris Packham of the University of Texas at San Antonio measured the polarization of the infrared radiation emanating from the center of the Milky Way and plotted a map of the magnetic field at the center of the Galaxy with a record resolution, twice that of the previous map.
In the center of our Galaxy, as in most others, there is a supermassive black hole, SgrA*. It does not emit its own light, and therefore it is impossible to photograph it directly, but the matter in its surroundings radiates across all ranges of the spectrum. This radiation can be registered with powerful telescopes. This time, scientists used the Large Canary Telescope and measured the radiation coming from the center of the Galaxy in the infrared region of the electromagnetic spectrum.
Passing the cosmic radiation through a polarization filter, the scientists measured its polarization in the region of interest in the Galaxy. After correcting for clouds, which were almost absent over Hawaii during the seven days of observation, and for the cosmic dust that scatters the radiation on its way to Earth, the scientists obtained a polarization map of the IR radiation in the vicinity of the black hole at an unprecedented resolution.
On the map, two regions of high polarization are clearly visible, as are the magnetic field lines along which interstellar matter (hot gas and dust, the thin white threads in the image) is strung. The size of the region studied is about 1 x 1 light-year. Near the black hole, the bundles of filaments change direction.
The color indicates the intensity of infrared radiation. Credit: Oxford University / Royal Astronomical Society / UTSA
The scientists noted that even bright stars located in this region of the Galaxy, although they have their own strong magnetic fields, but have little effect on the polarization of the radiation around them. The nature of the source of the magnetic field that permeates the center of the Galaxy is not exactly known, but astronomers suggest that it is somehow connected with the gravitation of the black hole. |
What are the Greenhouse Gas emission reductions (GHG reduction) caused by solar energy?
A Greenhouse Gas (GHG) is a gas in the atmosphere that traps heat between itself and the earth's surface, and thus has a warming effect. The main contribution to warming on earth (a 2010 estimate) comes from water vapor and clouds (75%) and carbon dioxide (20%), with methane, ozone, and some other gases and aerosols contributing the remainder. The greenhouse effect of these gases was originally a blessing of nature, because without them the average atmospheric temperature would be about 15°C lower, which would make life on earth very difficult. At the same time, however, if the effect increases even slightly, the temperature will rise (global warming) and threaten human life. Unfortunately, with industrialization, a lot of fossil fuel is being burnt, which leads to an increase of greenhouse gases in the atmosphere.
Preserving the Status Quo
Generation of alternate energy (meaning alternate to fossil fuel), such as solar, wind, etc., does not directly involve generation of greenhouse gases and is environment friendly. However, to start with, some fossil fuel must have been burnt to produce the infrastructure for a given type of energy. The mass of greenhouse gases (GHG) produced to set up, say, a solar PV system, divided by the total energy output expected during the life cycle of the system, is called the GHG equivalent (GHGe) of a PV project. The GHGe of distributed solar PV using current technology has been estimated to be 46 g/KWh. In comparison, the GHGe of oil is 893 g/KWh, and of coal, 513-994 g/KWh.
If we use solar PV instead of oil to generate energy, for every KWh we avoid emission of (893-46) or 847g/KWh of GHG. That is a huge reduction in GHG emission (compared to oil energy). That is one of the prime reasons for the drive towards alternate energy of which solar energy is an important one. These emission figures are averages. Actual values depend on the actual implementation of a project. As technology improves, GHGe of solar energy will also reduce, increasing the value of emission reductions.
GHG equivalent of distributed solar thermal is estimated currently at 22g/KWh, which means an emission reduction of (893-22), or 871g/KWh compared to oil.
To quantify the emission reduction efforts of various governments and business entities, the emission reduction unit (ERU) has been defined: one ERU corresponds to the avoidance of one ton of carbon dioxide emissions (or its GHGe).
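As a rough sketch of the arithmetic, the snippet below uses the GHGe figures quoted above (46 g/KWh for distributed solar PV, 22 g/KWh for distributed solar thermal, 893 g/KWh for oil) to compute the avoided emissions per KWh and the resulting ERUs. The 14,000 KWh annual output is a made-up figure for illustration only.

```python
# Arithmetic sketch using the GHGe figures quoted in the text; the annual
# generation figure below is an assumption for illustration only.

GHGE_G_PER_KWH = {
    "oil": 893.0,
    "solar_pv": 46.0,
    "solar_thermal": 22.0,
}

GRAMS_PER_TON = 1_000_000.0  # 1 ERU = avoidance of one ton of CO2 (or its GHGe)

def avoided_g_per_kwh(replacement, displaced="oil"):
    """Emission reduction per KWh when `replacement` displaces `displaced`."""
    return GHGE_G_PER_KWH[displaced] - GHGE_G_PER_KWH[replacement]

def emission_reduction_units(annual_kwh, replacement, displaced="oil"):
    """ERUs (tons of CO2-equivalent avoided) for a given annual output."""
    return annual_kwh * avoided_g_per_kwh(replacement, displaced) / GRAMS_PER_TON

if __name__ == "__main__":
    print(avoided_g_per_kwh("solar_pv"))        # 847.0 g/KWh, as in the text
    print(avoided_g_per_kwh("solar_thermal"))   # 871.0 g/KWh
    # Hypothetical system generating 14,000 KWh per year in place of oil:
    print(round(emission_reduction_units(14_000, "solar_pv"), 2))  # ~11.86 ERUs
```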
What are the parts of speech?
- Nouns – They are names of a person, river, country, animal, thing, planet, a sea, a galaxy, a fruit, a weapon, a food, part of the body.
- Adjectives – Any word that can tell how big a noun is, how tall the noun is, how old the noun is, how quick the noun is, how many the noun is, what color the noun is, etc.
- Pronouns – Words used instead of a noun to avoid repetition.
- Verbs – Verbs are actions done by a noun.
- Adverbs – Adverbs describe how an action (verb) is done. “Arnav ran fast.”
- Prepositions – Words that show the relationship of a noun (or pronoun) to other words in the sentence, such as “in,” “on,” and “under.”
THE SEDGE WARBLER.
LEAVING the woods, gardens, and plantations, and proceeding to the river side, we meet with a very different class of birds—the river warblers. This is a very numerous family, and were we about to treat of all the known species, it might be advisable for simplicity's sake to group them into sub-families. As we are confining our attention, however, for the present, to those species only which have been met with in the British Islands, it will be less confusing if we dispense with this subdivision, and notice
them under the same generic name—Salicaria. The various members of this genus may be distinguished by their short wings, rounded tails, tarsus longer than the middle toe, large feet, long and curved claws, and large hind toe with strong curved claw. They differ, too, from other warblers in their habit of singing at night. There are eight species which have all more or less a claim to be included in the British list, although three only can be regarded as regular summer migrants. These three are the Sedge Warbler (S. phragmitis), the Reed Warbler (S. strepera), and the Grasshopper Warbler (S. locustella). The others are Savi's Warbler (S. luscinioides), the Aquatic Warbler (S. aquatica), the Marsh Warbler (S. palustris), the Great Reed Warbler (S. arundinacea), and the Rufous Warbler (S. galactoides). The Sedge Warbler and the Reed Warbler generally arrive much about the same time in April, but, from some unexplained cause, the latter is much more restricted in its distribution than the former. The Sedge Warbler is found throughout the British Islands, but the Reed Warbler is almost unknown in Ireland, and its nest has only once been met with in Scotland. As a rule, it is seldom, if ever, to be seen further north than Yorkshire and Lancashire, and does not breed either in Devon or Cornwall. It may thus be said to be almost confined to the eastern, midland, and south-eastern counties of England. Beyond the British Islands, too, it is less erratic in its movements than its congeners. The Sedge Warbler visits Scandinavia, Russia, and Siberia, and is found throughout Europe in summer, and in North Africa and Asia Minor in winter. The late Mr. Andersson sent specimens even from Damaraland, S.W. Africa. The Reed Warbler does not migrate as far north as this; but Mr. Gurney has received a specimen from Natal; and if we may rely on the identification of specimens obtained by Mr. Hodgson, it ranges as far eastward as
I have sometimes heard persons express their inability to distinguish these two species apart; but there ought to be no difficulty in the matter. The Sedge Warbler has a variegated back, with a conspicuous light streak over the eye; the Reed Warbler has a uniform pale-brown back, and the superciliary streak very faint. The actions of the two birds are not unlike, but their nesting habits are very different. S. phragmitis builds on the ground or very near it, making a nest of moss and grass, lined with horsehair, and laying five or six eggs of a yellowish-brown colour, with a few scattered spots or lines of a darker colour at the larger end. S. strepera suspends its nest between reed stems or twigs, round which a great portion of the nest is woven, and the entire structure is much larger, deeper, and more cup-shaped. The materials are long grasses, flowering reed-heads, and wool, the lining being composed of fine grass and hair. The eggs, five or six in number, are greenish-white speckled with ash-green and pale-brown. The habit which the Reed Warbler has of occasionally nesting at a distance from water is now probably well known to ornithologists. It was noticed by Mr. R. Mitford in the “Zoologist” for 1864 (p. 910) and subsequently by the writer, in “The Birds of Middlesex,” 1866 (p. 47), and by the author of “The Birds of Berks and Bucks,” 1868 (p. 81). Mr. B. Hamilton Booth, of Malton, Yorkshire, communicated the fact of his having discovered a nest of the Reed Warbler in a yew tree, built so as to include three or four twigs as if they were reeds, and placed at a height of at least twelve or fourteen feet from the ground. He accounted for the nest being built at such a height, and in a tree, on the supposition that the first nest had been destroyed by the rats which infest the place, and the birds had taken a pre
The aorta is an artery responsible for carrying oxygen-rich blood from your heart to the other parts of your body. The aortic valve, on the other hand, is the valve between the aorta and the heart. It has three leaflets, or cusps, that work to regulate blood flow. It closes securely once the blood passes through so that blood does not flow back into the heart.
Some people are born with a deformity of the heart in which the valve consists of only two cusps instead of three, and as a result it does not work perfectly. This is called a bicuspid aortic valve. In rare cases, people are born with only one cusp or four cusps.
Bicuspid Aortic Valve
A bicuspid aortic valve is dangerous because it may cause the aortic valve to narrow. The narrowing of the valve may hinder normal blood flow from the heart to other parts of the body. In some cases, the valve does not close securely enough and blood flows back into the left ventricle.
In some cases a person with a bicuspid aortic valve functions perfectly, without any sign of a problem, until adulthood. Some patients even reach old age before experiencing problems from having a bicuspid aortic valve.
Some patients with a bicuspid aortic valve have a problem with the valve itself, while others face a problem with an enlarged aorta – the main blood pathway coming from the heart.
What are the Symptoms of Bicuspid Aortic Valve?
A bicuspid aortic valve is usually diagnosed in adulthood. This is because, most of the time, the patient’s heart functions well and he or she does not show any signs of the existing bicuspid aortic valve disease (BAVD).
The condition called stenosis is the primary reason the symptoms of BAVD occur. It happens when calcium deposits on and around the leaflets narrow the valve. With a stenotic valve, the heart is forced to pump harder so that blood can make it through. The related symptoms are chest pain, dizziness, and difficulty breathing, caused by a lack of adequate blood flow, including to the brain.
When the valve does not close properly, blood flows back into the heart. The heart then has to pump the same blood again, which strains the left ventricle. This occurrence is called aortic valve insufficiency. Because of this repetitive pumping, the ventricle expands, or dilates, and the patient may feel short of breath when walking up stairs or during other exertion.
To diagnose a bicuspid aortic valve, your doctor will consider the symptoms you are experiencing and may also look at your family medical history. A physical examination may be necessary, such as listening to your heart with a stethoscope; a heart murmur can be a sign of a bicuspid aortic valve.
An echocardiogram is another tool that doctors use to identify a bicuspid aortic valve. The equipment makes a video image of the moving heart, and doctors study the result to identify problems with your heart chambers, aorta, aortic valve, and the blood flow through your heart. Additional imaging tests may be conducted by your doctor to fully diagnose your condition.
Examinations that are not related to a heart check-up may also incidentally reveal a bicuspid aortic valve.
Are there Treatments for Bicuspid Aortic Valve?
People with a bicuspid aortic valve need regular monitoring to catch any change in their condition. As people born with a bicuspid aortic valve age, they may eventually develop a problem such as stenosis, an enlarged aorta, or aortic valve insufficiency, which may require treatment. Depending on the treatment needed, a patient may undergo:
- Balloon Valvuloplasty – With this method, your surgeon makes an incision in your groin area and inserts a catheter that has a balloon on the end. The surgeon guides the catheter until it reaches your aortic valve, then inflates the balloon, which expands the valve, before deflating and removing it. The method is effective in treating stenosis in babies and children, but in adults the valve tends to narrow again over time.
- Aortic Valve Replacement – In this surgery, the surgeon removes the damaged valve and replaces it with either a mechanical valve or a biological tissue valve. Biological tissue valves are made from pig, cow, or human heart tissue, and the replacement may even come from your own pulmonary valve. Both types of aortic valve replacement have risks that you should discuss with your doctor. A mechanical valve, for instance, requires the patient to take blood-thinning medication for the rest of his or her life to prevent blood clots, while a biological tissue valve degenerates and needs to be replaced over time.
- Aortic Root and Ascending Aorta Surgery – When a section of the aorta near the heart becomes dangerously enlarged, the surgeon removes the enlarged section and grafts a synthetic tube in its place. During the procedure, a damaged aortic valve can also be repaired or, depending on the case, left in place.
- Aortic Valve Repair – This method is not usually performed to treat a bicuspid aortic valve itself, but it is used to repair a damaged aortic valve. Surgeons have two options: the first is to separate valve flaps that have fused together, and the second is to reshape or remove excess valve tissue. Either can allow the flaps to close securely.
Once you are diagnosed with a bicuspid aortic valve, you will need lifelong care from a pediatric cardiologist if you are diagnosed as a child, or from a congenital cardiologist as an adult. You should keep regular check-ups so that any changes in your condition can be monitored. |
A parallelogram refers to a four-sided figure that has two sets of parallel and congruent sides. For example, a square is a parallelogram. However, not all parallelograms are squares because parallelograms do not have to have four 90 degree angles. Since parallelograms are two-dimensional shapes, you can find the area but not the volume. To find the area, you need to know the base length and height of the parallelogram.
Select one pair of sides of the parallelogram as the base sides. It doesn't matter which pair of sides because both pairs of sides must be parallel and congruent.
Measure the perpendicular distance between the two base sides to find the height of the parallelogram.
Measure the length of one of the base sides. It does not matter which side you measure because they are congruent so it will be the same length.
Multiply the base length times the height to find the area of the parallelogram. In this example, if the height equals 5 inches and the base equals 9 inches, multiply 5 by 9 to get an area of 45 square inches.
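If you want to automate the calculation just described, a minimal sketch in Python (the function name and the example values are only illustrative) looks like this:

```python
def parallelogram_area(base, height):
    """Area of a parallelogram: base length times perpendicular height."""
    return base * height

# The worked example from the text: base 9 inches, height 5 inches.
print(parallelogram_area(base=9, height=5))  # 45 (square inches)
```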
(Entomology Today) When it comes to flowers, the traits humans prefer – things like low pollen production, brighter colors, and changes to the height and shape of plants – are a mixed bag for pollinators. Researchers are now trying to understand what characteristics make ornamental plants attractive to pollinators. “I think this research is an important step to understanding how to design urban and suburban landscapes that are practical for humans and pollinators.”
(Universität Wien) Although several studies have documented that pollinators can impose strong selection pressures on flowers, our understanding of how flowers diversify remains fragmentary. For example, does the entire flower adapt to a pollinator, or do only some flower parts evolve to fit a pollinator while other flower parts may remain unchanged?
(Bowdoin) Nectar, the sweet reward that entices bees to visit flowers, is a complex substance made up of several ingredients, including sucrose, fructose, amino acids, yeasts—and toxic compounds that normally deter insects from eating plants. One researcher is exploring this contradiction and what it might mean for the health of bees.
(EurekAlert/American Society for Horticultural Science) “Establishing a season-long succession of flowers is critical in providing forage for pollinating insects throughout the growing season, which coincides with their life cycles. We observed pollinator activity on species of Crocus and Muscari from January to March, providing honey bees with pollen and nectar during normal times of severe food shortage.”
(Entomology Today) It turns out that bees defecate while foraging pollen or nectar, and sick bees may defecate more than usual, possibly transmitting infection through their fecal matter. So researchers set out to determine how important flower shape is to bee defecation patterns, with the hope that this data might help unravel the mysteries of disease transmission among bees.
(University of Kansas) Flowers depend on bees, birds and other pollinators to reproduce, and they can adapt strategically to attract these creatures – sometimes altering their traits so dramatically that they lure an altogether new pollinator. But not all such strategies are created equal. Researchers demonstrate that abandoning one pollinator for another to realize immediate benefits could compromise a flower’s long-term survival.
(Center for Biological Diversity) The U.S. Fish and Wildlife Service announced today it will consider Endangered Species Act protection for the Mojave poppy bee. Today’s positive finding comes in response to a petition filed in 2018 by the Center for Biological Diversity. Although it once thrived across much of the Mojave Desert, the quarter-inch-long, yellow-and-black bee is now only found in seven locations in Nevada’s Clark County. The bee is tightly linked to the survival of two rare desert poppy flowers. The bee has disappeared as those plants have declined, as well as facing ongoing threats from grazing, recreation and gypsum mining.
(Phys.org/Universitaet Mainz) Caffeine is a compound present in various plant species and is known to stimulate the central nervous system of honey bees as well as humans. Some plants add caffeine to their nectar with the aim of manipulating the activity of pollinators. However, caffeine does not appear to influence the behavior of a stingless bee that is a main pollinator of coffee plants. |
- Download, print and laminate, to use as a Table Top Resource for Students with Dyslexia
Dyslexic students will generally struggle to isolate, segment, blend and manipulate the phonemes (smallest units of spoken words) and map them to graphemes. These skills are essential for reading and spelling.
To help them, we 'Code Map' words (a patented technique) as follows. Students also 'Duck Hand' each word as they 'follow the sounds to say the word' to ensure that this is a multisensory experience:
We use Monster Mapping to SHOW them which speech sounds to use (each phoneme monster links to a phonetic symbol as used universally within the IPA)
We spend a few hours before transitioning to linking graphemes (phonics) using just the first 6 to build and 'read' over 20 words with just the monsters, to develop phonemic awareness (Phase 1)
We teach high frequency grapheme choices for all the English phonemes in the SSP systematic phonics teaching order.
We make sure they can see ALL spelling choices for each phoneme using the Spelling Clouds. Each Speech Sound Monster has their own Spelling Cloud, showing example words for all spelling choices.
We offer these spelling choices as a downloadable Spelling Mat on Teachers Pay Teachers! |
The Problem: Water quality is affected by chemical run-off that makes it to the sea. The ocean, while it seems so vast, had long been a dumping ground for much of the waste produced on land. From solids to chemicals and even nuclear products, waste has been dumped in the ocean with the mindset that it will eventually disperse and become harmless. It is not harmless though, and can become even more concentrated and dangerous after entering the food chain. Dumping of most chemical waste was outlawed by the London Dumping Convention in 1972, and an amended treaty in 1996 further restricted what could be dumped in the ocean. Even when humans aren't directly dumping waste into the oceans though, toxic chemicals are still making their way to the sea. Fertilizers, pesticides and other products often make it to waterways through chemical run-off. These chemicals can seep into the soil and travel for long distances, eventually reaching the oceans where they can be carried by currents.
In coastal areas, fertilizer run-off is a huge problem. Too many nutrients in the fertilizer can cause eutrophication, or algae blooms that deplete the water's dissolved oxygen and suffocate other marine life. Eutrophication has created dead zones in different parts of the world, including the Gulf of Mexico and the Baltic Sea.
Another problem associated with poor water quality is fibropapillomatosis, a condition characterized by the presence of fibropapillomas, neoplasms involving both the epidermal and dermal skin layers. Fibropapillomatosis is a disease specific to sea turtles which has become increasingly prevalent in green sea turtle populations around the world. It is predominantly seen in warmer regions such as the Caribbean, Hawaii, Australia, and Japan. It can also be associated with areas that have poor water turnover, particularly where human waste enters the waterways.
Species Affected: All species of sea turtles are affected by poor water quality caused by toxic chemicals. Fibropapillomatosis predominantly affects juvenile green turtles.
The Solution: Education is important to solving marine pollution. The public can get involved in this issue by:
* Following local codes enforcing fertilizer bans near waterways;
* Using less chemical fertilizers, opting for natural compost instead;
* Buying organically produced food and products;
* Getting informed about local waste disposal to ensure that untreated waste water isn't introduced into natural waterways and oceans.
Case Study: “Within the past decade, sea turtles in Florida Bay and the surrounding waters have shown signs of disease caused by toxic chemicals. Hundreds of turtles have been found dead or dying, some with massive tumor growths on their skin, covering their heads, necks and legs. It is thought that these growths are caused by a virus that attacked the turtles because of their lowered immunity system, which was in turn caused by pollution such as runoff from farmland or inadequate food when sea grasses died off. Some of these turtles have been saved by placing them in aquariums for months to build up their immune systems. Because of the increasing number of sick turtles seen, however, and the other problems that these sea turtles face, many scientists now believe that within 50 years, sea turtles will have disappeared from Southeastern waters.” (http://www.endangeredspecieshandbook.org/aquatic_toxic4.php)
Soundproofing is any means of reducing the sound pressure with respect to a specified sound source and receptor. There are several basic approaches to reducing sound: increasing the distance between source and receiver, using noise barriers to reflect or absorb the energy of the sound waves, using damping structures such as sound baffles, or using active antinoise sound generators.
Two distinct soundproofing problems may need to be considered when designing acoustic treatments - to improve the sound within a room (see reverberation), and reduce sound leakage to/from adjacent rooms or outdoors (see sound transmission class and sound reduction index). Acoustic quieting and noise control can be used to limit unwanted noise. Soundproofing can suppress unwanted indirect sound waves such as reflections that cause echoes and resonances that cause reverberation. Soundproofing can reduce the transmission of unwanted direct sound waves from the source to an involuntary listener through the use of distance and intervening objects in the sound path.
The energy density of sound waves decreases as they spread out, so that increasing the distance between the receiver and source results in a progressively lesser intensity of sound at the receiver. In a normal three-dimensional setting, with a point source and point receptor, the intensity of sound waves will be attenuated according to the inverse square of the distance from the source.
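As a rough illustration of that inverse-square behaviour, here is a small Python sketch; it assumes an idealised point source in a free field with no reflections or absorption, so real rooms will behave less neatly:

```python
import math

def intensity_ratio(r1, r2):
    """Relative sound intensity at distance r2 compared with r1 (inverse-square law)."""
    return (r1 / r2) ** 2

def level_drop_db(r1, r2):
    """Corresponding drop in sound level, in decibels, when moving from r1 to r2."""
    return -10 * math.log10(intensity_ratio(r1, r2))

print(intensity_ratio(1.0, 2.0))  # 0.25, so doubling the distance quarters the intensity
print(level_drop_db(1.0, 2.0))    # about 6 dB quieter for each doubling of distance
```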
Damping means to reduce resonance in the room, by absorption or redirection (reflection or diffusion). Absorption will reduce the overall sound level, whereas redirection makes unwanted sound harmless or even beneficial by reducing coherence. Damping can reduce the acoustic resonance in the air, or mechanical resonance in the structure of the room itself or things in the room.
Absorbing sound spontaneously converts part of the sound energy to a very small amount of heat in the intervening object (the absorbing material), rather than sound being transmitted or reflected. There are several ways in which a material can absorb sound. The choice of sound absorbing material will be determined by the frequency distribution of noise to be absorbed and the acoustic absorption profile required
Porous absorbers, typically open cell rubber foams or melamine sponges, absorb noise by friction within the cell structure. Porous open cell foams are highly effective noise absorbers across a broad range of medium-high frequencies. Performance can be less impressive at lower frequencies.
The exact absorption profile of a porous open cell foam will be determined by a number of factors including the following:
- Cell size
- Material thickness
- Material density
Resonant panels, Helmholtz resonators and other resonant absorbers work by damping a sound wave as they reflect it. Unlike porous absorbers, resonant absorbers are most effective at low-medium frequencies and the absorption of resonant absorbers is always matched to a narrow frequency range.
When sound waves hit a medium, the reflection of that sound is dependent on dissimilarity of the surfaces it comes in contact with. Sound hitting a concrete surface will result in a much different reflection than if sound were to hit a softer medium such as fiberglass. In an outdoor environment such as highway engineering, embankments or panelling are often used to reflect sound upwards into the sky.
If a specular reflection from a hard flat surface is giving a problematic echo then an acoustic diffuser may be applied to the surface. It will scatter sound in all directions. This is effective to eliminate pockets of noise in a room.
Room within a room
A room within a room (RWAR) is one method of isolating sound and preventing it from transmitting to the outside world where it may be undesirable.
Most vibration / sound transfer from a room to the outside occurs through mechanical means. The vibration passes directly through the brick, woodwork and other solid structural elements. When it meets with an element such as a wall, ceiling, floor or window, which acts as a sounding board, the vibration is amplified and heard in the second space. A mechanical transmission is much faster, more efficient and may be more readily amplified than an airborne transmission of the same initial strength.
The use of acoustic foam and other absorbent means is less effective against this transmitted vibration. The user is advised to break the connection between the room that contains the noise source and the outside world. This is called acoustic decoupling. Ideal decoupling involves eliminating vibration transfer in both solid materials and in the air, so air-flow into the room is often controlled. This has safety implications: inside decoupled space, proper ventilation must be assured, and gas heaters cannot be used.
Noise cancellation generators for active noise control are a relatively modern innovation. A microphone is used to pick up the sound that is then analyzed by a computer; then, sound waves with opposite polarity (180° phase at all frequencies) are output through a speaker, causing destructive interference and cancelling much of the noise.
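The superposition idea behind noise cancellation can be sketched in a few lines of Python. This toy example only adds a tone to its phase-inverted copy; it is not a model of a real adaptive system, which has to estimate the noise continuously from microphone input and cope with delays:

```python
import numpy as np

fs = 8000                            # sample rate in Hz (arbitrary for the demo)
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of signal
noise = np.sin(2 * np.pi * 440 * t)  # unwanted 440 Hz tone picked up by the microphone
antinoise = -noise                   # opposite polarity (180 degrees out of phase)

residual = noise + antinoise         # what the listener would hear
print(np.max(np.abs(residual)))      # 0.0, complete cancellation in this idealised case
```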
Residential soundproofing aims to decrease or eliminate the effects of exterior noise. The main focus of residential soundproofing in existing structures is the windows and doors. Solid wood doors are a better sound barrier than hollow doors. Curtains can be used to damp sound either through use of heavy materials or through the use of air chambers known as honeycombs. Single-, double- and triple-honeycomb designs achieve relatively greater degrees of sound damping. The primary soundproofing limit of curtains is the lack of a seal at the edge of the curtain, although this may be alleviated with the use of sealing features, such as hook and loop fastener, adhesive, magnets, or other materials. Thickness of glass will play a role when diagnosing sound leakage. Double-pane windows achieve somewhat greater sound damping than single-pane windows when well sealed into the opening of the window frame and wall.
Significant noise reduction can also be achieved by installing a second interior window. In this case the exterior window remains in place while a slider or hung window is installed within the same wall openings.
In the USA the FAA offers soundproofing for homes that fall within a noise contour where the average decibel level is 65 decibels. It is part of their Residential Sound Insulation Program. The program provides Solid-core wood entry doors plus windows and storm doors.
Restaurants, schools, office businesses, and health care facilities use architectural acoustics to reduce noise for their customers. In the US, OSHA has requirements regulating the length of exposure of workers to certain levels of noise.
Commercial businesses sometimes use soundproofing technology, especially when they have an open office design. There are many reasons why a business might implement soundproofing for its office. One of the biggest hindrances to worker productivity is the distracting noise that comes from people talking, whether on the phone or with their co-workers and managers. Soundproofing is important in keeping people from losing their concentration and focus on their work. It is also important for keeping confidential conversations audible only to the intended listeners.
When choosing places to install soundproofing, acoustic panels should go in office areas where traffic corridors, circulation pathways, and open work areas connect. Successful acoustic panel installations rely on three strategies: absorbing sound, blocking sound transmission from one place to another, and covering or masking the sound.
Automotive soundproofing aims to decrease or eliminate the effects of exterior noise, primarily engine, exhaust and tire noise across a wide frequency range. When constructing a vehicle which includes soundproofing, a panel damping material is fitted which reduces the vibration of the vehicle's body panels when they are excited by one of the many high energy sound sources caused when the vehicle is in use. There are many complex noises created within vehicles which change with the driving environment and speed at which the vehicle travels. Significant noise reductions of up to 8 dB can be achieved by installing a combination of different types of materials.
The automotive environment limits the thickness of materials that can be used, but combinations of dampers, barriers, and absorbers are common. Common materials include felt, foam, polyester, and Polypropylene blend materials. Waterproofing may be necessary based on materials used. Acoustic foam can be applied in different areas of a vehicle during manufacture to reduce cabin noise. Foams also have cost and performance advantages in installation since foam material can expand and fill cavities after application and also prevent leaks and some gases from entering the vehicle. Vehicle soundproofing can reduce wind, engine, road, and tire noise. Vehicle soundproofing can reduce sound inside a vehicle from five to 20 decibels.
Noise barriers as exterior soundproofing
Since the early 1970s, it has become common practice in the United States and other industrialized countries to engineer noise barriers along major highways to protect adjacent residents from intruding roadway noise. The Federal Highway Administration (FHWA) in conjunction with State Highway Administration (SHA) adopted Federal Regulation (23 CFR 772) requiring each state to adopt their own policy in regards to abatement of highway traffic noise. Engineering techniques have been developed to predict an effective geometry for the noise barrier design in a particular real world situation. Noise barriers may be constructed of wood, masonry, earth or a combination thereof. One of the earliest noise barrier designs was in Arlington, Virginia adjacent to Interstate 66, stemming from interests expressed by the Arlington Coalition on Transportation. Possibly the earliest scientifically designed and published noise barrier construction was in Los Altos, California in 1970.
- Acoustic board
- Acoustic foam
- Acoustic quieting
- Acoustic transmission
- Anechoic chamber
- Hearing test
- Noise barrier
- Noise control
- Noise mitigation
- Noise pollution
- Noise regulation
- Noise, vibration, and harshness
- Recording studio
- Room modes
- Sound masking
- Sound transmission class
- Buffer (disambiguation)
- Damped wave
- Damper (disambiguation)
Transit: The GPS Forefather
Before there was GPS, there was the Navy navigation satellite system called Transit. Development began in 1958 at the Johns Hopkins University Applied Physics Laboratory; it was declared operational in 1964 and continued until 1996. The satellites were tracked by a series of ground stations and a command center that operated the satellites and generated their navigation messages.
Transit operated on a Doppler ranging principle. Motion of the satellite relative to the user produced a Doppler frequency shift in the satellite signal received. The user’s receiver generated a reference signal that was compared with the received signal by generating the difference frequency between the two. The receiver counted the number of cycles of the difference frequency over an interval (often about 23 seconds) to form the “Doppler count.” Essentially, the Doppler count was a measure of the change in the slant range (distance) of the satellite and the user between the start and end of the interval. In practice, a position fix would use several successive Doppler counts to compute the user position.
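The bookkeeping that links a Doppler count to a change in slant range can be sketched as follows. The frequencies, counting interval, and range change are made-up numbers chosen only for illustration (Transit actually broadcast near 150 and 400 MHz), and the function names are invented for the sketch:

```python
C = 299_792_458.0   # speed of light, m/s

def doppler_count(f_ref, f_t, interval, delta_range):
    """Cycles accumulated over the counting interval (s), given the transmitted
    frequency f_t (Hz), the receiver reference f_ref (Hz), and the change in
    slant range (m) over the interval."""
    return (f_ref - f_t) * interval + (f_t / C) * delta_range

def range_change(f_ref, f_t, interval, count):
    """Invert the Doppler count to recover the change in slant range (m)."""
    return (C / f_t) * (count - (f_ref - f_t) * interval)

# Hypothetical pass: reference offset 10 kHz above a 400 MHz carrier, a 23 s count,
# and a satellite closing on the receiver by 150 km during the interval.
n = doppler_count(f_ref=400.01e6, f_t=400e6, interval=23.0, delta_range=-150e3)
print(range_change(400.01e6, 400e6, 23.0, n))  # about -150000 m, recovered by construction
```

Several such counts from a single pass, combined with the satellite's broadcast orbit, are what a Transit receiver used to fix its position.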
One of the strengths of Transit was that it required a nominal constellation of only four satellites, because a position fix required only one satellite at a time; in practice, the constellation generally had between four and six satellites. These were in circular, polar, low Earth orbits (about 1075-kilometer altitude), which ensured good Doppler shifts and reduced the required broadcast power and the size of the required launch vehicle (Scout). It was a system with an unlimited number of passive users anywhere in the world, and it could operate in essentially all weather conditions. But the Doppler principle also meant that a position fix could take 30 minutes to compute, and any motion of the receiver (especially for airborne users) complicated the position calculation. It was generally considered only a 2-D system (latitude and longitude), and it was noncontinuous in many areas (since 30 minutes might elapse before the next satellite came into view).
In contrast, the GPS system is based on determining the range between the user and a GPS satellite; the user essentially computes the time required for the satellite signal to reach the receiver. Range measurements to four GPS satellites allow the user’s receiver to compute its 3-D position and correct for errors in its internal clock. GPS leveraged the lessons of the Navy’s Transit and Timation (time navigation—passive ranging by measuring the time difference between electronic clocks located within the satellite and in the navigator’s receiver). An atomic frequency standard provides a stable, precise signal, whose timing is synchronized across the constellation. The code-division, multiple-access technique allows all satellites to transmit on the same center frequency. An additional navigation signal on a second frequency allows the user to correct for ranging errors introduced as the signals pass through the ionosphere.
This precise ranging to four GPS satellites enables rapid and accurate real-time positioning for dynamic users, which addresses a shortcoming of the earlier systems. But it comes at a price: If you want continuous, real-time positioning anywhere at any time, four satellites need to be in continuous view. Moreover, they need to have “good geometry” to ensure an accurate solution and be deployed in stable, predictable orbits. These conditions led GPS to its constellation of 24 to 30 satellites. The Russian GLONASS system and the European Union’s Galileo system each define their own constellation, but are about the same size range when fully populated.
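As a rough illustration of the pseudorange positioning described above, here is a toy Gauss-Newton sketch in Python. The satellite positions, receiver location, and clock bias are invented numbers, and the code ignores everything a real receiver must handle (orbit models, ionospheric delay, signal tracking and so on):

```python
import numpy as np

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Estimate receiver position (m) and clock bias (in metres of range)
    from pseudoranges to four or more satellites."""
    x = np.zeros(4)  # [x, y, z, clock bias], starting guess at the Earth's centre
    for _ in range(iterations):
        diffs = sat_positions - x[:3]
        ranges = np.linalg.norm(diffs, axis=1)
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian: negative unit line-of-sight vectors plus a column of ones for the clock term
        J = np.hstack([-diffs / ranges[:, None], np.ones((len(ranges), 1))])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x

# Hypothetical geometry: four satellites at roughly GPS orbital radius and a
# receiver on the Earth's surface whose clock error adds 1 km to every pseudorange.
sats = np.array([[26_600e3, 0, 0],
                 [0, 26_600e3, 0],
                 [0, 0, 26_600e3],
                 [15_000e3, 15_000e3, 15_000e3]], dtype=float)
receiver = np.array([6_371e3, 0.0, 0.0])
pseudoranges = np.linalg.norm(sats - receiver, axis=1) + 1_000.0

print(solve_position(sats, pseudoranges))  # about [6.371e6, 0, 0, 1000]
```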
Lids and lacrimal system
Eye lids are movable folds which act as shutters protecting the eye from the external environment. Along with the pupil, they regulate the entry of light into the eye. Whenever any object comes towards the eye, the eyelid reflexly closes and protects the eyeball. In some diseases, particularly those affecting its nerve supply, the eye lid may not close completely during sleep, in which case the exposed cornea and conjunctiva can suffer permanent damage. The eye lid also helps to keep the cornea moist.
DESCRIPTION OF EYE LIDS
Anatomy of Eye Lid
There are two eye lids in each eye: the upper eye lid and the lower eye lid.
Layers of the Eye Lid
From anterior to posterior:
1) Skin
2) Muscular Layer
3) Fibrous Layer - Tarsal plate
4) Mucous membrane Layer - Conjunctiva
Skin of the Eyelid
The skin of the lid is the thinnest in the body, less than 1 mm thick.
It is almost transparent and is loosely attached to the underlying structures. The skin layer ends at the margins of the eye lid. There is no subcutaneous fat in the eye lid. The skin of the lid contains sweat glands and sebaceous glands.
Muscles present in the eye lid include:
1) Orbicularis oculi
2) Levator palpebrae superioris
3) Muller's muscle
Fibrous Layer Parts
1) Tarsus
2) Orbital septum
The tarsus is a thickened layer of fibrous tissue that gives strength to the lids; it does not contain any cartilage or bone. The height of the tarsus is about 1 cm in the upper lid and about 5 mm in the lower lid. Its thickness is about 1 mm.
The main function of the tarsus is to give structural support to the eye lid.
The orbital septum is a membrane-like structure which arises from the superior and inferior orbital margins. It is flexible and follows the movements of the lid.
Structures piercing the orbital septum include:
Lacrimal vessels and nerves
Supra-orbital vessels and nerves
Supratrochlear nerve and artery
Palpebral artery and levator palpebrae superioris muscle.
The opened lids enclose an elliptical opening between their margins which is called the palpebral fissure. The lids meet at the medial and lateral ends of the fissure, at angles known as the canthi.
There are two canthi:
1) Medial canthus
2) Lateral canthus
Mucous Membrane Layer (Conjunctiva)
The conjunctiva is a mucous membrane that covers the posterior aspect of the lid.
The blood supply of the lids is derived from:
1) Ophthalmic Artery
2) Lacrimal Artery
Veins of the lids drain into ophthalmic veins.
Lymphatic drainage: the medial side of the lids drains to the submandibular lymph nodes and the lateral side drains into the preauricular lymph nodes.
There are two types of nerve supply: motor and sensory.
Motor: The orbicularis oculi muscle is supplied by the facial nerve.
The levator palpebrae superioris is supplied by the oculomotor nerve.
Sensory:
Upper Lids: Supraorbital Nerve, Infratrochlear and Supratrochlear Nerves
Lower Lids: Infraorbital Nerve
The eye is constantly exposed to the external environment. Because of this constant exposure there is danger to the external structures of the eye, such as the cornea and conjunctiva, and microorganisms in the environment can frequently cause infection. To protect the eye from such harmful external effects, a fluid layer known as the tear film covers the conjunctiva and cornea. The system by which the tear film is formed and drained from the eye is known as the lacrimal apparatus.
The lacrimal apparatus consists of two parts:
1) Tear production system
2) Tear drainage system
Tear Production System
Tear film is produced by lacrimal glands. There are two types of lacrimal glands:
1) Main lacrimal glands
2) Accessory lacrimal glands
Main Lacrimal Glands
Situation: The main lacrimal gland is situated at the supratemporal aspect of the orbit.
It has 6 to 12 ducts which drain into the superior fornix.
Histologically, the lacrimal gland is a tubuloacinar gland.
Function: The function of the main lacrimal gland is to produce tears under reflex conditions. Whenever injury or any infection occurs to the external eye, tears are produced by the main lacrimal gland. This type of secretion is also known as reflex secretion.
Accessory Lacrimal Glands
There are two types of accessory lacrimal glands:
1) Accessory lacrimal gland of Krause
2) Accessory lacrimal glands of Wolfring
Accessory lacrimal glands produce the normal tears secreted under basal conditions: even when there is no stimulus, a certain amount of tear secretion occurs, and it comes from these glands.
Deficient tear secretion by these glands leads to a condition known as dry eye.
Lacrimal Drainage System
Lacrimal fluid is ultimately drained into the nose. The path through which lacrimal fluid passes is known as the lacrimal drainage system.
Parts of Lacrimal Drainage System
1) Punctum
2) Lacrimal canaliculi
3) Common canaliculus
4) Lacrimal Sac
5) Naso-Lacrimal duct
Punctum: The punctum is an opening at the medial part of the eye lid margin, situated about 6 mm from the medial canthus. Under normal circumstances the punctum is not visible; it becomes visible only when the eye lid is pulled outwards. The tear film first enters the punctum and then passes to the canaliculus.
Lacrimal Canaliculus: Each eye lid has one canaliculus.
There are two parts of the canaliculus:
1) Vertical Portion
2) Horizontal Portion
Vertical Portion: It is about 2 mm long and starts from the punctum. It bends medially at almost 90° to become continuous with the horizontal canaliculus. At the angle between the vertical and horizontal parts there is a dilation known as the ampulla.
Horizontal Portion: It is about 8 mm long. The upper canaliculus and lower canaliculus unite to form the common canaliculus.
Common Canaliculus: The common canaliculus is formed by the joining of the upper and lower canaliculi. It drains into the lacrimal sac.
Epithelium of the canaliculus: stratified squamous epithelium.
Lacrimal Sac: The lacrimal sac is situated at the medial and inferior wall of the orbit in a shallow depression called the lacrimal fossa. It acts as a reservoir for lacrimal fluid. The sac, closed above and open below, is continuous with the naso-lacrimal duct, into which it drains.
Parts of the lacrimal Sac:
1) Fundus, 2) Body
The naso-lacrimal duct is a downward continuation of the lacrimal sac.
It drains into the inferior meatus of the nose. This part of the lacrimal drainage system is more prone to damage because of its proximity to the nasal cavity.
Length: 15 mm.
Situation: Situated within a canal formed mainly by the maxilla.
Opening: It opens into the inferior meatus of nose.
Valve of Hasner: Situated at the opening of the naso-lacrimal duct.
Algebra is a part of mathematics (often called math in the United States and maths in the United Kingdom). It uses variables to represent a value that is not yet known. When an equals sign (=) is used, this is called an equation. A very simple equation using a variable is: 2 + 3 = x. In this example, x = 5, or it could also be said that "x is five". This is called solving for x.
Algebra can be used to solve real problems because the rules of algebra work in real life and numbers can be used to represent the values of real things. Physics, engineering and computer programming are areas that use algebra all the time. It is also useful to know in surveying, construction and business, especially accounting.
People who do algebra need to know the rules of numbers and mathematic operations used on numbers, starting with adding, subtracting, multiplying, and dividing. More advanced operations involve exponents, starting with squares and square roots. Many of these rules can also be used on the variables, and this is where it starts to get interesting.
Algebra was first used to solve equations and inequalities. Two examples are linear equations (the equation of a line, y = mx + b) and quadratic equations, which have variables that are squared (raised to the power of two, meaning a number that is multiplied by itself, for example: 2*2, 3*3, x*x). Knowing how to factor polynomials is needed to solve quadratic equations.
History
Early forms of algebra were developed by the Babylonians and the Greeks. However the word "algebra" is a Latin form of the Arabic word Al-Jabr ("casting") and comes from a mathematics book Al-Maqala fi Hisab-al Jabr wa-al-Muqabilah, ("Essay on the Computation of Casting and Equation") written in the 9th century by a Persian mathematician, Muhammad ibn Mūsā al-Khwārizmī, who was a Muslim born in Khwarizm in Uzbekistan. He flourished under Al-Ma'moun in Baghdad, Iraq through 813-833 AD, and died around 840 AD. The book was brought into Europe and translated into Latin in the 12th century. The book was then given the name 'Algebra'. (The ending of the mathematician's name, al-Khwarizmi, was changed into a word easier to say in Latin, and became the English word algorithm.)
Examples
Here is a simple example of an algebra problem:
- Sue has 12 candies, Ann has 24 candies. They decide to share so that they have the same number of candies.
These are the steps you can use to solve the problem:
- To have the same number of candies, Ann has to give some to Sue. Let x represent the number of candies Ann gives to Sue.
- Sue's candies, plus x, must be the same as Ann's candies minus x. This is written as: 12 + x = 24 - x
- Subtract 12 from both sides of the equation. This gives: x = 12 - x. (What happens on one side of the equals sign must happen on the other side too, for the equation to still be true. So in this case when 12 was subtracted from both sides, there was a middle step of 12 + x - 12 = 24 - x - 12. After a person is comfortable with this, the middle step is not written down.)
- Add x to both sides of the equation. This gives: 2x = 12
- Divide both sides of the equation by 2. This gives x = 6. The answer is six. If Ann gives Sue 6 candies, they will have the same number of candies.
- To check this, put 6 back into the original equation wherever x was: 12 + 6 = 24 - 6
- This gives 18=18, which is true. They both now have 18 candies.
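If you want to check the result above with software, one way is to hand the equation to a computer algebra system. This is only a sketch using the SymPy library, with x as the symbolic unknown:

```python
from sympy import Eq, solve, symbols

x = symbols('x')
equation = Eq(12 + x, 24 - x)  # Sue's candies plus x equals Ann's candies minus x
print(solve(equation, x))      # [6]
```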
With practice, algebra can be used when faced with a problem that is too hard to solve any other way. Problems such as building a freeway, designing a cell phone, or finding the cure for a disease all require algebra.
Writing algebra
As in most parts of mathematics, adding z to y (or y plus z) is written as y + z. Subtracting z from y (or y minus z) is written as y − z. Dividing y by z (or y over z) is written as y ÷ z or y/z. y/z is more commonly used.
In algebra, multiplying y by z (or y times z) can be written in 4 ways: y × z, y * z, y·z, or just yz. The multiplication symbol "×" is usually not used, because it looks too much like the letter x, which is often used as a variable. Also, when multiplying a larger expression, parentheses can be used: y (z+1).
When we multiply a number and a letter in algebra, we write the number in front of the letter: 5 × y = 5y. When the number is 1, then the 1 is not written because 1 times any number is that number (1 × y = y) and so it is not needed.
Functions and Graphs
An important part of algebra is the study of functions, since functions often appear in equations that we are trying to solve. A function is like a box you can put a number or numbers into and get a certain number out. When using functions, graphs can be powerful tools in helping us to study the solutions to equations.
A graph is a picture that shows all the values of the variables that make the equation or inequality true. Usually this is easy to make when there are only one or two variables. The graph is often a line, and if the line does not bend or go straight up-and-down it can be described by the basic formula y = mx + b. The variable b is the y-intercept of the graph (where the line crosses the vertical axis) and m is the slope or steepness of the line. This formula applies to the coordinates of a graph, where each point on the line is written (x, y).
In some math problems like the equation for a line, there can be more than one variable (x and y in this case). To find points on the line, one variable is changed. The variable that is changed is called the "independent" variable. Then the math is done to make a number. The number that is made is called the "dependent" variable. Most of the time the independent variable is written as x and the dependent variable is written as y, for example, in y = 3x + 1. This is often put on a graph, using an x axis (going left and right) and a y axis (going up and down). It can also be written in function form: f(x) = 3x + 1. So in this example, we could put in 5 for x and get y = 16. Putting in 2 for x would give y = 7, and putting in 0 for x would give y = 1. So there would be a line going through the points (5,16), (2,7), and (0,1).
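The same substitution can be done by a short program. This sketch simply evaluates the example function at the three x values mentioned above:

```python
def f(x):
    """The example line: slope 3, y-intercept 1."""
    return 3 * x + 1

for x in (5, 2, 0):
    print(x, f(x))  # prints 5 16, then 2 7, then 0 1: the points (5,16), (2,7), (0,1)
```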
If x has a power of 1, it is a straight line. If it is squared or some other power, it will be curved. If it uses an inequality (< or >), then usually part of the graph is shaded, either above or below the line.
Rules of algebra
In algebra, there are a few rules that can be used for further understanding of equations. These are called the rules of algebra. While these rules may seem senseless or obvious, it is wise to understand that these properties do not hold throughout all branches of mathematics. Therefore, it will be useful to know how these axiomatic rules are declared, before taking them for granted. Before going on to the rules, reflect on two definitions that will be given.
- Opposite - the opposite of a is −a.
- Reciprocal - the reciprocal of a is 1/a.
Rules
Commutative property of addition
'Commutative' means that a function has the same result if the numbers are swapped around. In other words, the order of the terms in an equation does not matter. When the operator of two terms is an addition, the 'commutative property of addition' is applicable. In algebraic terms, this gives a + b = b + a.
Note that this does not apply for subtraction! (i.e. a − b ≠ b − a)
Commutative property of multiplication
When the operator of two terms is a multiplication, the 'commutative property of multiplication' is applicable. In algebraic terms, this gives a × b = b × a.
Note that this does not apply for division! (i.e. a ÷ b ≠ b ÷ a, when a ≠ b)
Associative property of addition
'Associative' refers to the grouping of numbers. The associative property of addition implies that, when adding three or more terms, it doesn't matter how these terms are grouped. Algebraically, this gives (a + b) + c = a + (b + c). Note that this does not hold for subtraction, e.g. (a − b) − c ≠ a − (b − c) (see the distributive property).
Associative property of multiplication
The associative property of multiplication implies that, when multiplying three or more terms, it doesn't matter how these terms are grouped. Algebraically, this gives (a × b) × c = a × (b × c). Note that this does not hold for division, e.g. (a ÷ b) ÷ c ≠ a ÷ (b ÷ c).
Distributive property
The distributive property states that the multiplication of a number by another term can be distributed. For instance: a × (b + c) = a × b + a × c. (Do not confuse this with the associative properties! For instance, a × (b + c) is not the same as (a × b) + c.)
Additive identity property
'Identity' refers to the property of a number that it is equal to itself. In other words, there exists an operation of two numbers so that it equals the variable of the sum. The additive identity property states that the sum of any number and 0 is that number: a + 0 = a. This also holds for subtraction: a − 0 = a.
Multiplicative identity property
The multiplicative identity property states that the product of any number and 1 is that number: a × 1 = a. This also holds for division: a ÷ 1 = a.
Additive inverse property
The additive inverse property is somewhat like the opposite of the additive identity property. When an operation is the sum of a number and its opposite, and it equals 0, that operation is a valid algebraic operation. Algebraically, it states the following: a + (−a) = 0. The additive inverse of 1 is (−1).
Multiplicative inverse property
The multiplicative inverse property entails that when an operation is the product of a number and its reciprocal, and it equals 1, that operation is a valid algebraic operation. Algebraically, it states the following: a × (1/a) = 1. The multiplicative inverse of 2 is 1/2.
Advanced Algebra
In addition to "elementary algebra", or basic algebra, there are advanced forms of algebra, taught in colleges and universities, such as abstract algebra, linear algebra, and universal algebra. This includes how to use a matrix to solve many linear equations at once. Abstract algebra is the study of things that are found in equations, going beyond numbers to the more abstract with groups of numbers.
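As a small taste of what 'using a matrix to solve many linear equations at once' looks like in practice, here is a sketch with NumPy; the two equations are made up for the example:

```python
import numpy as np

# Solve the pair of equations  x + 2y = 5  and  3x - y = 1  in one step.
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])  # coefficients of x and y in each equation
b = np.array([5.0, 1.0])     # right-hand sides
print(np.linalg.solve(A, b))  # [1. 2.], so x = 1 and y = 2
```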
Many math problems are about physics and engineering. In many of these physics problems time is a variable. Time uses the letter t. Using the basic ideas in algebra can help reduce a math problem to its simplest form making it easier to solve difficult problems. Energy is e, force is f, mass is m, acceleration is a and speed of light is sometimes c. This is used in some famous equations, like f = ma and e=mc^2 (although more complex math beyond algebra was needed to come up with that last equation).
Updated on 01/13/2015
In this activity, students will use basic concepts of perimeter and area to investigate a classic problem situation requiring maximization. They must decide how to build a garden fence to enclose the largest possible area.
Students will find the possible lengths and enter the expression to find the area.
Then, students will find which pair shows the maximum area and use the scatter plot to display the relationship between two sets of paired data (width and area). |
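One way a teacher preparing the activity might check the table of widths and areas is with a short brute-force search. The 24-metre perimeter below is an assumed value, since the activity itself leaves the amount of fencing open:

```python
perimeter = 24  # assumed amount of fencing, in metres

best_width, best_area = 0, 0.0
for width in range(1, perimeter // 2):   # width + length = half the perimeter
    length = perimeter / 2 - width
    area = width * length
    if area > best_area:
        best_width, best_area = width, area

print(best_width, best_area)  # 6 36.0 (the square encloses the largest area)
```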
The Excel IRR function has the syntax IRR(values, [guess]), with the arguments described below.
Values = A range of cash flows
Guess = Initial guess at IRR
Use of the Excel IRR Function
The Excel IRR function is used to calculate the internal rate of return of a series of cash flows. The internal rate of return is the discount rate which produces a net present value of zero.
The cash flows can be unequal but must occur at the end of each period.
The Excel IRR function uses an iterative (trial and error) process to find the internal rate of return. If the argument Guess is omitted, then the Excel IRR function assumes a value of 10%, and in most cases the function will find a solution. If a solution is not found then entering a different guess may help the function to produce an answer.
As the use of the Excel IRR function relies on finding the discount rate which produces a net present value of zero, the cash flows must contain at least one positive and one negative value.
Internal Rate of Return of an Investment Project
The Excel IRR function can be used to calculate the internal rate of return of an investment project.
Suppose for example, a business plans to invest 2,300 today (start of year 1) and will receive cash flows of 1800, 600, and 300 at the end of each of the following 3 years. As numeric values cannot be entered directly into the Excel IRR function, a spreadsheet needs to be set up as shown below.
In this example spreadsheet, the variable arguments are entered in cells B5 to B10, and the Excel IRR function is entered at cell B13 as =IRR(B5:B8,B10).
It should be noted that the original investment is negative as it represents a cash flow out.
By changing any of the variables in B5 to B10, the internal rate of return of the cash flows can be recalculated without having to enter the Excel IRR function each time. It should be noted that if a cash flow is zero it has to be entered as zero (0), it cannot be left blank otherwise the Excel IRR function will return the wrong answer.
In this example the internal rate of return of the project is given by the Excel IRR function as 11.96%.
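To see what that 'trial and error' search looks like outside Excel, here is a rough Python equivalent using bisection. The function names are invented for the sketch, and it assumes the cash flows change sign only once:

```python
def npv(rate, cash_flows):
    """Net present value of end-of-period cash flows at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=-0.99, high=10.0, tol=1e-7):
    """Find a discount rate giving zero NPV by repeatedly halving a bracket."""
    while high - low > tol:
        mid = (low + high) / 2
        if npv(low, cash_flows) * npv(mid, cash_flows) <= 0:
            high = mid  # the sign change lies in the lower half of the bracket
        else:
            low = mid   # the sign change lies in the upper half of the bracket
    return (low + high) / 2

# The cash flows from the example: -2,300 invested now, then 1,800, 600 and 300.
print(round(irr([-2300, 1800, 600, 300]) * 100, 2))  # about 11.96 (%)
```

For cash flows that change sign more than once, a search like this returns whichever root its starting bracket happens to contain, which is one way to see why Excel's answer in the next section depends on the initial guess.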
Excel IRR Function when Cash Flows Change Sign
To produce a meaningful answer, the Excel IRR function is expecting an initial investment (negative cash flow) followed by a series of cash receipts (positive cash flows). While it will provide solutions if the cash flows change sign multiple times throughout the period of the investment, they may or may not be meaningful solutions.
Consider the following example cash flows -200, 3000, -2,900, -400. The cash flows change sign throughout the term of the project. The Excel IRR function will produce a solution at 17.20% and 1,295.03% depending on where the initial guess is set.
Both answers are mathematically correct in that they produce a net present value of zero. However, from an investment point of view, if we ignore discounting and simply add the cash flows, they total -500. Whatever the discount rate, it should not be possible to lose money overall yet have a positive rate of return, so the answers are not meaningful and should not be used.
The Excel IRR function is one of many Excel financial functions used in time value of money calculations, discover another at the links below. |
A telescope nearly the size of planet Earth is needed to capture the image of a black hole. How do the Event Horizon Telescope project and the new Chirp algorithm attempt to solve this problem? ( ESA/Getty Images )
New Computer Algorithm May Help Astronomers Capture First Black Hole Images
Astronomers are already capable of measuring the mass of a black hole, but up to now, they have not yet produced images of one. MIT graduate Katie Bouman and colleagues, however, have developed a computer algorithm that may eventually change that.
Big telescopes are needed to observe black holes because those space regions are relatively small. The supermassive black hole that lies at the center of the Milky Way, for instance, is only about 17 times the diameter of the solar system’s sun. It is also very far away, at about 25,000 light-years.
Capturing the picture of a black hole would not be a problem if there were a telescope with a 10,000-kilometer diameter. However, that is almost the same diameter as that of the Earth, so it isn’t possible.
The Event Horizon Telescope project attempts to solve this dilemma by simultaneously taking data from different telescopes scattered around the globe. Through a method known as Very Long Baseline Interferometry, or VLBI, scientists can merge this data in a way that it seems to be looking at one gigantic telescope.
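The reason an Earth-sized aperture matters can be seen from the usual diffraction-limit estimate: angular resolution is roughly the observing wavelength divided by the aperture. The numbers below are assumed, round values (a 1.3 mm observing wavelength, a 100 m single dish, and the Earth's diameter as the longest baseline):

```python
import math

def resolution_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, roughly wavelength / aperture, in arcseconds."""
    return math.degrees(wavelength_m / aperture_m) * 3600

print(resolution_arcsec(1.3e-3, 100.0))     # about 2.7 arcseconds for one large dish
print(resolution_arcsec(1.3e-3, 12_742e3))  # about 2e-5 arcseconds for an Earth-sized array
```

Tens of microarcseconds is roughly the apparent size of the galactic-centre black hole implied by the figures quoted above, which is why an Earth-spanning combination of telescopes is needed.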
Scientists, though, still have other obstacles. The number of telescopes involved in the project is not enough, leaving large gaps in data.
The telescopes also use radio wavelengths, and radio waves do not produce good pictures. The largest radio-telescope dish, for instance, produces an image of the moon that is blurrier than one seen through an ordinary backyard optical telescope.
The Earth’s atmosphere can likewise slow down radio waves, which can cause massive differences in arrival times and result in many errors.
The new algorithm called Chirp, for Continuous High-resolution Image Reconstruction using Patch priors, is meant to solve this problem.
The algorithm can mathematically enhance the radio waves captured by the network of telescopes. It can filter out unnecessary data, such as atmospheric noise, to produce a more reliable image.
As for the sparse data, the algorithm uses other images from space to serve as a reference so gaps in data can be filled. This helps produce a sort of mosaic that matches the VLBI data.
“This research aims to overcome this gap in several ways: careful modeling of the sensing process, cutting-edge derivation of a prior-image model, and a tool to help future researchers test new methods,” said Yoav Schechner, from Israel’s Technion.
“[The researchers] mathematically merge into a single optimization formulation a very different, complex sensing process and a learning-based image-prior model.”
Source: Tech Times |
Flexible DNA Computer Finds Square Roots - Science News: The basic design incorporates two types of synthetic DNA in a test tube: single-stranded DNA molecules that float free like lone wolves and double-stranded ones that carry a small notch of open DNA called a “toehold.” The single-stranded DNA cruises solo until bumping into an entwined pair of DNA strands with a matching toehold. The lone wolves anchor onto that toehold by zipping, eventually booting off one of the original two strands. After the zipping and unzipping is done, a new double-stranded molecule and single-stranded lone wolf float around the test tube.
By precisely designing these DNA cascades, the team could squirt molecules representing 1001 in binary notation, or 9, into the mix and isolate a binary answer once the resulting reactions finished. In this case, that answer was a square root: binary 11, or 3. But the cascades, like the model trains, are customizable, and the team could have easily designed a circuit to do addition or subtraction instead. “It’s the simplicity that enables the complexity,” says Winfree, a bioengineer. |
In this blended lesson supporting literacy skills, students learn about the debate over slavery at the Constitutional Convention in 1787. Students develop their literacy skills as they explore a social studies focus on the changing perception of slavery in the new United States and the ways in which the debate over slavery affected the content of the Constitution. During this process, they read informational text, learn and practice vocabulary words, and explore content through videos and interactive activities. This resource is part of the Inspiring Middle School Literacy Collection.
Along the walls of Oceanographer Canyon, fish dart in and out of colorful anemone gardens and sea creatures send up plumes of sand and mud as they burrow. Bill Ryan, an oceanographer at Columbia University’s Lamont-Doherty Earth Observatory, watched the scenes through the windows of a mini research submarine in 1978 as he became one of the few people to explore the seafloor canyons that President Obama has now designated a national monument.
Ryan traveled more than a mile below the ocean surface to take samples in an effort to determine the geological history of the canyons in Georges Bank, a section of the continental slope off New England that has been home to some of the Atlantic Ocean’s most productive fisheries.
In the audio interview above, he talks about what he and his team saw and learned as they explored the canyons aboard the research submersible Alvin. Their findings led to breakthroughs in our understanding of the region’s petroleum potential and how continental margins form when continents split apart.
“When you’re down in these canyons with 6,000 to 9,000 feet of relief, underwater, it’s like being in a raft in the Grand Canyon looking up,” Ryan recalled. “We would fly with the Alvin through these gorgeous gardens but rarely see an outcrop of bedrock, which we wanted to sample because that could tell us about the age of the canyons.”
The trick to studying the rock layers, he learned from European colleagues who study terrestrial canyons to learn about the formation of mountain chains, was to follow the river at the bottom of the canyon.
In previous expeditions funded by the U.S. Navy, Lamont’s Bruce Heezen had discovered how tidal currents flow through the canyons, almost like rivers, pumping nutrients from the deep ocean up through the canyons to feed the fisheries of Georges Bank. Following the canyon floor was a tortuous route, but that tidal current had kept sediment from building up, revealing the layers of sandstones, siltstones and chalk that Ryan was looking for.
“In sampling those and looking at the fossils within them, we realized that we were back 120 million years and traversing up through time with a complete geological history of the evolution of Georges Bank all there, just like you see all the layers at the edge of the Grand Canyon,” Ryan said.
Clinging to the hard rocks were vibrant corals that have since provided more clues to the history of the region. The deep-sea corals live for hundreds of years and record in their annual growth rings changes in temperature and salinity of the deep, abyssal ocean, creating a rare and valuable record.
“The corals were beautifully colored – brilliant red, orange, yellow, and they were attached to the hard rocks and most abundant under the overhang of ledges,” Ryan recalled. “They’re very fragile. If there was any significant dredging within the canyons, they would be ripped up and destroyed.”
The three canyons that are part of the new, 4,913-square-mile Northeast Canyons and Seamounts Marine National Monument had been mapped in the early 20th century by the U.S. Coast and Geodetic Survey, but Ryan’s close-up view from the highly maneuverable mini sub revealed a far more intricate landscape with many gullies, similar to canyons on land.
The canyons, which the team realized had formed underwater, have been undergoing constant erosion caused by the tidal currents, sediments sweeping down the gullies, avalanches, freshwater from aquifers seeping through porous layers of rock, and also by marine life itself, Ryan said.
“One of the things we see looking out the window of the Alvin as you’re into an outcrop or on a muddy field or going through anemone gardens, are creatures coming out of the sediment, spouting sand and mud into the air because of what they’ve burrowed. You see fish diving in to eat these creatures. They make a splash and mud comes up into a cloud and the mud cloud drifts down slope, so it’s a constant biological erosion taking place,” Ryan said. “From time to time, an older canyon gets filled up and then it starts to erode again. So it’s a complex history of cut and fill, cut and fill.”
The team’s geologic history of the canyons also provided new knowledge about the region’s oil potential.
This was around the time of the oil crises, and the federal government had approved some commercial oil exploration in the region. If the sediment had built up at about the same rate over time, potentially carbon-rich, 60-80 million-year-old deltas could have matured into oil and gas. That wasn’t what Ryan’s team found, however. The scientists discovered that the continental shelf at the canyons they explored had been subsiding much faster millions of years ago and that its sinking and the sediment build-up had slowed through time.
“This Cretaceous formation was far too shallow to ever have been heated into petroleum, and that whole stage of early exploration was a wipe out,” Ryan said. “Three days of diving in one of these canyons put together a picture of Georges Bank that then when drilled was found to be the same.” |
Though crayons are more commonly associated with craft projects, the kid-friendly art supply is also a suitable subject for a variety of science projects. Crayons are an inexpensive material that can demonstrate a range of science concepts including changing states of matter or surface resistance. Adjust projects to reflect the age or academic level of the students or project participants.
Kindergarten and elementary school students explore the way that heat changes a crayon's physical properties with a crayon recycling project. In the project's most basic form, students peel the paper off crayons or crayon bits too small to use and place the pieces into the tins of a muffin tray. Left in the sun or on a warming tray, the crayons will melt into a single mass that can be cooled and popped out. Encourage students to experiment with mixing different brands of crayons, stirring up the colors or adjusting the heating mechanism to identify how the variables affect the process or final product.
Crayon Melting Points
Though it's true that all crayons can be melted down, it's not the case that all types or colors of crayons will melt in the same way or at the same temperature. Upper elementary or middle school students explore the variables that affect how a crayon melts with a controlled experiment. One variable that impacts melting points is color, as the pigments and dyes in crayons can impact their melting properties. Students place fragments of same-colored crayons in individual muffin tins and melt the crayons on a hot plate, measuring either the time until full melting or the temperature at the melting point of each color. Another option is to test different brands of crayons to determine which have higher or lower melting points.
Crayons and Paint
Crayons are made largely of paraffin wax, and the wax creates a barrier or coating that leaves the pigment on a surface like paper or wood. When you apply a layer of paint over a crayon drawing, the result depends on the type of paint. A student science project could explore the way different types of paint, such as oil-based, acrylic or watercolor, react to a crayon drawing. A student might predict which types of paints will cover the wax and which will be repelled. Another variation might be to test different types of crayons or alter the type of surface material to determine how the variables impact the relationship between the paint and the crayon.
Crayon Brand Comparison
A comprehensive and practical science project on crayons is to compare a range of brands of crayons across several different variables. The student must first identify the types of qualities that affect a crayon's performance or value. A student might opt to test strength, color vibrancy, melting point and rate of wear with regular use. A test of the crayons' strength might involve clamping the crayon to a table and then suspending weights from one end until it breaks. A randomized panel of participants might rate the crayons' vibrancy, while a melting test identifies which crayons melt the fastest. To test regular wear, the student might draw or color repeating shapes until the tip of the crayon wears down to its paper wrapper. After the data are collected, rate the crayons based on their performance across each category, as in the sketch below.
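One hedged way to turn such a multi-category comparison into a final ranking is simply to score each brand in each category and total the scores. The brands, categories and numbers below are invented placeholders, not measured data.

```python
# Each brand gets a 1-5 rating per category (higher is better); values are made up.
scores = {
    "Brand A": {"strength": 4, "vibrancy": 5, "melt_resistance": 3, "wear": 4},
    "Brand B": {"strength": 3, "vibrancy": 4, "melt_resistance": 5, "wear": 3},
    "Brand C": {"strength": 5, "vibrancy": 3, "melt_resistance": 4, "wear": 5},
}

totals = {brand: sum(ratings.values()) for brand, ratings in scores.items()}
for brand, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
    print(f"{brand}: {total} out of 20")
```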
- "Scholastic"; Crayon Creations; Ellen Booth Church;
- Crayola.com: Can We Help
- Crayola.com: Melting Crayons
- "Gadgetology: Kitchen Fun with Your Kids, Using 35 Cooking Gadgets for Simple Recipes, Crafts, Games and Experiments"; Pam Abrams; 2007
- Jupiterimages/Goodshoot/Getty Images |
In our daily experience, most of us deal with three phases of matter: solid, liquid, and gas. A fourth, high-energy phase of matter, plasma, occurs in high energy processes as near as a fire or as far away as the core of a star. For decades, the existence of a fifth, low-energy form of matter, known as Bose-Einstein Condensates (BECs), was only a theoretical possibility. In 2001, the Nobel Prize for Physics went to Eric Cornell, Wolfgang Ketterle, and Carl Wieman, who used lasers, magnets, and evaporative cooling to bring about this fascinating new phase of matter.
BECs have strange properties with many possible applications in future technologies. They can slow light down to the residential speed limit, flow without friction, and demonstrate the weirdest elements of quantum mechanics on a scale anyone can see. They are effectively superatoms, groups of atoms that behave as one.
The theory of BECs was developed by Satyendra Nath Bose and Albert Einstein in the early 1920s. Bose combined his work in thermodynamics and statistical mechanics with the quantum mechanical theories that were being developed, and Einstein carried the work to its natural conclusions and brought it to the public eye. At the time, none of the necessary technology was available to make BECs in the lab: cryogenics was extremely limited, and the first laser wasn't even built until 1960. The fine control allowed by modern computers was also a prerequisite. Because of all of these technological hurdles, it wasn't until 1995 that experimenters were able to force rubidium atoms to form this type of condensate.
Phases of Matter
We can distinguish among the phases of matter in several ways. On the most elementary level, solids have both fixed volume and fixed shape; liquids have fixed volume, but not fixed shape; and gases have neither. Solids have stronger intermolecular bond structure than their corresponding liquids, which in turn have stronger intermolecular bond structure than gases. We can also differentiate between phases of matter by considering energy levels. Solids have the lowest energy levels (corresponding with the lowest temperatures), while liquids and gases have increasingly higher levels. At the top end of this scale, we can add plasmas, which are energetic enough to emit all kinds of energy in the form of heat and photons.
Bose-Einstein Condensates represent a fifth phase of matter beyond solids. They are less energetic than solids. We can also think of this as more organized than solids, or as colder -- BECs occur in the fractional micro-Kelvin range, less than millionths of a degree above absolute zero; in contrast, the vacuum of interstellar space averages a positively tropical 3 K. BECs are more ordered than solids in that their restrictions occur not on the molecular level but on the atomic level. Atoms in a solid are locked into roughly the same location in regard to the other atoms in the area. Atoms in a BEC are locked into all of the same attributes as each other; they are literally indistinguishable, in the same location and with the same attributes. When a BEC is visible, each part that one can see is the sum of portions of each atom, all behaving in the same way, rather than being the sum of atoms as in the other phases of matter.
Wavefunctions and Quantum Spin
At the very beginning of the study of quantum mechanics, it was discovered that light could behave either as a wave or as a particle, when before it had only been treated as a wave. This discovery led Louis de Broglie to theorize that perhaps matter could be treated as a wave, and not just as a particle. This theory was tested and found to be true: matter behaves as both a wave and a particle, depending on how it is observed.
Each atom has a wavefunction that describes its behavior as a wave. This wavefunction can be used to determine the probabilities that the atom will be in a given place or have a certain momentum or other useful properties. Each particle can also be determined to have a spin. While many physics terms mean something other than their everyday usage, "spin" seems to be a behavior that acts just as if the particle is spinning around an axis.
The amount of spin a particle can have depends on the type of particle. Fermions (like electrons) can have spin values that are +/- 1/2, +/- 3/2, +/- 5/2, etc.; bosons (like some isotopes of hydrogen and helium) have spin values that are whole numbers. Fermions obey the Pauli Exclusion Principle, whereas bosons do not. Bosons and fermions can both be composite particles; they don't have to be "indivisible" particles. The same physics will hold for bosons such as photons and K mesons as will hold for hydrogen and helium atoms, as long as the atoms are close to their ground state.
The Pauli Exclusion Principle (which was determined experimentally) states that no two fermion particles can occupy the same state at the same time. They must have some way of being distinguished, whether by location, spin state, or some other property. That means that if one fermion is in a local ground or minimum energy state, the next fermion in the area must be in a higher energy state. For bosons, however, the Pauli Exclusion Principle is irrelevant by definition -- so all of the bosons can be in the same state at the same time. They don't have to be distinguishable from each other. When this happens, a Bose-Einstein Condensate is created.
Creating a Condensate
Because of the specialized conditions under which they can exist, Bose-Einstein Condensates have only been created in laboratories. First, an experimenter takes bosons that have been purified of other elements and puts them in a vacuum. Popular choices for these bosons include specific isotopes of atoms of helium, sodium, rubidium, and hydrogen. Not all isotopes are bosons, and only bosons can form a BEC. The initial method of making a rubidium condensate is the most straightforward, and further methods have been refinements of the same general principles of cooling.
The atoms are first cooled to fractions of a degree Kelvin. They need to be virtually motionless in order to stay in the BEC ground state. Then they are put into a magnetic trap, keeping them in a limited area. The magnetic trap is arranged with eight magnets in what is known as a quadrupole configuration. The magnets we are most familiar with in daily life are dipole magnets: a two-ended field of magnetization with one polarity at one end and the opposite polarity at the other end. A quadrupole configuration looks more like a plus sign, with the opposing points having the same polarity.
When the atoms are in a quadrupole magnetic trap, the way they interact is primarily through their spin; higher order considerations such as magnetostatic interactions are limited by the trap. A laser with a precisely calculated wavelength shines on the atoms, and as the light scatters off the atoms, it takes with it more energy than it brought into the process. The Doppler shift from the higher energy atoms is calculated so that they "see" the laser of the right color, and the atoms that are already lower energy stay unexcited. The energy state of the atoms is, of course, directly related to how quickly they are moving, so the first wavelength used is selected for the fastest atoms present.
The laser's wavelength must be very precisely tuned to the atom. One of the hardest problems physicists face in making BECs is keeping the laser tuned to the right frequency despite outside interference; even a car passing by on the road outside a lab may cause enough vibration to knock the laser out of its desired frequency. To make things worse, as the average speed of the atoms decreases and their energy level goes down, the desired Doppler shift changes, so the laser must be retuned to match the new "high" energy atoms. In order to account for motion from all directions, the lasers shine in on the atoms from opposite points on all three axes. Further, the magnetic trap is combined with an optical trap that pushes atoms back towards the center if they stray too far. This laser set-up is known as "optical molasses."
The atoms are then cooled further through what is known as evaporative cooling. Essentially, evaporative cooling allows the faster, more energetic atoms to escape from the trap, leaving only the slowest, coolest, least energetic atoms behind. Of all the materials used, rubidium was the easiest to make into a BEC because its atoms are the largest -- they achieve low velocities at the highest temperature (energy) because mass relates to energy (hydrogen was the hardest BEC to form, but researchers think it may have superior applications because of its small size). When the atoms get to the point where only ground state atoms are left, they coalesce into a Bose-Einstein Condensate, which behaves like a superatom. The first condensate consisted of 2000 atoms; some condensates have been created that are the size of a dime (several million atoms), but still behave as one giant atom.
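For a rough sense of how cold "cold enough" is, the ideal-gas Bose-Einstein transition temperature can be estimated from the standard formula T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3). The sketch below assumes a typical experimental atom density of about 10^20 atoms per cubic metre for rubidium-87; that density is an assumption for illustration, not a figure from this article.

```python
import math

# Ideal-gas estimate of the BEC transition temperature:
#   T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))**(2/3)
hbar = 1.0545718e-34            # J*s
k_B = 1.380649e-23              # J/K
m_rb87 = 86.909 * 1.660539e-27  # kg, mass of a rubidium-87 atom
zeta_3_2 = 2.612                # Riemann zeta(3/2)
n = 1e20                        # atoms per cubic metre (assumed typical density)

T_c = (2 * math.pi * hbar**2 / (m_rb87 * k_B)) * (n / zeta_3_2) ** (2 / 3)
print(f"T_c is roughly {T_c * 1e9:.0f} nanokelvin")   # a few hundred nK
```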
Properties and Future Applications
Most research into Bose-Einstein Condensates serves as "basic" research -- that is to say, it is more concerned with knowing more about the world in general than with implementing a specific technology. However, there are several potential uses for BECs. The most promising application is in etching. When BECs are fashioned into a beam, they are like a laser in their coherence. That is to say, both a laser and a BEC beam run "in lock step," guaranteeing that an experimenter can know how a part of the beam will behave at every single location. This property of lasers has been used in the past for etching purposes. A BEC beam would have greater precision and energy than a laser because even at their low kinetic energy state, the massive particles would be more energetic than the massless photons. The major technological concerns with a BEC beam would be getting a clean enough environment for it to function repeatedly and reducing the cost of BEC creation enough to use BECs regularly in beams. However, BEC beams or "atom lasers" could produce precisely trimmed objects down to a very small scale -- possibly a nanotech scale. Their practical limits will be found with experimentation.
In some ways, the atom laser works as the opposite of a laser. A laser can produce more photons from the atoms at hand, but an atom laser can only deal with the number of atoms it starts with. Rather than being knocked into an excited state, as atoms that emit laser photons are, BEC atoms are cooled down to the ground state. Unlike a laser beam, an atom laser beam could not travel far through air and would fall due to gravity. However, these differences can be calculated and accounted for in the future uses of the atom laser.
One of the most commonly known properties of BECs is their superfluidity. That is to say, BECs flow without interior friction. Since they're effectively superatoms, BECs are all moving in the same way at the same time when they flow, and don't have energy losses due to friction. Even the best lubricants currently available have some frictional losses as their molecules interact with each other, but BECs, while terribly expensive, would pose no such problem.
One of the problems physicists run into when teaching quantum mechanics is that the principles are just counter-intuitive. They're hard to visualize. But videos of BEC blobs several millimeters across show wave-particle duality at a level we can comprehend easily. We can watch something that acts like an atom, at a size we could hold in our hands. MIT researchers have produced visible interference fringe patterns from sodium BECs, demonstrating quantum mechanics effects on the macroscale. That alone is worth notice.
Perhaps most interestingly, BECs have been used to slow the speed of light to a crawl -- from 186,282 miles per second (3 x 10^8 m/s) in a vacuum to 38 miles per hour (17 m/s) in a sodium BEC. No other substance so far has been able to slow the speed of light within orders of magnitude of that speed. Although so far this discovery has not been applied to any technological problems, researchers at Harvard suggest that it might make possible revolutions in communications, including possibly a single-photon switch.
The Bose-Einstein Condensate is to matter as the laser is to light -- the analogy is precisely that simple. It took twenty years from the invention of the laser until its technological applications began to take off. At first, lasers were considered too difficult to make to ever find use in everyday applications; now, they're everywhere. The characteristics of BECs, specifically their response to sound and other disturbances, are still under investigation, but they hold the promise of many curious developments to come.
Agriculture, forestry, and fishing
Arable land covers nearly one-third of Cuba. The soil is highly fertile, allowing up to two crops per year, but the highly variable nature of annual precipitation has historically plagued agriculture. Subterranean waters are important for irrigation. A small but increasing share of crops is produced on private land or by cooperatives that are not owned by the state. Under Raúl Castro’s rule, some private farmers have been permitted to cultivate unused government land to increase food production.
The Cuban economy has depended heavily on the sugarcane crop since the 18th century. Vast areas have been leveled, irrigated, and planted in sugarcane, and yields per acre have increased with the application of fertilizers. Sugar output, except in years of drought or sugarcane blight, increased after the introduction of mechanized harvesters in the early 1970s but plunged after the breakup of the Soviet Union in 1991. Many of the island’s sugar mills closed, and sugar production continued to decline in the early 2000s.
Apart from sugarcane, the chief crops are rice (the main source of calories in the traditional diet), citrus fruits (which are also an important export), potatoes, plantains and bananas, cassava (manioc), tomatoes, and corn (maize). Fruit trees include such citrus varieties as lemon, orange, and grapefruit; some species of the genus Annona, including the guanábana (soursop) and anón (sweetsop); and avocados and papayas. Tobacco, traditionally the country’s second most important export crop, is grown mainly in the Pinar del Río area in the west and also in the centre of the main island. Coffee grows mainly in the east, where Guantánamo city is known as the “coffee capital” of Cuba. Other products include cacao and beans. Cuba imports large amounts of rice and other foodstuffs, oilseeds, and cotton.
Cattle, pigs, and chickens are the main livestock. The number of cattle increased in the 1960s, as veterinary services advanced and irrigation systems improved, but decreased over subsequent decades. Brahman (zebu) cattle, the dominant breed, thrive in the tropical climate but yield low amounts of milk. Holstein cattle are more productive but prone to illness in the Cuban environment. Cuban farmers raise approximately half as many pigs as cattle.
The supply of Cuban timber is limited. Pine trees are found throughout the country, and durable mahogany is of potential economic importance, while ebony (Diospyros) and granadilla (cocus, or West Indian ebony; Brya ebenus) provide beautiful and valuable wood.
Fishing resources are significant on the coast and at sea. Among the types of fish caught locally are tuna, hake, and needlefish. The overall volume of fish, crustaceans, and other seafood landed increased sevenfold during the period 1959–79, largely because the government, with the help of Soviet financing, invested heavily in fishing vessels and processing plants. Landings subsequently decreased from the late 1980s to the late 1990s, after the breakup of the Soviet Union caused reduced funding. By the early 21st century, Cuba had diversified its fishing activities to include aquaculture (sea bream, sea bass, tilapia, and carp). It also increased the number of processing plants, especially for shrimp and lobster, with foreign investment from Canada and European Union countries.
Resources and power
Domestic petroleum and natural gas deposits supply a growing portion of the country’s needs, but the majority is met by imports from Mexico and Venezuela. In fact, since the 1990s Cuba has received free oil from Venezuela in exchange for sending thousands of its doctors to treat Venezuela’s poor. In the mid-2000s Venezuela funded the renovation of a dilapidated oil refinery in the Cienfuegos area of Cuba. The refinery has the capacity to refine hundreds of thousands of barrels of the oil imported from Venezuela. Peat, concentrated in the Zapata Peninsula, is still the most extensive fuel reserve. Nickel, chromite, and copper mines are important to Cuba, and beds of laterite (an iron ore) in the Holguín region have considerable potential. Nickel ore, which also yields cobalt, is processed in several large plants, and Cuba is a world leader in nickel production. There are also major reserves of magnetite and manganese and lesser amounts of lead, zinc, gold, silver, and tungsten. Abundant reserves of limestone, rock salt, gypsum, kaolin (china clay), and marble are found on Juventud Island.
Industrial production accounts for slightly more than one-tenth of the gross domestic product (GDP). Tobacco, processed foods (including sugar), and beverages are the most valuable products. Chemical products, transport equipment, and machinery are also important.
The banking system has been operated by the state since 1966 through the National Bank of Cuba, which sets interest rates, regulates foreign exchange, and issues currency (the Cuban peso and the convertible peso). There are no stock exchanges. Foreign investment was prohibited until 1982, when a joint-venture law was enacted. The government has had increasing success at attracting private capital and foreign-owned commercial banks since the 1990s, especially with European and Canadian investors; however, U.S. investment has been withheld because it violates the Helms-Burton law enacted by the U.S. Congress in 1996. |
Together as One!
The Beginning of Slavery
Although our Declaration of Independence states that all men are created equal, that didn't apply to slaves. Slaves were not treated like human beings. Most slaves were physically abused, which included whippings and other forms of punishment. Being a slave also took an emotional toll on a person. Owners yelled rude racial slurs and inappropriate names. Slaves were overworked and sleep-deprived. Most slaves worked from the time the sun rose to the time the sun set, with very few breaks throughout the day. Slaves would be beaten if they were caught trying to read or write. Slave owners wouldn't allow them to attend school because they thought that if slaves had more knowledge, they could find ways to escape. Slave women were often forced to have children to produce more help. Slaves were considered property, meaning slave owners could do whatever they wanted with them, including punishments of their choice. If a slave was caught trying to escape, they could be severely beaten or even put to death. Could you ever imagine living this way?
It wasn't until 1865 that slavery was outlawed in America. The 13th Amendment made slavery illegal in the United States. There were many events leading up to the abolition of slavery. In 1807, the British Parliament outlawed Britain's participation in the African slave trade, and in 1808 the United States did the same. In 1822, segregated public schools for African Americans opened, allowing them to get an education. Anti-slavery groups were forming and protesting against the government. The North was strongly against slavery but allowed it to continue in the South. President Lincoln hated slavery but allowed it to go on in the South as long as it didn't spread to other states. Conflicts arose which caused our country to go to war against itself. Lincoln issued the Emancipation Proclamation, freeing all slaves.
My Plan to Improve Race Relations
The legacy of slavery has contributed to these racial tensions. Some African Americans believe that slavery is the reason for police brutality. We like to say we have moved on from slavery, but have we? Slavery and legal racial segregation no longer exist in America, but that doesn't mean there will never be racial tension. I believe we need to fix this problem. All races should be comfortable and feel safe living in America, where everybody is created equal. There should never be a question about whether police are being violent toward African Americans. Together as one!
I have a plan to improve race relations in the United States. I want to create a museum honoring the great accomplishments of all our cultures throughout history. The brave people who suffered through slavery will be honored at this museum, as well as other important racial movements in America. There will be a special exhibit to honor abolitionists and people who contributed to the outlawing of slavery, for example Harriet Tubman, John Brown, President Lincoln and more. It will also recognize more recent events like Rosa Parks refusing to give up her bus seat, Martin Luther King Jr. during his anti-segregation movement, and even our first African American president! Once the museum is finished, we will try to get as many schools as possible to make the museum a required field trip. I believe that racial tension starts at a young age. Little children don't know what is right and what is wrong, so discriminating against other cultures is common. If we bring children in 1st or 2nd grade here, we can teach them that all men are created equal. I believe that this museum is a great way to honor and teach African American culture. The museum will be called "The African American Legacy."
Harriet Tubman was an African American abolitionist who helped to eliminate slavery. She helped slaves escape using the Underground Railroad.
President Lincoln sent us into war in hopes of keeping America together. He then issued the Emancipation Proclamation, outlawing slavery.
Martin Luther King Jr.
Martin Luther King Jr. didn't help to end slavery, but he did protest African American segregation. He felt that whites and blacks shouldn't have to be separated. He delivered the "I Have a Dream" speech, which is one of the most famous speeches. |
What signs indicate that testing for Primary Immune Deficiency should be considered?
1. Severe fatigue all the time
2. Two or more pneumonias within one year
3. Two or more months of antibiotics with little effect
4. Failure of an infant to gain weight or grow normally
5. Recurrent, deep skin or organ abscesses
6. Persistent thrush in the mouth or elsewhere on the skin, after age one
7. Need for intravenous antibiotics to clear infections
8. Two or more deep-seated infections such as meningitis, osteomyelitis, cellulitis or sepsis
What causes immune deficiency?
Mycoplasma infections and vitamin deficiencies cause most cases of immune deficiency. A woman who had immune deficiency, no hair and infertility got pregnant, grew hair within one month of using a TENS unit, and her multiple infections went away. Read the story in our e-book.
What types of primary immune deficiency diseases exist?
Primary immune deficiency diseases cause increased susceptibility to infections as well as other problems. For simplicity, these diseases can be categorized into four groups according to what part of the immune system is affected:
1. ANTIBODY DEFICIENCIES
Antibodies are proteins made by specialized white blood cells: B cells (B lymphocytes) and plasma cells. The function of antibodies is to recognise infectious agents so that they can be blocked. Two examples of antibody deficiencies are:
- Common Variable Immune Deficiency (CVID) is the most common form of antibody deficiency, usually presenting with recurrent chest and sinus infections in childhood or early adulthood, although most cases are diagnosed in adults. Early recognition can prevent permanent damage to the lungs called bronchiectasis.
- The best test for CVID is IgG level and IgG subclass testing.
- X-linked Agammaglobulinaemia can present in infancy, later childhood or adulthood. Infants with this deficiency develop recurrent pus producing infections of the ears, lungs, sinuses and bones and can get infections in the bloodstream and internal organs. They are also susceptible to certain viruses such as hepatitis and polio.
2. COMBINED IMMUNE DEFICIENCIES
T cells (T lymphocytes) are specialized white blood cells that are critical to a healthy immune system. People who lack T cells also tend to have weak antibody defenses, and this is called combined immunodeficiency. These disorders are very rare and hereditary. The most common is X-linked Severe Combined Immunodeficiency (SCID) which is due to a defective gene for T cell growth. Patients are usually diagnosed within the first year of life and require gene therapy or bone marrow transplantation to survive.
3. COMPLEMENT DEFICIENCIES
The complement system consists of a group of proteins that attach to antibody-coated foreign invaders like bacteria and viruses. People with complement deficiencies may develop antibodies that react against the body's own cells and tissues. The most common of these deficiencies is C2 Deficiency. This defect can cause an autoimmune disease such as Systemic Lupus Erythematosus (SLE) or can result in severe infections such as meningitis. The illnesses usually appear in childhood or in early adulthood.
4. PHAGOCYTIC CELL DEFICIENCIES
Phagocytes include white blood cells (neutrophils and macrophages) that engulf and kill antibody coated foreign invaders. Phagocytes can be defective either in their ability to kill pathogens or in their ability to move to the site of an infection. In either case, the defect results in increased infections. The most severe form of phagocytic cell deficiency is Chronic Granulomatous Disease which is an inherited deficiency of molecules needed by neutrophils to kill certain infectious organisms. People with chronic granulomatous disease develop frequent and severe infections of the skin, lungs and bones and develop localised, swollen collections of inflamed tissue called granulomas.
Improved therapy can lead to a better and longer life
Research has led to improved therapy for people with primary immune deficiency diseases.
Treatment options include:
The use of antibiotics to treat and prevent infections and an action plan for early management of infections are key elements in the treatment of primary immune deficiency diseases.
Immune system molecules, such as interferon gamma, can be used to improve immune function and reduce infection in primary immune deficiency diseases.
III. IMMUNOGLOBULIN REPLACEMENT THERAPY
One of the most effective and most commonly used treatments for primary immune deficiency diseases is immunoglobulin replacement therapy, to replace antibody levels. This can be injected into the vein (intravenous immunoglobulin or IVIG) about once a month, or administered at home in certain cases using injections under the skin (subcutaneous immunoglobulin or SCIG). These products must be restricted because of limited supply and doctors need to follow specific guidelines to ensure that the product goes to those most in need.
To ensure future supplies of immunoglobulin replacement therapy people can assist by regularly donating plasma to the Australian Red Cross Blood Service. To find out how, where and when you can donate plasma, phone the Australian Red Cross Blood Service on 13 14 95.
IV. BONE MARROW TRANSPLANTATION
For patients with combined immune deficiency diseases, transplantation of bone marrow cells from a family member with identical human leukocyte antigens (HLA) can result in normal immune function. Tissue typing of human leukocyte antigens (HLA) greatly decreases the risk of rejection and of graft versus host disease (GVHD).
V. OTHER TREATMENTS
There are several other treatments available for disorders associated with primary immune deficiency diseases.
A new study may explain these strange looking ring patterns on Venus’s surface. These geological markers are called coronae and occur when plumes of hot molten rock rise up and disturb the cooler material above it. The rigid surface is then cracked and molten rock can flow through cracks as magma. Scientists did tests in which they cooled particles of silica in fluid from above to simulate the way Venus’s surface cooled over time; next, they heated up the fluid below, observing the ways in which it rose and disturbed the solid above. Scientists also see evidence of plume-induced subduction, which occurs when the cool, solid rock falls into the cracks made by the hot plumes.
This is very interesting because previously, scientists were fairly certain that Venus had little to no geological activity anymore. It was compared to Earth in many ways, but never in terms of geological processes. This opens up the door for more investigation into how Venus may be more similar to Earth than we originally suspected. This can also teach us more about our planet’s past and formation; Earth could have undergone similar processes because it’s about the same size as Venus. |
Warming. What Evidence?
The most often cited evidence that the surface of the earth is warming is the global record resulting from a combination of many weather station and ship measurements. Figure 3.1 gives the latest version, supplied by the Climatic Research Unit, University of East Anglia (1).
Figure 3.1 Global surface air temperature, as compiled by the University of East Anglia (1)
This graph is reproduced, in one form or another, no less than nine times in Chapter 2 “Observed Climate Variability and Change” of the 2001 IPCC Report (2). It is also reproduced frequently in press articles about global warming. It is therefore important to discuss in some detail whether its claim to represent the mean surface temperature of the earth can be justified.
Temperature records of local weather were established soon after the invention of thermometers and temperature scales in the eighteenth century. Early thermometers were unreliable and difficult to calibrate (3, 4, 5). Liquid-in-glass thermometers required a capillary tube of uniform diameter and a clearly divided scale. Glass is a cooled liquid which slowly contracts over time, so liquid-in-glass thermometers read high if they are not frequently calibrated. This is true even of modern thermometers with improved glass.
The earlier measurements, up to 1900 or so, would have been made on thermometers calibrated in single degrees (usually Fahrenheit) and made from ordinary glass. One possible reason for the rise in temperature shown between 1910 and 1940 is the difficulty of calibrating thermometers in remote parts of the world during the two world wars.
The early measurements were made mainly near large towns in the Northern Hemisphere. Even today, measurements are not available for many regions of the earth’s surface, particularly those remote from cities and buildings.
The instrumentation used to measure temperature in weather stations has hardly changed for over a century. Figure 3.2 shows the equipment currently in use at the airport on the Isle of Man. On the left is the apparatus for measuring relative humidity, dependent on the properties of human hair, invented by Benjamin Franklin in the later 18th century. Six's maximum and minimum thermometer and the wet and dry bulb thermometers go back to the same era. The equipment is contained in a Stevenson screen, invented in the 19th century by Thomas Stevenson, the lighthouse engineer and father of the author Robert Louis Stevenson. At that time thermometers were often placed in direct sunlight, or on the walls of buildings, and the change to general use of the Stevenson screen took many years.
Figure 3.2 Equipment for weather monitoring currently in use at the airport on the Isle of Man.
The screen is painted white, intended to minimise heating by the sun and loss of heat to the atmosphere at night. However, a white painted surface is not a perfect reflector or a zero emitter. White paint absorbs 30 to 50% of the sun's radiation, so solar heat contributes to the temperature measured inside the screen, particularly on a still day with little air circulation. This contribution will increase with time as the paint deteriorates, gathers dust, or the louvres develop cobwebs. The emissivity of a white painted surface is 85 to 95%, almost as high as black paint, so on a still cloudless night the box will cool below the air temperature and influence the thermometer.
The air entering the screen will have exchanged heat with any neighbouring buildings, roads, vehicles and aircraft. It will be affected by the locality, urban or rural, and by any shelter around the site. All these properties can change over time, so influencing the measurements. Most of the changes will tend to increase the measured temperature.
In order to read the instruments, the door must be opened, so changing the air properties within the box. A change to the use of thermistors, which has taken place recently in some stations, will alter this. A change to automatic recording will inevitably increase the measured temperature, since there will no longer be a need to open the box to read the instruments.
Temperature records in a particular locality will not agree with those from other localities for a whole variety of reasons: elevation, proximity to a coast, wind conditions, and so on. Also, weather stations come and go; there are very few with an uninterrupted record over very many years. In order to obtain a combined temperature record for many stations, the procedure described by Hansen and Lebedeff (6) is used. A Mercator map of the world is divided into latitude/longitude squares; Figure 3.1 employed 5° x 5° squares. Then, for each month of each year in the time sequence, acceptable weather station records are identified and a mean of their monthly averages calculated. This average is then subtracted from the average of the means of the records for the same month in the same square over a reference period (currently 1960-1990). The result is the temperature anomaly for that month. Figure 3.1 plots annual, globally averaged temperature anomalies calculated in this way, not individual or averaged temperature readings.
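A highly simplified sketch of this anomaly-and-gridding bookkeeping is given below. The station names, coordinates, temperatures and baseline values are invented, and the real procedures (including the reference-station combination used by Hansen and Lebedeff) involve many refinements omitted here.

```python
from collections import defaultdict

# Invented station data: (station, latitude, longitude, year, month, temperature in deg C)
records = [
    ("A", 52.3, 1.2, 1975, 1, 3.1), ("A", 52.3, 1.2, 1999, 1, 4.0),
    ("B", 53.8, 2.9, 1975, 1, 2.5), ("B", 53.8, 2.9, 1999, 1, 3.2),
]
# Assumed station means for the same month over the reference period
baseline = {("A", 1): 3.3, ("B", 1): 2.6}

def grid_cell(lat, lon):
    """Index of the 5-degree x 5-degree box containing the station."""
    return (int(lat // 5), int(lon // 5))

cell_anomalies = defaultdict(list)
for station, lat, lon, year, month, temp in records:
    anomaly = temp - baseline[(station, month)]          # departure from the reference mean
    cell_anomalies[(grid_cell(lat, lon), year, month)].append(anomaly)

for key, values in sorted(cell_anomalies.items()):
    print(key, round(sum(values) / len(values), 2))      # grid-box anomaly for that month
```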
The intermittent changes in the record (Fig. 3.1) and the irregular behaviour of neighbouring 5° x 5° grid boxes (7) are inconsistent with a steady global temperature influence, such as might result from changes in the atmosphere, and point to mainly local, surface effects as their cause. The rise in the combined temperature anomalies (Fig. 3.1) from 1910 to 1940, and the fall from 1940 to 1975, cannot be explained as a result of the greenhouse effect.
The use of annual temperature anomaly averages in 5° x 5° squares is illustrated in Figure 3.3 (8) which shows the temperature changes for the winter months of December, January and February between the years 1976 to 1999 for each square. The amounts are indicated by the size of the red dots (for a rise) and blue dots (for a fall). They show the grids where suitable measurements are currently available. The figure also shows that the rise in global temperature from 1976 to 1999, as indicated in Figure 3.1, was largely due to rises in temperature of weather stations in the USA, Northern Europe and the former Soviet Union, for the winter months. There was no temperature rise over this period for weather stations in the Arctic, or Antarctic, and only minimal rises for the Southern Hemisphere, or for the oceans.
Figure 3.3. Changes in the combined weather station and ship temperatures for the winter months December, January and February for the years 1976 to 1999 (8). The size of the dots shows the amount of change; red dots indicate a rise, blue dots a fall.
The rise in temperature shown by the global surface temperature record (Fig. 3.1) over the years 1976 to 1999 was therefore mainly due to improved winter heating conditions around land-based weather stations in the Northern Hemisphere.
The assumption that a temperature record from a city or an airport can be considered to represent the temperature behaviour of a surrounding forest, farmland, mountain area, or desert is absurd. Table 3.1 (9) shows that most of the energy given off by combustion of fossil fuels is released in the neighbourhood of urban areas.
The mean energy emitted by combustion of fossil fuels over the whole world is 0.02 Watts per square metre (W/sq m). However, this energy is emitted in a highly irregular manner. Over the USA the mean figure is 0.31 W/sq m, and over California 0.81 W/sq m. If energy emission is assumed proportional to population density, then the figure for San Francisco is 89.24 W/sq m. For Germany the average is 1.23 W/sq m, and for the industrial area of Essen, 221.65 W/sq m. For New Zealand the average is 0.8 W/sq m, and for the city of Auckland, 28.2 W/sq m.
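The global figure can be checked on the back of an envelope by dividing an annual fossil-fuel energy release by the number of seconds in a year and by the earth's surface area. The energy figure used below is an assumed round number of roughly the right order of magnitude, not a value taken from the reference cited in the text.

```python
annual_energy_J = 3e20          # assumed world fossil-fuel energy release per year, in joules
seconds_per_year = 3.156e7
earth_surface_m2 = 5.1e14       # total surface area of the earth

power_density = annual_energy_J / seconds_per_year / earth_surface_m2
print(f"about {power_density:.3f} W/sq m")   # close to the 0.02 W/sq m quoted above
```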
These figures should be compared with the `global warming' effect claimed by the IPCC (10) for the build-up of greenhouse gases in the atmosphere since the year 1750: 2.45 W/sq m. It is clear that the predominant location of weather stations close to cities and airports can lead to "global warming" from local energy emissions which can considerably exceed the claimed effects of greenhouse gases.
The surface record (Fig. 3.1) violates a basic principle of mathematical statistics, which asserts that a fair average of any quantity cannot be made without a representative sample. Table 2 (11) shows the distribution of climate zones on the earth's surface. Weather stations are situated almost entirely in the "urban area" category, only 1% of the earth's surface, where energy emission is many times the amount claimed to be caused by the greenhouse effect.
A comparison between temperature measurements made in regions remote from human habitation and those made by weather stations has been displayed dramatically by Mann and Bradley (12, 13), and promoted by the IPCC (14), as shown in Figure 3.4.
Figure 3.4 has several interesting features. The blue curve represents an amalgamation of "proxy" temperature measurements, which mainly involve deductions from the width of tree rings. These are highly inaccurate, since tree rings only show growth in the summer and so indicate only summer temperatures. The representativity of the samples is even worse than with the weather station data, but at least the measurements are all far from human activity.
Mann and the IPCC claim that Figure 3.4 proves that the weather station measurements are influenced by "anthropogenic" factors, and, of course, this is probably true. But they fail to see that the "anthropogenic" effect is caused by local energy emissions, not by changes in the atmosphere.
Figure 3.4. Comparison of Northern Hemisphere temperature record from proxy measurements (in blue) with weather station measurements (in red): from Mann and Bradley (12,13 ) and (14). Note the additional temperature rise from proximity of weather stations to urban areas. The gray region represents an estimated 95% confidence interval.
The blue curve, for proxy measurements, shows a recent increase, though within the error estimates from past measurements. Some of this increase can be attributed to enhanced tree growth from the increased carbon dioxide.
Proxy results do not always confirm a warming trend. As an example, see Figure 3.5, which gives temperatures deduced from tree ring measurements in Northern Siberia (15).
Figure 3.5, in contrast to Figure 3.4, shows the `Medieval Warm Period' (900 to 1100), the `Little Ice Age' in the early 1800s, and the peak in the 1940s also evident in Figure 3.1, but it shows that, apart from these sporadic effects, there was no overall "global warming" for the past 2000 years.
The weather station measurements from remote areas, apparently uninfluenced by changes in the surroundings, also show no evidence of "global warming" (16, 17).
The three authorities responsible for the combined surface record - the University of East Anglia, the Goddard Institute for Space Studies, and the Global Historical Climate Network (GHCN) - have recognised that records close to towns are subject to "urbanisation" effects, and they have claimed to have applied "corrections" to the data, such as those displayed in Figure 3.1.
The procedure is to compare a record from an urban area with one in the same district which is rural, and correct the urban record according to the difference between them.
The method is incomplete. To begin with, it can only be done where suitable records for comparison are available over a reasonable length of time. This means the corrections cannot be made where records are sparse, which applies to most of the globe, and they cannot be applied to recent records which have not been going long enough. Lists of records that have been "corrected" are only available from Hansen (18), and the `corrections' seem to be very small. The `corrections' claimed by the University of East Anglia do not seem to exist (19).
The most serious defect of the method is that it assumes that there is no `urbanisation' effect for the "rural" record that is taken as a standard. Many studies (20) have shown that there is an `urbanisation' effect even at stations that are far from cities, or near cities with a small local population. Surrounding buildings, roads, concrete, vehicles, aircraft, and increasing shelter can all provide an upwards bias even to `rural' records.
It should not really be possible to correct an unrepresentative sample, but the best prospects would be with records that are extensive in coverage and in time, under the same national administration, and of known high quality. The only records that can qualify are those for the continental United States. It is therefore significant that the corrected mean surface record for the United States (Figure 3.6) shows no evidence of overall global warming since 1920, and the slight increase over the previous period is probably due to increased energy exposure of rural weather stations.
Figure 3.6 shows a temperature rise from 1910 to 1940 which fell again from 1940 to 1975. The most plausible explanation for this is the growth of towns in the first period, and the move of weather equipment to airports in the second.
Another factor that has influenced the weather station record Figures (3.1, 3.6), is the variable number of stations available. Figure 3.7 (21) shows how station numbers and available grids have varied. The large increase after 1950 was an increase in rural stations, and of stations at airports, and it partly accounts for the fall in the combined temperature from 1950 to 1975 shown in Figure 3.1. The wholesale closure of mainly rural stations in 1989, combined with the increased energy release at airports partly accounts for the increase in the combined temperature record shown in Figure 3.1 since 1989.
Figure 3.7 Graphs showing (a) numbers of temperature records available (upper curve) and maximum and minimum records available (lower curve); (b) number of 5° x 5° grids available with temperature records (upper curve) and with maximum and minimum temperature records (lower curve).
There is also the question of sea surface records. Figure 3.1 claims to incorporate sea surface records, and as the ocean is 70.8 percent of the earth's surface, inclusion of sea surface temperature records is rather vital for representativity.
Sea surface temperature records are voluminous (80 million observations) and extensive, but they suffer from serious defects. Weather station records, plus those from the limited numbers of fixed buoys, have temperatures taken in the same place over a period of time by qualified staff under a continuous administration. Ship measurements are rarely in the same place, the procedures are far from standard, and the staff and control are often less than professional. Most early measurements were made on samples recovered by bucket, and more recent ones at the engine intake. There are also measurements of night marine air temperature, taken on deck, usually from a Stevenson screen.
Folland and Parker (22) suggested corrections to sea surface temperature measurements which have led to their amalgamation with the land-based measurements by British workers to give the combined record of Figure 3.1. One persistent problem is incomplete information; for example, what sort of bucket was used to collect a sample. When in doubt, Folland and Parker allow the sea surface temperature to coincide with the land-based temperature, which means that there was little obvious change from the addition of the sea surface data.
Doubts that the use of sea surface data is justified have recently been raised by Parker himself, in association with Christy and others (23), who found that there is a discrepancy between the current measurement of sea surface temperature by engine intakes and the measurements made closer to the surface by fixed buoys and on ships' decks.
Figure 3.8 (24) shows a comparison between the accepted surface/sea surface combined measurements, those modified by the correction of Christy et al (23) to incorporate only marine air temperatures, and the globally averaged satellite measurements. It shows the large discrepancy between sea surface and engine intake measurements, and the corresponding lack of overall increase shown by the satellite measurements (see below).
Figure 3.8 Comparison between blended land surface and sea surface temperatures, blended land surface and marine air temperatures, and satellite temperatures, from 1978.
The United States compilers of global temperature (21, 25) refuse to recognise the sea surface data from ships as reliable, so their compilations deal only with land-based records, plus recent sea surface data from satellites. The satellite measurements of mean sea surface temperature (26) have found no evidence of distinguishable warming for the past 16 years, after allowance is made for the effects of volcanic eruptions and El Niño events.
The mere presence of a ship in the ocean is bound to influence the temperature of the surrounding sea surface temperature, so that ship measurements are subject to the same influences of greater size and energy consumption in ships as are the land measurements. As for the night marine air temperature, where do they place the Stevenson screen? Usually up against the funnel.
Since the temperature rise shown in Figure 3.1 between 1910 and 1945 could not have been due to greenhouse gas increase, the question is: what was the cause? Figure 3.10 shows changes in individual 5° x 5° grids over this period (28). As before, the size of the dots in individual grids gives the size of the change, red for a rise and blue for a fall. Most of the temperature rise between 1910 and 1940 (shown by larger red dots) was in the United States and in the Atlantic Ocean. The US rise is probably a result of the increased size of American cities while Europe was affected by war. The ocean measurements would have been affected by the absence of lights on ships during the two wars, which meant that measurements had to be made below deck, and by great increases in the size and energy consumption of ships.
It is interesting that some of the greatest increases over the period took place in the Arctic, whereas the Arctic stations showed a fall in temperature over the period 1901 to 2000 (7). The unrepresentative character of the coverage is evident from Figure 3.10, so the apparent trend cannot be taken too seriously.
Figure 3.10 Temperature change for individual 5° x 5° grids from 1910 to 1945. The area of each dot shows the size of the increase per decade (red dots) or decrease per decade (blue dots) (28).
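A map of this kind can be sketched schematically. The fragment below, using invented anomaly series, computes a least-squares change per decade for a few 5° x 5° cells and scales a dot area to it; it only illustrates how such a dot map is built, not the procedure behind Figure 3.10.

```python
# Schematic sketch of a dot map: one decadal trend per 5 x 5 degree cell,
# dot area proportional to the size of the change. All data are invented.
import numpy as np

years = np.arange(1910, 1946)
rng = np.random.default_rng(0)

# invented underlying trends (degrees C per year) for three grid cells
cells = {("35N", "90W"): 0.020, ("40N", "20W"): 0.015, ("70N", "60E"): -0.010}

for (lat, lon), slope in cells.items():
    series = slope * (years - years[0]) + rng.normal(0.0, 0.1, years.size)
    fitted_per_year = np.polyfit(years, series, 1)[0]   # least-squares trend
    per_decade = 10.0 * fitted_per_year
    dot_area = 100.0 * abs(per_decade)                  # arbitrary plotting scale
    colour = "red" if per_decade > 0 else "blue"
    print(f"{lat} {lon}: {per_decade:+.2f} C/decade -> {colour} dot, area {dot_area:.0f}")
```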
To summarise this section:
- The surface record (Figure 3.1 and its companions) is not a reliable indicator of global temperature change.
- It is based on a heavily biased sample, so that it actually represents a modified version of the temperature conditions surrounding weather stations, as influenced by larger than average energy emissions, towns, buildings, roads, vehicles and aircraft. The increase of merely 0.6°C over 140 years could easily represent changes in this surrounding energy environment (see the sketch after this list).
- The differences often shown between neighbouring grids on the temperature maps (7) indicate that the records are locally influenced.
- The fact that many more remote stations and most `proxy' measurements show no overall warming indicates the absence of a steady warming trend over the past century.
- Measurements of temperature in the lower atmosphere confirm this conclusion.
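The sampling-bias point in the summary can be illustrated with a minimal sketch. The station anomalies, the cells and the cosine-of-latitude area weights below are invented; this is not the averaging scheme used for Figure 3.1, only a toy comparison of a naive station mean with an area-weighted mean.

```python
# Toy comparison of a naive station mean with an area-weighted grid mean.
# All station anomalies are invented for illustration.
import math

# (latitude of cell centre in degrees, station anomalies in that cell, degrees C)
cells = [
    (45.0,  [0.8, 0.9, 0.7, 0.85]),  # densely sampled, urbanised mid-latitude cell
    (5.0,   [0.1]),                  # sparsely sampled tropical cell
    (-60.0, [0.0]),                  # sparsely sampled southern-ocean cell
]

stations = [t for _, temps in cells for t in temps]
naive_mean = sum(stations) / len(stations)

weights = [math.cos(math.radians(lat)) for lat, _ in cells]  # cell area ~ cos(latitude)
cell_means = [sum(temps) / len(temps) for _, temps in cells]
weighted_mean = sum(w * m for w, m in zip(weights, cell_means)) / sum(weights)

print(f"naive station mean: {naive_mean:+.2f} C")
print(f"area-weighted mean: {weighted_mean:+.2f} C")
```

With these invented numbers the naive mean is dominated by the heavily sampled cell, which is the sense in which an unrepresentative station network can bias a "global" average.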
Since 1958 there have been temperature measurements by weather balloons (radiosondes) from 63 sites scattered over the globe, but mainly over land, in the more highly populated parts of the world (29).
The mean global temperature for the lower atmosphere is shown in Figure 3.11 (29). It shows that the temperature of the lower atmosphere has been approximately constant since 1976, in contrast to the combined surface measurements (Figure 3.1). The annual fluctuations shown by Figure 3.11 are very similar to those in Figure 3.1, suggesting that both methods faithfully record temperature changes due to volcanoes, solar fluctuations and ocean circulation. The sudden jump which took place between 1955-1975 and 1976-1999 is difficult
to explain. It could be a consequence of the limited coverage, or changes
in instrumentation, or it could be a genuine climate adjustment. There is,
however, no justification in putting a linear regression line through the
whole series, as there is no evidence of a regular linear change. After
all, the figure for 1958 was the same as that for 1999.
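The objection to the regression line can be shown with a toy series in Python: a record that is flat at one level up to 1975 and flat at a higher level from 1976 still yields a positive least-squares "trend" over the whole period, even though neither sub-period warms. The numbers are invented for illustration.

```python
# Toy series: a step change between two flat sub-periods still produces a
# positive least-squares "trend" over the whole record. Data are invented.
import numpy as np

years = np.arange(1958, 2000)
anoms = np.where(years < 1976, -0.10, 0.15)   # flat, then a 0.25 C step up

whole = 10 * np.polyfit(years, anoms, 1)[0]
early = 10 * np.polyfit(years[years < 1976], anoms[years < 1976], 1)[0]
late = 10 * np.polyfit(years[years >= 1976], anoms[years >= 1976], 1)[0]

print(f"whole-series trend: {whole:+.3f} C/decade")
print(f"trend 1958-1975:    {early:+.3f} C/decade")
print(f"trend 1976-1999:    {late:+.3f} C/decade")
```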
Since 1979, NOAA satellites have been measuring the temperature of the lower atmosphere using Microwave Sounding Units (MSUs).
The method is to measure
the microwave spectrum of atmospheric oxygen, a quantity dependent on
temperature. It is much more accurate than all the other measures and,
also in contrast to other measurements,
it gives a genuine average of temperature over the entire earth’s surface. Various efforts to detect errors
have not altered the figures to any important degree. The record is shown
in Figure 3.12 (30).
The annual fluctuations in this record agree well with those in the combined surface record (Figure 3.1) and with the radiosonde record. However, when these fluctuations are removed from this record (31) there is no overall evidence of a warming trend. The very large effect of the El Niño event in 1998 gives a spurious impression of a small upwards trend.
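The kind of adjustment cited in (31) can be sketched as a regression of the temperature record on an ENSO index and a volcanic index, with the residual then examined for a trend. The sketch below uses invented indices and ordinary least squares; it is not the actual procedure of Michaels and Knappenberger, only an illustration of the idea of removing known fluctuations before judging the trend.

```python
# Hedged sketch: regress a monthly temperature series on invented ENSO and
# volcanic indices, then inspect the residual trend. All inputs are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 21 * 12                                   # a 21-year monthly record
t = np.arange(n)

enso = np.sin(2 * np.pi * t / 45) + rng.normal(0, 0.3, n)   # toy ENSO index
volcanic = np.zeros(n)
volcanic[150:180] = 1.0                                      # toy eruption pulse
temps = 0.15 * enso - 0.30 * volcanic + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), enso, volcanic])  # intercept, ENSO, volcanic
coeffs, *_ = np.linalg.lstsq(X, temps, rcond=None)
residual = temps - X @ coeffs

residual_trend = 120 * np.polyfit(t, residual, 1)[0]         # per decade
print(f"residual trend: {residual_trend:+.3f} C/decade")
```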
The absence of a distinguishable change in temperature in the lower atmosphere over a period of 21 years is a fatal blow to the `greenhouse' theory, which postulates that any global temperature change would be primarily evident in the lower atmosphere. If there is no perceptible temperature change in the lower atmosphere, then the greenhouse effect cannot be held responsible for any climate change at the earth's surface. Changes in precipitation, hurricanes, ocean circulation and lower temperature, alterations in the ice shelf, retreat of glaciers, and decline of corals simply cannot be attributed to the greenhouse effect if there is no greenhouse effect registered in the place it is supposed to take place: the lower atmosphere.
Figure 3.12 Mean global temperature anomalies of the lower atmosphere.
By contrast, the combined surface record (Figure 3.1) is nothing more than an averaged presentation of the temperature conditions near weather stations and ships, influenced by the additional energy emissions associated with urban environments, and is not a valid record of mean global temperature because it uses a highly biased sample (32).
We therefore have a situation where all direct measurements of mean global temperature show no evidence of a change, at least over the past 21 years, and probably over the past century. There is therefore no reason for any precautionary measures intended to prevent or limit such a temperature change.
(1) Climatic Research Unit, University of East Anglia, UK, http://www.cru.uea.ac.uk
(2) Climate Change 01 Chapter 2, Figures 2.1, 2.4, 2.5, 2.6, 2.7, 2.8, 2.12, 2.19, 2.21.
(3) Anita McConnell 1992 “Assessing the value of historical temperature measurements” Endeavour 16 (2) 80-84
(4) Gray, V R, 2000. “The Surface Temperature Record” http://www.john-daly.com/graytemp/surftemp.htm
(5) Daly, J L, 2000 “The Surface Record : Global Mean Temperature and how it is determined at the surface level”. http://www.greeningearthsociety.org/Articles/2000/surface1.htm
(6) Hansen, J and S Lebedeff 1987 “Global Trends of Measured Surface Air Temperature” Journal of Geophysical Research 92 13345-13372
(7) Climate Change 01 Chapter 2, “Observed Climate Variability and Change” , Figures 2.9 and 2.10, pages 116-7
(8) Climate Change 01 Chapter 2 “Observed Climate Variability and Change” Figure 2.10a, page 117
(9) Carbon Dioxide Information and Analysis Center, Oak Ridge, Tennessee; United Nations Energy Handbook and estimates
(10) Climate Change 01 Chapter 6 “Radiative Forcing of Climate Change” Figure 6.6, page 392
(11) Adapted from the CIA Factbook, http://www.oda/gov/cia/publications/factbook
(12) Mann, M E, R S Bradley & M K Hughes 1998 “Global-scale temperature patterns and climate forcing over the past six centuries” Nature 392 778-787
(13) Mann, M E , R S Bradley & M K Hughes 1999 “Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties and Limitations” Geophysical Research Letters 26 759-76
(14) Climate Change 01 Summary for Policymakers, Fig 1. Page 3
(15) Naurzbaev, K M, & E A Vaganov 2000 “Variation of early summer and annual temperature in east Taymir and Putoran (Siberia) over the last two millennia inferred from tree rings“ Journal of Geophysical Research 105 7317-7326
(16) Daly, J L , 2001. “What the stations say”. http://www.john-daly.com/stations/stations.htm
(17) CO2 Science Magazine. 2001 http://www.co2science.org
(18) Hansen J et al 2001 http://www.giss.nasa.gov/data/update/gistemp
(19) Hughes, W S . 2001 http://www.webace.com.au/~wsh
(20) Gray, V R, 2000 “The Surface Temperature Record” http://www.john-daly.com/graytemp/surftemp.htm
(21) Peterson, T C , and R S Vose 1997 “An Overview of the Global Historical Climatology Network Temperature Database” Bulletin of the American Meteorological Society 78 2837-2849
(22) Folland, C K and D E Parker 1995 “Correction of instrumental biases in historical sea surface temperature data” Quarterly Journal of the Royal Meteorological Society 121 319-367
(23) Christy, J R , D E Parker, S J Brown, I Macadam, M Stendel & W B Norris 2001 “Differential Trends in Tropical Sea Surface and Atmospheric Temperatures since 1979” Geophysical Research Letters 28 183-186
(24) World Climate Report 2001 6 (9) “Satellite `Warming' vanishes”.
(26) Strong, A E, E J Kearns and K K Gjovig 2000 “Sea Surface Temperature Signals from Satellites - An Update” Geophysical Research Letters 27 1667-1670
(27) Levitus, S, J Antonov, T P Boyer and C Stephens 2000 “Warming of the World Ocean” Science 287 2225-2229
(28) Climate Change 01 Chapter 2 Figure 2.9b page 116
(29) Carbon Dioxide Information and Analysis Center, Data plotted from http://cdiac.esd.ornl.gov/trends
(30) “Still Waiting for Greenhouse” http://www.john-daly.com
(31) Michaels, P J and P C Knappenberger 2000 “Natural Signals in the MSU lower tropospheric temperature records” Geophysical Research Letters 27 2905-2908
(32) Gray, V R , 2000 “The Cause of Global Warming.” Energy & Environment 11 613-629
What does the phrase "act out of character" mean?
When someone says that a person is "acting out of character", they are implying that this person is doing something they wouldn't normally do. If you break down the words in the saying, it makes sense. When a person "acts", they are performing or doing something. Character is the standard that a person normally follows, or how a person normally acts. So when a person is "acting out of character", the person is doing something they normally wouldn't do.
Posted by miteach25 on September 18, 2012 at 8:53 PM (Answer #1)
Characterization is a literary term that applies to all genres of imaginative literature. In fiction, it refers to the development of well-rounded and lifelike characters that remain believable and consistent throughout a work. Often the presence of well-rounded characters is what distinguishes literary from genre fiction.
In drama, actors literally get into character; thus an undergraduate actor dresses up and wears makeup to act the role of the aging monarch in Shakespeare's King Lear. Stopping in mid-performance to take a cell phone call would be "out of character." In general, any action that is so atypical of the character as to seem radically improbable (a kind grandmother torturing a cat) is described as "out of character."
Posted by thanatassa on September 19, 2012 at 1:54 AM (Answer #2)
U.S. Presidential usage
Presidents of the United States have issued executive orders since 1789. There is no provision in the United States Constitution, nor any statute, that explicitly permits this, aside from the vague grant of "executive power" given in Article II, Section 1 of the Constitution and the statement "take Care that the Laws be faithfully executed" in Article II, Section 3.
Most executive orders are directed to various federal agencies or departments of the executive branch to help orchestrate those agencies in their duties.
Other types of executive orders are:
- proclamations, which serve the ceremonial purpose of declaring holidays and celebrations,
- national security directives, and
- presidential decision directives, both of which deal with national security and defense matters.
History in the United States
Until the early 1900s, executive orders went mostly unannounced and undocumented, seen only by the agencies to which they were directed. Others have simply been lost due to natural decay and poor record keeping. However, the State Department instituted a numbering system for executive orders in the early 1900s, starting retroactively with President Abraham Lincoln's Emancipation Proclamation in 1862. Today, only those executive orders dealing with issues of national security are kept from the public.
Until the 1950s, there were no rules or guidelines outlining what the president could or could not do through an executive order. However, the Supreme Court ruled that an executive order from President Harry S. Truman that placed all steel mills in the country under federal control was invalid because it attempted to make law, rather than clarify or act to further a law put forth by the Congress or the Constitution. Presidents since this decision have generally been careful to cite which specific laws they are acting under when issuing new executive orders.
Critics accuse presidents of abusing executive orders, of using them to make laws without Congressional approval and to move existing laws away from their original mandates. Large policy changes with wide-ranging effects have been put into effect through executive order, including the integration of the Armed Forces under Harry Truman and the desegregation of public schools under Dwight D. Eisenhower.
One extreme example of an executive order is Executive Order 9066, where President Roosevelt ordered that all people living in the United States of Japanese descent go to Japanese internment camps. Wars have been fought upon executive order, including Bill Clinton's 1999 Kosovo War. However, all such wars have had authorizing resolutions from Congress. The extent to which the president may exercise military power independently of Congress, and the scope of the War Powers Resolution, remain unresolved constitutional issues in the United States.
Critics fear that the president could make himself a de facto dictator by side-stepping the other branches of government and making autocratic laws. The presidents, however, cite executive order as often the only way to clarify laws passed through the Congress, laws which often require vague wording in order to please all political parties involved in their creation.
To date, the courts have only overturned two executive orders: the aforementioned Truman order, and a 1996 order issued by President Bill Clinton which attempted to prevent the U.S. government from contracting with organizations that had strikebreakers on the payroll. Likewise, the Congress may also overturn an executive order by passing legislation in conflict with it or refusing to approve funding to enforce it. Because the president retains the power to veto such a decision, however, the Congress usually needs a 2/3 majority to override a veto and truly end an executive order.
Ancient Egypt conjures up images of bearded pharaohs, mighty pyramids and gold-laden tombs. Centuries ago, before archaeology became a legitimate field of science, explorers raided Egyptian ruins, seizing priceless artifacts. Collectors knew that these items were valuable, but they had no way of understanding just how much they were worth. Because the civilization's historical records and monuments were inscribed with hieroglyphics, a language no one -- Egyptian or foreigner -- could read, the secrets of Egypt's past were hopelessly lost. That is, until the Rosetta Stone was discovered.
The Rosetta Stone is a fragment of a stela, a free-standing stone inscribed with Egyptian governmental or religious records. It's made of black basalt and weighs about three-quarters of a ton (0.680 metric tons). The stone is 118 cm (46.5 in.) high, 77 cm (30 in.) wide and 30 cm (12 in.) deep -- roughly the size of a medium-screen LCD television or a heavy coffee table [source: BBC]. But what's inscribed on the Rosetta Stone is far more significant than its composition. It features three columns of inscriptions, each relaying the same message but in three different languages: Greek, hieroglyphics and Demotic. Scholars used the Greek and Demotic inscriptions to make sense of the hieroglyphic alphabet. By using the Rosetta Stone as a translation device, scholars revealed more than 1,400 years of ancient Egyptian secrets [source: Cleveland MOA].
The discovery and translation of the Rosetta Stone are as fascinating as the translations that resulted from the stone. Controversial from the start, it was unearthed as a result of warfare and Europe's quest for world domination. Its translation continued to cause strife between nations, and even today, scholars debate who should be credited with the triumph of solving the hieroglyphic code. Even the stone's current location is a matter of debate. This artifact has long held a powerful grip over history and politics.
Since 1802, the Rosetta Stone has occupied a space in London's British Museum. While most visitors acknowledge the stone as an important piece of history, others are drawn to it like a religious relic. The stone is now enclosed in a case, but in the past, visitors could touch it and trace the mysterious hieroglyphics with their fingers.
In this article, we'll learn how the world came to regard this piece of stone as a harbinger of Egypt's secrets. We'll also discuss its history and the circumstances surrounding its discovery, as well as the long and difficult task of deciphering the Rosetta Stone's inscriptions. Last, we'll examine the field of Egyptology and how it evolved from the Rosetta Stone.
We'll begin with the history of the Rosetta Stone in the next section. |
Water chestnut grows in the floating-leaf and submersed plant community. It thrives in the soft sediments of quiet, nutrient rich waters in lakes, ponds, and streams. The plants can survive on mudflats. Uncontrolled, water chestnut creates vast, nearly impenetrable mats of vegetation. Floating mats of water chestnut create hazards for boaters and render previously productive fishing grounds inaccessible. The barbs on the nuts can penetrate shoe leather and pose a hazard to swimmers and beachcombers when the nuts drift to shore. Light penetration and dissolved oxygen, critical elements of a well-functioning ecosystem, are severely reduced in infested areas. Water chestnut out-competes native vegetation and is of little value to wildfowl.
Water chestnut is not known to occur in Indiana. |
Bullying, whether it is verbal, physical, emotional or Internet-based (cyberbullying), can have devastating effects on a child’s self-esteem and can cause depression, anxiety, loneliness and even thoughts of suicide. Therefore, if you know that your child is being bullied, it is imperative to act quickly. Here are some tips on how to respond to the child and deal with the source of the problem:
- Never advise a child to ignore bullying behavior, and never blame the child by suggesting that he or she provoked it.
- Get your child to tell you precisely what happened, where it happened, when it happened and who was involved. Also, establish whether anyone witnessed the incident(s).
- Praise your child for telling you about the bullying, and let the child know that it is not his or her fault. Never suggest that your child handled the situation badly.
- Explain that you will think about how to deal with the situation and will let him or her know how you intend to handle it.
- Never encourage physical retaliation.
- Contact your child’s teacher or the principal and unemotionally explain the details of the incident(s). The school has an obligation to deal with the matter effectively, but explain that you want to work with them to put a stop to the bullying. Never contact the bully’s parents directly.
- Keep a record of any meetings that you have with the school, and if the situation continues, escalate the matter to the school superintendent or the school board. |
Chromosomes: In a prokaryotic cell or in the nucleus of a eukaryotic cell, a structure consisting of or containing DNA which carries the genetic information essential to the cell. (From Singleton & Sainsbury, Dictionary of Microbiology and Molecular Biology, 2d ed)Chromosome Mapping: Any method used for determining the location of and relative distances between genes on a chromosome.X Chromosome: The female sex chromosome, being the differential sex chromosome carried by half the male gametes and all female gametes in human and other male-heterogametic species.Chromosome Banding: Staining of bands, or chromosome segments, allowing the precise identification of individual chromosomes or parts of chromosomes. Applications include the determination of chromosome rearrangements in malformation syndromes and cancer, the chemistry of chromosome segments, chromosome changes during evolution, and, in conjunction with cell hybridization studies, chromosome mapping.Chromosome Aberrations: Abnormal number or structure of chromosomes. Chromosome aberrations may result in CHROMOSOME DISORDERS.Sex Chromosomes: The homologous chromosomes that are dissimilar in the heterogametic sex. There are the X CHROMOSOME, the Y CHROMOSOME, and the W, Z chromosomes (in animals in which the female is the heterogametic sex (the silkworm moth Bombyx mori, for example)). In such cases the W chromosome is the female-determining and the male is ZZ. (From King & Stansfield, A Dictionary of Genetics, 4th ed)Chromosomes, Human, Pair 1: A specific pair of human chromosomes in group A (CHROMOSOMES, HUMAN, 1-3) of the human chromosome classification.Chromosomes, Human: Very long DNA molecules and associated proteins, HISTONES, and non-histone chromosomal proteins (CHROMOSOMAL PROTEINS, NON-HISTONE). Normally 46 chromosomes, including two sex chromosomes are found in the nucleus of human cells. They carry the hereditary information of the individual.Chromosomes, Bacterial: Structures within the nucleus of bacterial cells consisting of or containing DNA, which carry genetic information essential to the cell.Chromosome Segregation: The orderly segregation of CHROMOSOMES during MEIOSIS or MITOSIS.Chromosomes, Human, Pair 7: A specific pair of GROUP C CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Pair 11: A specific pair of GROUP C CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Pair 17: A specific pair of GROUP E CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Pair 6: A specific pair GROUP C CHROMSOMES of the human chromosome classification.Chromosome Deletion: Actual loss of portion of a chromosome.Chromosomes, Human, Pair 9: A specific pair of GROUP C CHROMSOMES of the human chromosome classification.Chromosomes, Human, Pair 21: A specific pair of GROUP G CHROMOSOMES of the human chromosome classification.Chromosomes, Plant: Complex nucleoprotein structures which contain the genomic DNA and are part of the CELL NUCLEUS of PLANTS.Chromosomes, Fungal: Structures within the nucleus of fungal cells consisting of or containing DNA, which carry genetic information essential to the cell.Chromosomes, Human, 6-12 and X: The medium-sized, submetacentric human chromosomes, called group C in the human chromosome classification. 
This group consists of chromosome pairs 6, 7, 8, 9, 10, 11, and 12 and the X chromosome.Chromosomes, Human, Pair 2: A specific pair of human chromosomes in group A (CHROMOSOMES, HUMAN, 1-3) of the human chromosome classification.Chromosomes, Human, Pair 16: A specific pair of GROUP E CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Pair 22: A specific pair of GROUP G CHROMOSOMES of the human chromosome classification.Chromosome Pairing: The alignment of CHROMOSOMES at homologous sequences.Chromosomes, Mammalian: Complex nucleoprotein structures which contain the genomic DNA and are part of the CELL NUCLEUS of MAMMALS.Chromosomes, Human, Pair 13: A specific pair of GROUP D CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Pair 4: A specific pair of GROUP B CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Pair 10: A specific pair of GROUP C CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Y: The human male sex chromosome, being the differential sex chromosome carried by half the male gametes and none of the female gametes in humans.Chromosomes, Human, Pair 8: A specific pair of GROUP C CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Pair 19: A specific pair of GROUP F CHROMOSOMES of the human chromosome classification.Chromosome Disorders: Clinical conditions caused by an abnormal chromosome constitution in which there is extra or missing chromosome material (either a whole chromosome or a chromosome segment). (from Thompson et al., Genetics in Medicine, 5th ed, p429)Chromosomes, Artificial, Bacterial: DNA constructs that are composed of, at least, a REPLICATION ORIGIN, for successful replication, propagation to and maintenance as an extra chromosome in bacteria. In addition, they can carry large amounts (about 200 kilobases) of other sequence for a variety of bioengineering purposes.Chromosomes, Human, X: The human female sex chromosome, being the differential sex chromosome carried by half the male gametes and all female gametes in humans.Chromosome Painting: A technique for visualizing CHROMOSOME ABERRATIONS using fluorescently labeled DNA probes which are hybridized to chromosomal DNA. Multiple fluorochromes may be attached to the probes. Upon hybridization, this produces a multicolored, or painted, effect with a unique color at each site of hybridization. This technique may also be used to identify cross-species homology by labeling probes from one species for hybridization with chromosomes from another species.Chromosomes, Human, Pair 12: A specific pair of GROUP C CHROMOSOMES of the human chromosome classification.Chromosomes, Human, 1-3: The large, metacentric human chromosomes, called group A in the human chromosome classification. This group consists of chromosome pairs 1, 2, and 3.Chromosomes, Human, Pair 5: One of the two pairs of human chromosomes in the group B class (CHROMOSOMES, HUMAN, 4-5).Chromosomes, Human, Pair 15: A specific pair of GROUP D CHROMOSOMES of the human chromosome classification.Karyotyping: Mapping of the KARYOTYPE of a cell.Chromosomes, Human, Pair 14: A specific pair of GROUP D CHROMOSOMES of the human chromosome classification.Chromosomes, Human, Pair 18: A specific pair of GROUP E CHROMOSOMES of the human chromosome classification.In Situ Hybridization, Fluorescence: A type of IN SITU HYBRIDIZATION in which target sequences are stained with fluorescent dye so their location and size can be determined using fluorescence microscopy. 
This staining is sufficiently distinct that the hybridization signal can be seen both in metaphase spreads and in interphase nuclei.Chromosomes, Human, 16-18: The short, submetacentric human chromosomes, called group E in the human chromosome classification. This group consists of chromosome pairs 16, 17, and 18.Chromosomes, Human, Pair 20: A specific pair of GROUP F CHROMOSOMES of the human chromosome classification.Chromosomes, Artificial, Yeast: Chromosomes in which fragments of exogenous DNA ranging in length up to several hundred kilobase pairs have been cloned into yeast through ligation to vector sequences. These artificial chromosomes are used extensively in molecular biology for the construction of comprehensive genomic libraries of higher organisms.Chromosomes, Human, 13-15: The medium-sized, acrocentric human chromosomes, called group D in the human chromosome classification. This group consists of chromosome pairs 13, 14, and 15.Genetic Linkage: The co-inheritance of two or more non-allelic GENES due to their being located more or less closely on the same CHROMOSOME.Chromosome Breakage: A type of chromosomal aberration involving DNA BREAKS. Chromosome breakage can result in CHROMOSOMAL TRANSLOCATION; CHROMOSOME INVERSION; or SEQUENCE DELETION.Chromosomes, Human, 21-22 and Y: The short, acrocentric human chromosomes, called group G in the human chromosome classification. This group consists of chromosome pairs 21 and 22 and the Y chromosome.Ring Chromosomes: Aberrant chromosomes with no ends, i.e., circular.Chromosome Inversion: An aberration in which a chromosomal segment is deleted and reinserted in the same place but turned 180 degrees from its original orientation, so that the gene sequence for the segment is reversed with respect to that of the rest of the chromosome.Genetic Markers: A phenotypically recognizable genetic trait which can be used to identify a genetic locus, a linkage group, or a recombination event.Chromosome Positioning: The mechanisms of eukaryotic CELLS that place or keep the CHROMOSOMES in a particular SUBNUCLEAR SPACE.Chromosomes, Human, 4-5: The large, submetacentric human chromosomes, called group B in the human chromosome classification. This group consists of chromosome pairs 4 and 5.Molecular Sequence Data: Descriptions of specific amino acid, carbohydrate, or nucleotide sequences which have appeared in the published literature and/or are deposited in and maintained by databanks such as GENBANK, European Molecular Biology Laboratory (EMBL), National Biomedical Research Foundation (NBRF), or other sequence repositories.X Chromosome Inactivation: A dosage compensation process occurring at an early embryonic stage in mammalian development whereby, at random, one X CHROMOSOME of the pair is repressed in the somatic cells of females.Base Sequence: The sequence of PURINES and PYRIMIDINES in nucleic acids and polynucleotides. 
It is also called nucleotide sequence.Centromere: The clear constricted portion of the chromosome at which the chromatids are joined and by which the chromosome is attached to the spindle during cell division.Chromosomes, Insect: Structures within the CELL NUCLEUS of insect cells containing DNA.Translocation, Genetic: A type of chromosome aberration characterized by CHROMOSOME BREAKAGE and transfer of the broken-off portion to another location, often to a different chromosome.Hybrid Cells: Any cell, other than a ZYGOTE, that contains elements (such as NUCLEI and CYTOPLASM) from two or more different cells, usually produced by artificial CELL FUSION.Meiosis: A type of CELL NUCLEUS division, occurring during maturation of the GERM CELLS. Two successive cell nucleus divisions following a single chromosome duplication (S PHASE) result in daughter cells with half the number of CHROMOSOMES as the parent cells.Chromosome Structures: Structures which are contained in or part of CHROMOSOMES.Chromosomes, Human, 19-20: The short, metacentric human chromosomes, called group F in the human chromosome classification. This group consists of chromosome pairs 19 and 20.Aneuploidy: The chromosomal constitution of cells which deviate from the normal by the addition or subtraction of CHROMOSOMES, chromosome pairs, or chromosome fragments. In a normally diploid cell (DIPLOIDY) the loss of a chromosome pair is termed nullisomy (symbol: 2N-2), the loss of a single chromosome is MONOSOMY (symbol: 2N-1), the addition of a chromosome pair is tetrasomy (symbol: 2N+2), the addition of a single chromosome is TRISOMY (symbol: 2N+1).Metaphase: The phase of cell nucleus division following PROMETAPHASE, in which the CHROMOSOMES line up across the equatorial plane of the SPINDLE APPARATUS prior to separation.Mitosis: A type of CELL NUCLEUS division by means of which the two daughter nuclei normally receive identical complements of the number of CHROMOSOMES of the somatic cells of the species.Recombination, Genetic: Production of new arrangements of DNA by various mechanisms such as assortment and segregation, CROSSING OVER; GENE CONVERSION; GENETIC TRANSFORMATION; GENETIC CONJUGATION; GENETIC TRANSDUCTION; or mixed infection of viruses.Crosses, Genetic: Deliberate breeding of two different individuals that results in offspring that carry part of the genetic material of each parent. The parent organisms must be genetically compatible and may be from different varieties or closely related species.Mutation: Any detectable and heritable change in the genetic material that causes a change in the GENOTYPE and which is transmitted to daughter cells and to succeeding generations.Lod Score: The total relative probability, expressed on a logarithmic scale, that a linkage relationship exists among selected loci. Lod is an acronym for "logarithmic odds."Pedigree: The record of descent or ancestry, particularly of a particular condition or trait, indicating individual family members, their relationships, and their status with respect to the trait or condition.Microsatellite Repeats: A variety of simple repeat sequences that are distributed throughout the GENOME. They are characterized by a short repeat unit of 2-8 basepairs that is repeated up to 100 times. They are also known as short tandem repeats (STRs).Phenotype: The outward appearance of the individual. 
It is the product of interactions between genes, and between the GENOTYPE and the environment.Alleles: Variant forms of the same gene, occupying the same locus on homologous CHROMOSOMES, and governing the variants in production of the same gene product.Cloning, Molecular: The insertion of recombinant DNA molecules from prokaryotic and/or eukaryotic sources into a replicating vehicle, such as a plasmid or virus vector, and the introduction of the resultant hybrid molecules into recipient cells without altering the viability of those cells.Trisomy: The possession of a third chromosome of any one type in an otherwise diploid cell.Nondisjunction, Genetic: The failure of homologous CHROMOSOMES or CHROMATIDS to segregate during MITOSIS or MEIOSIS with the result that one daughter cell has both of a pair of parental chromosomes or chromatids and the other has none.Kinetochores: Large multiprotein complexes that bind the centromeres of the chromosomes to the microtubules of the mitotic spindle during metaphase in the cell cycle.Chromosomes, Artificial, Human: DNA constructs that are composed of, at least, all elements, such as a REPLICATION ORIGIN; TELOMERE; and CENTROMERE, required for successful replication, propagation to and maintainance in progeny human cells. In addition, they are constructed to carry other sequences for analysis or gene transfer.Nucleic Acid Hybridization: Widely used technique which exploits the ability of complementary sequences in single-stranded DNAs or RNAs to pair with each other to form a double helix. Hybridization can take place between two complimentary DNA sequences, between a single-stranded DNA and a complementary RNA, or between two RNA sequences. The technique is used to detect and isolate specific sequences, measure homology, or define other characteristics of one or both strands. (Kendrew, Encyclopedia of Molecular Biology, 1994, p503)DNA: A deoxyribonucleotide polymer that is the primary genetic material of all cells. Eukaryotic and prokaryotic organisms normally contain DNA in a double-stranded state, yet several important biological processes transiently involve single-stranded regions. DNA, which consists of a polysugar-phosphate backbone possessing projections of purines (adenine and guanine) and pyrimidines (thymine and cytosine), forms a double helix that is held together by hydrogen bonds between these purines and pyrimidines (adenine to thymine and guanine to cytosine).Telomere: A terminal section of a chromosome which has a specialized structure and which is involved in chromosomal replication and stability. Its length is believed to be a few hundred base pairs.Amino Acid Sequence: The order of amino acids as they occur in a polypeptide chain. This is referred to as the primary structure of proteins. It is of fundamental importance in determining PROTEIN CONFORMATION.Chromosome Walking: A technique with which an unknown region of a chromosome can be explored. It is generally used to isolate a locus of interest for which no probe is available but that is known to be linked to a gene which has been identified and cloned. A fragment containing a known gene is selected and used as a probe to identify other overlapping fragments which contain the same gene. The nucleotide sequences of these fragments can then be characterized. This process continues for the length of the chromosome.Chromosomal Proteins, Non-Histone: Nucleoproteins, which in contrast to HISTONES, are acid insoluble. They are involved in chromosomal functions; e.g. 
they bind selectively to DNA, stimulate transcription resulting in tissue-specific RNA synthesis and undergo specific changes in response to various hormones or phytomitogens.Models, Genetic: Theoretical representations that simulate the behavior or activity of genetic processes or phenomena. They include the use of mathematical equations, computers, and other electronic equipment.Blotting, Southern: A method (first developed by E.M. Southern) for detection of DNA that has been electrophoretically separated and immobilized by blotting on nitrocellulose or other type of paper or nylon membrane followed by hybridization with labeled NUCLEIC ACID PROBES.Sequence Analysis, DNA: A multistage process that includes cloning, physical mapping, subcloning, determination of the DNA SEQUENCE, and information analysis.Genotype: The genetic constitution of the individual, comprising the ALLELES present at each GENETIC LOCUS.Chromosomal Instability: An increased tendency to acquire CHROMOSOME ABERRATIONS when various processes involved in chromosome replication, repair, or segregation are dysfunctional.Spindle Apparatus: A microtubule structure that forms during CELL DIVISION. It consists of two SPINDLE POLES, and sets of MICROTUBULES that may include the astral microtubules, the polar microtubules, and the kinetochore microtubules.Chromosome Fragility: Susceptibility of chromosomes to breakage leading to translocation; CHROMOSOME INVERSION; SEQUENCE DELETION; or other CHROMOSOME BREAKAGE related aberrations.Quantitative Trait Loci: Genetic loci associated with a QUANTITATIVE TRAIT.Haplotypes: The genetic constitution of individuals with respect to one member of a pair of allelic genes, or sets of genes that are closely linked and tend to be inherited together such as those of the MAJOR HISTOCOMPATIBILITY COMPLEX.Chromosome Duplication: An aberration in which an extra chromosome or a chromosomal segment is made.DNA, Satellite: Highly repetitive DNA sequences found in HETEROCHROMATIN, mainly near centromeres. They are composed of simple sequences (very short) (see MINISATELLITE REPEATS) repeated in tandem many times to form large blocks of sequence. Additionally, following the accumulation of mutations, these blocks of repeats have been repeated in tandem themselves. The degree of repetition is on the order of 1000 to 10 million at each locus. Loci are few, usually one or two per chromosome. They were called satellites since in density gradients, they often sediment as distinct, satellite bands separate from the bulk of genomic DNA owing to a distinct BASE COMPOSITION.DNA Probes: Species- or subspecies-specific DNA (including COMPLEMENTARY DNA; conserved genes, whole chromosomes, or whole genomes) used in hybridization studies in order to identify microorganisms, to measure DNA-DNA homologies, to group subspecies, etc. The DNA probe hybridizes with a specific mRNA, if present. Conventional techniques used for testing for the hybridization product include dot blot assays, Southern blot assays, and DNA:RNA hybrid-specific antibody tests. Conventional labels for the DNA probe include the radioisotope labels 32P and 125I and the chemical label biotin. The use of DNA probes provides a specific, sensitive, rapid, and inexpensive replacement for cell culture techniques for diagnosing infections.Polymerase Chain Reaction: In vitro method for producing large amounts of specific DNA or RNA fragments of defined length and sequence from small amounts of short oligonucleotide flanking sequences (primers). 
The essential steps include thermal denaturation of the double-stranded target molecules, annealing of the primers to their complementary sequences, and extension of the annealed primers by enzymatic synthesis with DNA polymerase. The reaction is efficient, specific, and extremely sensitive. Uses for the reaction include disease diagnosis, detection of difficult-to-isolate pathogens, mutation analysis, genetic testing, DNA sequencing, and analyzing evolutionary relationships.Drosophila melanogaster: A species of fruit fly much used in genetics because of the large size of its chromosomes.Repetitive Sequences, Nucleic Acid: Sequences of DNA or RNA that occur in multiple copies. There are several types: INTERSPERSED REPETITIVE SEQUENCES are copies of transposable elements (DNA TRANSPOSABLE ELEMENTS or RETROELEMENTS) dispersed throughout the genome. TERMINAL REPEAT SEQUENCES flank both ends of another sequence, for example, the long terminal repeats (LTRs) on RETROVIRUSES. Variations may be direct repeats, those occurring in the same direction, or inverted repeats, those opposite to each other in direction. TANDEM REPEAT SEQUENCES are copies which lie adjacent to each other, direct or inverted (INVERTED REPEAT SEQUENCES).Diploidy: The chromosomal constitution of cells, in which each type of CHROMOSOME is represented twice. Symbol: 2N or 2X.Evolution, Molecular: The process of cumulative change at the level of DNA; RNA; and PROTEINS, over successive generations.Genes: A category of nucleic acid sequences that function as units of heredity and which code for the basic instructions for the development, reproduction, and maintenance of organisms.Mosaicism: The occurrence in an individual of two or more cell populations of different chromosomal constitutions, derived from a single ZYGOTE, as opposed to CHIMERISM in which the different cell populations are derived from more than one zygote.Chromatids: Either of the two longitudinally adjacent threads formed when a eukaryotic chromosome replicates prior to mitosis. The chromatids are held together at the centromere. Sister chromatids are derived from the same chromosome. (Singleton & Sainsbury, Dictionary of Microbiology and Molecular Biology, 2d ed)Heterozygote: An individual having different alleles at one or more loci regarding a specific character.Abnormalities, MultiplePolyploidy: The chromosomal constitution of a cell containing multiples of the normal number of CHROMOSOMES; includes triploidy (symbol: 3N), tetraploidy (symbol: 4N), etc.Multigene Family: A set of genes descended by duplication and variation from some ancestral gene. Such genes may be clustered together on the same chromosome or dispersed on different chromosomes. Examples of multigene families include those that encode the hemoglobins, immunoglobulins, histocompatibility antigens, actins, tubulins, keratins, collagens, heat shock proteins, salivary glue proteins, chorion proteins, cuticle proteins, yolk proteins, and phaseolins, as well as histones, ribosomal RNA, and transfer RNA genes. The latter three are examples of reiterated genes, where hundreds of identical genes are present in a tandem array. 
(King & Stanfield, A Dictionary of Genetics, 4th ed)Polytene Chromosomes: Extra large CHROMOSOMES, each consisting of many identical copies of a chromosome lying next to each other in parallel.DNA Replication: The process by which a DNA molecule is duplicated.Gene Deletion: A genetic rearrangement through loss of segments of DNA or RNA, bringing sequences which are normally separated into close proximity. This deletion may be detected using cytogenetic techniques and can also be inferred from the phenotype, indicating a deletion at one specific locus.DNA-Binding Proteins: Proteins which bind to DNA. The family includes proteins which bind to both double- and single-stranded DNA and also includes specific DNA binding proteins in serum which can be used as markers for malignant diseases.Gene Dosage: The number of copies of a given gene present in the cell of an organism. An increase in gene dosage (by GENE DUPLICATION for example) can result in higher levels of gene product formation. GENE DOSAGE COMPENSATION mechanisms result in adjustments to the level GENE EXPRESSION when there are changes or differences in gene dosage.Interphase: The interval between two successive CELL DIVISIONS during which the CHROMOSOMES are not individually distinguishable. It is composed of the G phases (G1 PHASE; G0 PHASE; G2 PHASE) and S PHASE (when DNA replication occurs).Prophase: The first phase of cell nucleus division, in which the CHROMOSOMES become visible, the CELL NUCLEUS starts to lose its identity, the SPINDLE APPARATUS appears, and the CENTRIOLES migrate toward opposite poles.Genetic Variation: Genotypic differences observed among individuals in a population.Cell Cycle Proteins: Proteins that control the CELL DIVISION CYCLE. This family of proteins includes a wide variety of classes, including CYCLIN-DEPENDENT KINASES, mitogen-activated kinases, CYCLINS, and PHOSPHOPROTEIN PHOSPHATASES as well as their putative substrates such as chromatin-associated proteins, CYTOSKELETAL PROTEINS, and TRANSCRIPTION FACTORS.Loss of Heterozygosity: The loss of one allele at a specific locus, caused by a deletion mutation; or loss of a chromosome from a chromosome pair, resulting in abnormal HEMIZYGOSITY. It is detected when heterozygous markers for a locus appear monomorphic because one of the ALLELES was deleted.Genome, Human: The complete genetic complement contained in the DNA of a set of CHROMOSOMES in a HUMAN. The length of the human genome is about 3 billion base pairs.Polymorphism, Genetic: The regular and simultaneous occurrence in a single interbreeding population of two or more discontinuous genotypes. The concept includes differences in genotypes ranging in size from a single nucleotide site (POLYMORPHISM, SINGLE NUCLEOTIDE) to large nucleotide sequences visible at a chromosomal level.Cytogenetics: A subdiscipline of genetics which deals with the cytological and molecular analysis of the CHROMOSOMES, and location of the GENES on chromosomes, and the movements of chromosomes during the CELL CYCLE.Cytogenetic Analysis: Examination of CHROMOSOMES to diagnose, classify, screen for, or manage genetic diseases and abnormalities. Following preparation of the sample, KARYOTYPING is performed and/or the specific chromosomes are analyzed.Nuclear Proteins: Proteins found in the nucleus of a cell. 
Do not confuse with NUCLEOPROTEINS which are proteins conjugated with nucleic acids, that are not necessarily present in the nucleus.Genes, X-Linked: Genes that are located on the X CHROMOSOME.Karyotype: The full set of CHROMOSOMES presented as a systematized array of METAPHASE chromosomes from a photomicrograph of a single CELL NUCLEUS arranged in pairs in descending order of size and according to the position of the CENTROMERE. (From Stedman, 25th ed)Cosmids: Plasmids containing at least one cos (cohesive-end site) of PHAGE LAMBDA. They are used as cloning vehicles.Plasmids: Extrachromosomal, usually CIRCULAR DNA molecules that are self-replicating and transferable from one organism to another. They are found in a variety of bacterial, archaeal, fungal, algal, and plant species. They are used in GENETIC ENGINEERING as CLONING VECTORS.Chromosome Fragile Sites: Specific loci that show up during KARYOTYPING as a gap (an uncondensed stretch in closer views) on a CHROMATID arm after culturing cells under specific conditions. These sites are associated with an increase in CHROMOSOME FRAGILITY. They are classified as common or rare, and by the specific culture conditions under which they develop. Fragile site loci are named by the letters "FRA" followed by a designation for the specific chromosome, and a letter which refers to which fragile site of that chromosome (e.g. FRAXA refers to fragile site A on the X chromosome. It is a rare, folic acid-sensitive fragile site associated with FRAGILE X SYNDROME.)Chromatin: The material of CHROMOSOMES. It is a complex of DNA; HISTONES; and nonhistone proteins (CHROMOSOMAL PROTEINS, NON-HISTONE) found within the nucleus of a cell.Gene Rearrangement: The ordered rearrangement of gene regions by DNA recombination such as that which occurs normally during development.Monosomy: The condition in which one chromosome of a pair is missing. In a normally diploid cell it is represented symbolically as 2N-1.Spermatocytes: Male germ cells derived from SPERMATOGONIA. The euploid primary spermatocytes undergo MEIOSIS and give rise to the haploid secondary spermatocytes which in turn give rise to SPERMATIDS.Sex Chromosome Disorders: Clinical conditions caused by an abnormal sex chromosome constitution (SEX CHROMOSOME ABERRATIONS), in which there is extra or missing sex chromosome material (either a whole chromosome or a chromosome segment).Species Specificity: The restriction of a characteristic behavior, anatomical structure or physical system, such as immune response; metabolic response, or gene or gene variant to the members of one species. It refers to that property which differentiates one species from another but it is also used for phylogenetic levels higher or lower than the species.Sequence Tagged Sites: Short tracts of DNA sequence that are used as landmarks in GENOME mapping. In most instances, 200 to 500 base pairs of sequence define a Sequence Tagged Site (STS) that is operationally unique in the human genome (i.e., can be specifically detected by the polymerase chain reaction in the presence of all other genomic sequences). 
The overwhelming advantage of STSs over mapping landmarks defined in other ways is that the means of testing for the presence of a particular STS can be completely described as information in a database.Polymorphism, Single Nucleotide: A single nucleotide variation in a genetic sequence that occurs at appreciable frequency in the population.Polymorphism, Restriction Fragment Length: Variation occurring within a species in the presence or length of DNA fragment generated by a specific endonuclease at a specific site in the genome. Such variations are generated by mutations that create or abolish recognition sites for these enzymes or change the length of the fragment.DNA, Bacterial: Deoxyribonucleic acid that makes up the genetic material of bacteria.Cell Line: Established cell cultures that have the potential to propagate indefinitely.Genetic Predisposition to Disease: A latent susceptibility to disease at the genetic level, which may be activated under certain conditions.Genes, Dominant: Genes that influence the PHENOTYPE both in the homozygous and the heterozygous state.Saccharomyces cerevisiae: A species of the genus SACCHAROMYCES, family Saccharomycetaceae, order Saccharomycetales, known as "baker's" or "brewer's" yeast. The dried form is used as a dietary supplement.DNA Transposable Elements: Discrete segments of DNA which can excise and reintegrate to another site in the genome. Most are inactive, i.e., have not been found to exist outside the integrated state. DNA transposable elements include bacterial IS (insertion sequence) elements, Tn elements, the maize controlling elements Ac and Ds, Drosophila P, gypsy, and pogo elements, the human Tigger elements and the Tc and mariner elements which are found throughout the animal kingdom.Genes, Recessive: Genes that influence the PHENOTYPE only in the homozygous state.Philadelphia Chromosome: An aberrant form of human CHROMOSOME 22 characterized by translocation of the distal end of chromosome 9 from 9q34, to the long arm of chromosome 22 at 22q11. It is present in the bone marrow cells of 80 to 90 per cent of patients with chronic myelocytic leukemia (LEUKEMIA, MYELOGENOUS, CHRONIC, BCR-ABL POSITIVE).Azure Stains: PHENOTHIAZINES with an amino group at the 3-position that are green crystals or powder. They are used as biological stains.Sequence Homology, Nucleic Acid: The sequential correspondence of nucleotides in one nucleic acid molecule with those of another nucleic acid molecule. Sequence homology is an indication of the genetic relatedness of different organisms and gene function.Cell Nucleus: Within a eukaryotic cell, a membrane-limited body which contains chromosomes and one or more nucleoli (CELL NUCLEOLUS). The nuclear membrane consists of a double unit-type membrane which is perforated by a number of pores; the outermost membrane is continuous with the ENDOPLASMIC RETICULUM. A cell may contain more than one nucleus. 
ARX, a novel Prd-class-homeobox gene highly expressed in the telencephalon, is mutated in X-linked mental retardation.
Investigation of a critical region for an X-linked mental retardation (XLMR) locus led us to identify a novel Aristaless-related homeobox gene (ARX). Inherited and de novo ARX mutations, including missense mutations and in-frame duplications/insertions leading to expansions of polyalanine tracts in ARX, were found in nine familial and one sporadic case of MR. In contrast to other genes involved in XLMR, ARX expression is specific to the telencephalon and ventral thalamus. Notably, there is an absence of expression in the cerebellum throughout development and also in the adult. The absence of detectable brain malformations in patients suggests that ARX may have an essential role, in mature neurons, required for the development of cognitive abilities.
Chromosome abnormalities in sperm from infertile men with asthenoteratozoospermia.
Research over the past few years has clearly demonstrated that infertile men have an increased frequency of chromosome abnormalities in their sperm. These studies have been further corroborated by an increased frequency of chromosome abnormalities in newborns and fetuses from pregnancies established by intracytoplasmic sperm injection. Most studies have considered men with any type of infertility. However, it is possible that some types of infertility carry an increased risk of sperm chromosome abnormalities, whereas others do not. We studied 10 men with a specific type of infertility, asthenozoospermia (poor motility), by multicolor fluorescence in situ hybridization analysis to determine whether they had an increased frequency of disomy for chromosomes 13 and 21 and for the sex chromosomes (XX, YY, and XY), as well as diploidy. The patients ranged in age from 28 to 42 yr (mean 34.1 yr); they were compared with 18 normal control donors whose ages ranged from 23 to 58 yr (mean 35.6 yr). A total of 201,416 sperm were analyzed in the men with asthenozoospermia, with a minimum of 10,000 sperm analyzed per chromosome probe per donor. There was a significant increase in the frequency of disomy in men with asthenozoospermia compared with controls for chromosomes 13 and XX. Thus, this study indicates that infertile men with poorly motile sperm but normal concentration have a significantly increased frequency of sperm chromosome abnormalities.
MOUSE (Mitochondrial and Other Useful SEquences): a compilation of population genetic markers.
Mitochondrial and Other Useful SEquences (MOUSE) is an integrated and comprehensive compilation of mtDNA from hypervariable regions I and II and of the low-recombining nuclear locus Xq13.3 from about 11,200 humans and great apes, whose geographic and, if applicable, linguistic classification is stored with their aligned sequences and publication details. The goal is to provide population geneticists and genetic epidemiologists with a comprehensive and user-friendly repository of sequences and population information that is usually dispersed across a variety of other sources. AVAILABILITY: http://www.gen-epi.de/mouse. SUPPLEMENTARY INFORMATION: Documentation and detailed information on population subgroups is available on the homepage: http://www.gen-epi.de/mouse
Bipolar disorder susceptibility region on Xq24-q27.1 in Finnish families.
Bipolar disorder (BPD) is a common disorder characterized by episodes of mania, hypomania and depression. The genetic background of BPD remains undefined, although several putative loci predisposing to BPD have been identified. We have earlier reported significant evidence of linkage for BPD to chromosome Xq24-q27.1 in an extended pedigree from the late settlement region of the genetically isolated population of Finland. Further, we established a distinct chromosomal haplotype covering a 19 cM region on Xq24-q27.1 co-segregating with the disorder. Here, we have further analyzed this X-chromosomal region using a denser marker map and monitored X-chromosomal haplotypes in a study sample of 41 Finnish bipolar families. Only a fraction of the families provided any evidence of linkage to this region, suggesting that a relatively rare gene predisposing to BPD is enriched in this linked pedigree. The genome-wide scan for BPD-predisposing loci in this large pedigree indicated that this particular X-chromosomal region provides the best evidence of linkage genome-wide, suggesting an X-chromosomal gene with a major role in the genetic predisposition to BPD in this family.
Sperm aneuploidy rates in younger and older men.
BACKGROUND: In order to assess the possible risk of chromosomal abnormalities in offspring from older fathers, we investigated the effects of age on the frequency of chromosomal aneuploidy in human sperm. METHODS AND RESULTS: Semen samples were collected from 15 men aged <30 years (24.8 +/- 2.4 years) and from eight men aged >60 years (65.3 +/- 3.9 years) from the general population. No significant differences in ejaculate volume, sperm concentration and sperm morphology were found, whereas sperm motility was significantly lower in older men (P = 0.002). Among the hormone values, only FSH was significantly elevated in the older men (P = 0.004). Multicolour fluorescence in-situ hybridization was used to determine the aneuploidy frequencies of two autosomes (9 and 18) and of both sex chromosomes, using directly labelled satellite DNA probes on decondensed sperm nuclei. A minimum of 8,000 sperm per donor and >330,000 sperm in total were evaluated. The disomy rates per analysed chromosome were 0.1-2.3% in younger men and 0.1-1.8% in older men. The aneuploidy rates determined for both sex chromosomes and for autosomes 9 and 18 were not significantly different between the age groups. CONCLUSIONS: The results suggest that men of advanced age who still want to become fathers do not have a significantly higher risk than younger men of procreating offspring with chromosomal abnormalities.
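The comparison described in this abstract comes down to contrasting disomy counts against the total number of sperm scored in each age group. As a rough illustration of that kind of test (this is not the authors' actual analysis, and the counts below are invented for the example), a two-by-two chi-squared comparison in Python might look like this:

```python
# Illustrative sketch only: hypothetical counts, not data from the study above.
from scipy.stats import chi2_contingency

# Rows: younger donors, older donors; columns: disomic sperm, normal sperm.
younger = [45, 80_000 - 45]   # e.g. 45 disomic nuclei out of 80,000 scored
older   = [38, 70_000 - 38]   # e.g. 38 disomic nuclei out of 70,000 scored

chi2, p_value, dof, expected = chi2_contingency([younger, older])
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
```

A non-significant p-value in such a comparison would be consistent with the authors' conclusion that disomy rates did not differ between the age groups.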
Meta-analysis of genotype-phenotype correlation in X-linked Alport syndrome: impact on clinical counselling.
BACKGROUND: Alport syndrome (AS) is a hereditary nephropathy characterized by progressive renal failure, hearing loss and ocular lesions. Numerous mutations of the COL4A5 gene encoding the alpha 5 chain of type IV collagen have been described, establishing the molecular cause of AS. The goal of the present study was to identify genotype-phenotype correlations that are helpful in clinical counselling. COL4A5 mutations (n=267) in males were analysed, including those from 23 German Alport families. METHODS: Exons of the COL4A5 gene were PCR-amplified and screened by Southern blot, direct sequencing or denaturing gradient gel electrophoresis. Phenotypes were obtained by questionnaires or extracted from 44 publications in the literature. Data were analysed by Kaplan-Meier statistics, chi-squared and Kruskal-Wallis tests. RESULTS: Genotype-phenotype data for 23 German Alport families are reported. Analysis of these data and of mutations published in the literature showed the type of mutation to be a significant predictor of the age of end-stage renal failure (ESRF). The patients' renal phenotypes could be grouped into three cohorts: (1) large rearrangements, frameshift, nonsense, and splice donor mutations had a mean ESRF age of 19.8+/-5.7 years; (2) non-glycine or 3' glycine missense mutations, in-frame deletions/insertions and splice acceptor mutations had a mean ESRF age of 25.7+/-7.2 years and fewer extrarenal symptoms; (3) 5' glycine substitutions had an even later onset of ESRF at 30.1+/-7.2 years. Glycine substitutions occurred less commonly de novo than all other mutations (5.5% vs 13.9%). However, due to the evolutionary advantage of their moderate phenotype, they were the most common mutations. The intrafamilial phenotype of an individual mutation was found to be very consistent with regard to the manifestation of deafness, lenticonus and the time of onset of ESRF. CONCLUSIONS: Knowledge of the mutation adds significant information about the progression of renal and extrarenal disease in males with X-linked AS. We suggest that the considerable prognostic relevance of a patient's genotype should be included in the classification of the Alport phenotype.
Low frequency of MECP2 mutations in mentally retarded males.
A high frequency of mutations in the methyl CpG-binding protein 2 (MECP2) gene has recently been reported in males with nonspecific X-linked mental retardation. The results of this previous study suggested that the frequency of MECP2 mutations in the mentally retarded population was comparable to that of CGG expansions in FMR1. In view of these data, we performed MECP2 mutation analysis in a cohort of 475 mentally retarded males who were negative for FMR1 CGG repeat expansion. Five novel changes, detected in seven patients, were predicted to change the MECP2 coding sequence. Except for one, these changes were not found in a control population. While this result appeared to suggest a high mutation rate, this conclusion was not supported by segregation studies. Indeed, three of the five changes could be traced in unaffected male family members. For another change, segregation analysis in the family was not possible. Only one mutation, a frameshift created by a deletion of two bases, was found to be de novo. This study clearly shows the importance of segregation analysis for low-frequency mutations, in order to distinguish them from rare polymorphisms. The true frequency of MECP2 mutations in the mentally retarded has probably been overestimated. Based on our data, the frequency of MECP2 mutations in mentally retarded males is 0.2% (1/475).
Species-specific subcellular localization of RPGR and RPGRIP isoforms: implications for the phenotypic variability of congenital retinopathies among species.
The retinitis pigmentosa GTPase regulator (RPGR) is encoded by the X-linked RP3 locus, which, upon genetic lesion, leads to neurodegeneration of photoreceptors and blindness. The findings that RPGR specifically and directly interacts in vivo and in vitro with the retina-specific RPGR-interacting protein 1 (RPGRIP) and that human mutations in RPGR uncouple its interaction with RPGRIP provided the first clue to the retina-specific pathogenesis of X-linked RP3. Recently, mutations in RPGRIP were found to lead to the retinal dystrophy Leber congenital amaurosis. However, mouse models null for RPGR had, surprisingly, a very mild phenotype compared with those observed in XLRP3-affected humans and dogs. Moreover, recent reports are seemingly in disagreement on the localization of RPGR and RPGRIP in photoreceptors. These discrepancies were compounded by the finding of RPGR mutations leading exclusively to X-linked cone dystrophy. To resolve these discrepancies and to gain further insight into the pathology associated with RPGR- and RPGRIP-allied retinopathies, we now show, using several isoform-specific antibodies, that RPGR and RPGRIP isoforms are distributed and co-localized at restricted foci throughout the outer segments of human and bovine, but not mouse, rod photoreceptors. In humans, they also localize in cone outer segments. RPGRIP is also expressed in other neurons, such as amacrine cells. Thus, the data lend support to the existence of species-specific subcellular processes governing the function and/or organization of the photoreceptor outer segment, as reflected by the species-specific localization of RPGR and RPGRIP protein isoforms in this compartment, and provide a rationale for the disparity of phenotypes among species and in the human.
Climate change is largely caused by human activities, such as the emission of large amounts of greenhouse gases, and it will affect human health in negative ways. A new Swedish study, based on a survey of 5,000 women and men of different ages, shows that it is possible to eat in a way that is smart for both people and the environment.
Katarina Bälter (Professor of Public Health at Mälardalen University in Sweden) investigated whether it is possible to get all the nutrients we need from food habits that generate low greenhouse gas emissions, compared with food habits associated with high greenhouse gas emissions. The results show that there are people whose food habits have a low carbon footprint and at the same time adhere to the dietary guidelines in the Nordic Nutrition Recommendations.
Climate change is an urgent global issue, and the food sector is an important contributor to greenhouse gas emissions. Between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress. Because climate change will affect human health in the future, it is important to reduce emissions of greenhouse gases in all sectors of society, and we can all contribute in different ways.
“The aim of this research is to draw attention to the fact that climate-friendly food habits can also be healthy food habits and can contribute to a sustainable and healthy lifestyle,” says Katarina Bälter.
This study showed that there are large differences in diet-related greenhouse gas emissions. Men and older people tend to have food habits that generate higher emissions than those of women and younger people. Beef, pork and dairy products generate the most greenhouse gas emissions, while plant-based foods such as roots, beans, grains, vegetables and fruit generate the least. But it is also important to think about eating seasonally, as fresh fruit can have a high climate impact if transported by air from the other side of the globe. “Overall it is possible to eat both climate friendly and nutritious,” says Katarina Bälter.
In the next steps of the study, Katarina Bälter will look more closely at the link between food habits with low climate impact and human health.
Kids, Fruits and Veggies: A Pediatrician’s Tips
Posted on Dec 10, 2012
Fruits and vegetables are an important part of a healthy diet, but getting our children to eat the right things can feel like pulling teeth. What should you do when your son gags at the sight of broccoli, or your daughter reaches for soda and candy instead of fruit?
Or, at the opposite end of the spectrum, what should you do if your child visits a local farm and vows never to eat a cow, pig or chicken again? Is it possible for a child to be healthy on a vegetarian diet?
Pediatrician Swati Pandya, M.D., answers parents’ questions about the importance of fruits and vegetables in children’s diets, how to encourage kids to eat more of these health boosters and meeting the dietary needs of young vegetarians.
How many fruits and vegetables should my child eat?
According to U.S. Department of Agriculture (USDA) guidelines, half of our daily diet should consist of fruits and vegetables, combined with smaller portions of grains, protein and dairy.
However, according to the U.S. Centers for Disease Control and Prevention, the majority of Americans do not eat nearly enough fruits and vegetables. As our kids get older, the problem worsens. A 2009 CDC report states that only 32 percent of high school students eat at least two servings of fruit daily and a mere 13 percent eat at least three servings of vegetables every day.
Why are fruits and vegetables so important, and are some better than others?
Fruits and veggies are important for many reasons. First, they contain nutrients that are difficult to find in other food sources, including folate; magnesium; potassium; dietary fiber; and vitamins A, C and K. Second, they reduce the risk of many chronic diseases. Third, they are naturally low in calories when prepared without adding fats or sugars – which helps maintain a healthy weight.
The most nutrient-rich vegetables are dark green, such as broccoli, spinach, collards, and turnip greens. Bright red and orange vegetables, such as tomatoes, pumpkins, sweet potatoes, and red peppers, also tend to have a lot of nutrients. It’s best to eat whole fruits, whether fresh, canned, frozen or dried, as opposed to juice, which often contains fewer nutrients and more sugar and other additives.
How can I get my child to eat more fruits and veggies?
There are several ways you can serve up more fruits and veggies. Here are a few tips you can try:
- Serve them when your child is hungry. When your child gets home from school and is ravenous, put out a plate of veggies, such as carrots, bell peppers and cucumbers – along with hummus or another healthy dip.
- Encourage your child to try new fruits and veggies. If your child is resistant to trying new foods, implement the “try it first, then you can say no” rule. Even if your child rejects the new food after the first one or two tries, he or she may grow to like it over time.
- Market the health benefits of fruits and veggies using fun kid language. For example, tell your child carrots will give her “superhero vision” or spinach will make him “strong like Popeye.”
- Engage your kids in shopping and cooking. Many children enjoy making their own food choices and helping to prepare meals – and they are often more likely to eat what they select and cook.
- Make fruits and vegetables easy to access. Keep a bowl of fruit on the kitchen counter and stock your fridge with chopped-up, easy-to-grab veggies so that when your child goes hunting for a snack, healthy options are readily available.
- Find ways to add more fruits and veggies to your child’s favorite meals. For example, add pureed veggies to quesadillas and add chopped bananas to hot or cold cereals.
- Be a positive role model. Your child is watching everything you do, so be sure to eat a variety of fruits and vegetables as part of your daily diet.
My child wants to be a vegetarian, but I’ve heard many definitions and am not sure what it means. What are the different types of vegetarianism?
Your child may be exploring a vegetarian diet for many reasons, including concern for animals, the environment or his or her own health. People use the term vegetarian to describe a number of diets, but here are a few of the most common types:
- Vegans eat only food from plant sources.
- Ovo-vegetarians eat eggs, but no meat.
- Lacto-ovo vegetarians eat dairy and egg products, but no meat.
- Lacto-vegetarians eat dairy products, but no eggs or meat.
- Semi-vegetarians eat poultry or fish, but no red meat.
Is it possible for my child to be healthy as a vegetarian?
The simple answer is yes. In reality, the answer is a bit more complicated. Children can get all the nutrients they need from a vegetarian diet – but only if the diet is well planned and balanced. This is especially important if the diet doesn’t include dairy and egg products. And you’ll need to adjust the diet to meet the changing nutritional requirements of your child as he or she grows.
If your child or teen is going to become a vegetarian and you are unfamiliar with this diet, talk to your child’s doctor or a registered dietitian who can help you plan a healthy vegetarian diet. It’s important for the diet to include enough calories and nutrients for each stage of your child’s growth and development.
A properly planned vegetarian diet is high in fiber and low in fat. This offers many health benefits, including better cardiovascular health, lower blood cholesterol and reduced risk of obesity.
What foods should I encourage my child to eat as part of a healthy vegetarian diet?
To reap the health benefits of a vegetarian diet, your child should consume adequate calories and the following nutrients in the proper amounts for his or her stage of development:
- Protein, found in dairy products, eggs, soy products, beans and nuts
- Calcium, found in dairy products; broccoli; dark green leafy vegetables; dried beans; and calcium-fortified products such as orange juice, soy and rice drinks, and cereals
- Vitamin B12, found in dairy products; eggs; and vitamin-fortified products such as cereals, breads, and soy and rice drinks
- Vitamin D, found in milk and vitamin D-fortified products such as orange juice
- Iron, found in eggs, whole grains, leafy green vegetables, dried beans, dried fruits, and iron-fortified cereals and bread
- Zinc, found in wheat germ, nuts, dried beans, and fortified cereal
No matter how you dice it, eating lots of fruits and vegetables is important to your child’s health. So start buying, chopping, and serving up those fruits and veggies as much as possible to help your child enjoy a longer, healthier life.
This page uses content from Wikipedia and is licensed under CC BY-SA.
Theatre or theater is a collaborative form of fine art that uses live performers, typically actors or actresses, to present the experience of a real or imagined event before a live audience in a specific place, often a stage. The performers may communicate this experience to the audience through combinations of gesture, speech, song, music, and dance. Elements of art, such as painted scenery and stagecraft such as lighting are used to enhance the physicality, presence and immediacy of the experience. The specific place of the performance is also named by the word "theatre" as derived from the Ancient Greek θέατρον (théatron, "a place for viewing"), itself from θεάομαι (theáomai, "to see", "to watch", "to observe").
Modern Western theatre comes, in large measure, from ancient Greek drama, from which it borrows technical terminology, classification into genres, and many of its themes, stock characters, and plot elements. Theatre artist Patrice Pavis defines theatricality, theatrical language, stage writing and the specificity of theatre as synonymous expressions that differentiate theatre from the other performing arts, literature and the arts in general.
The city-state of Athens is where western theatre originated. It was part of a broader culture of theatricality and performance in classical Greece that included festivals, religious rituals, politics, law, athletics and gymnastics, music, poetry, weddings, funerals, and symposia.
Participation in the city-state's many festivals—and mandatory attendance at the City Dionysia as an audience member (or even as a participant in the theatrical productions) in particular—was an important part of citizenship. Civic participation also involved the evaluation of the rhetoric of orators evidenced in performances in the law-court or political assembly, both of which were understood as analogous to the theatre and increasingly came to absorb its dramatic vocabulary. The Greeks also developed the concepts of dramatic criticism and theatre architecture. Actors were either amateur or at best semi-professional. The theatre of ancient Greece consisted of three types of drama: tragedy, comedy, and the satyr play.
The origins of theatre in ancient Greece, according to Aristotle (384–322 BCE), the first theoretician of theatre, are to be found in the festivals that honoured Dionysus. The performances were given in semi-circular auditoria cut into hillsides, capable of seating 10,000–20,000 people. The stage consisted of a dancing floor (orchestra), dressing room and scene-building area (skene). Since the words were the most important part, good acoustics and clear delivery were paramount. The actors (always men) wore masks appropriate to the characters they represented, and each might play several parts.
Athenian tragedy—the oldest surviving form of tragedy—is a type of dance-drama that formed an important part of the theatrical culture of the city-state. Having emerged sometime during the 6th century BCE, it flowered during the 5th century BCE (from the end of which it began to spread throughout the Greek world), and continued to be popular until the beginning of the Hellenistic period.
No tragedies from the 6th century BCE, and only 32 of the more than a thousand performed during the 5th century BCE, have survived. We have complete texts extant by Aeschylus, Sophocles, and Euripides. The origins of tragedy remain obscure, though by the 5th century BCE it was institutionalised in competitions (agon) held as part of festivities celebrating Dionysus (the god of wine and fertility). As contestants in the City Dionysia's competition (the most prestigious of the festivals to stage drama), playwrights were required to present a tetralogy of plays (though the individual works were not necessarily connected by story or theme), which usually consisted of three tragedies and one satyr play. The performance of tragedies at the City Dionysia may have begun as early as 534 BCE; official records (didaskaliai) begin from 501 BCE, when the satyr play was introduced.
Most Athenian tragedies dramatise events from Greek mythology, though The Persians—which stages the Persian response to news of their military defeat at the Battle of Salamis in 480 BCE—is the notable exception in the surviving drama. When Aeschylus won first prize for it at the City Dionysia in 472 BCE, he had been writing tragedies for more than 25 years, yet its tragic treatment of recent history is the earliest example of drama to survive. More than 130 years later, the philosopher Aristotle analysed 5th-century Athenian tragedy in the oldest surviving work of dramatic theory—his Poetics (c. 335 BCE).
Athenian comedy is conventionally divided into three periods, "Old Comedy", "Middle Comedy", and "New Comedy". Old Comedy survives today largely in the form of the eleven surviving plays of Aristophanes, while Middle Comedy is largely lost (preserved only in relatively short fragments in authors such as Athenaeus of Naucratis). New Comedy is known primarily from the substantial papyrus fragments of Menander. Aristotle defined comedy as a representation of laughable people that involves some kind of blunder or ugliness that does not cause pain or disaster.
In addition to the categories of comedy and tragedy at the City Dionysia, the festival also included the satyr play. Finding its origins in rural, agricultural rituals dedicated to Dionysus, the satyr play eventually found its way to Athens in its most well-known form. Satyrs themselves were tied to the god Dionysus as his loyal woodland companions, often engaging in drunken revelry and mischief at his side. The satyr play itself was classified as tragicomedy, leaning toward the more modern burlesque traditions of the early twentieth century. The plotlines of the plays were typically concerned with the dealings of the pantheon of gods and their involvement in human affairs, backed by the chorus of satyrs. However, according to Webster, satyr actors did not always perform typical satyr actions and would break from the acting traditions assigned to the character type of a mythical forest creature.
Western theatre developed and expanded considerably under the Romans. The Roman historian Livy wrote that the Romans first experienced theatre in the 4th century BCE, with a performance by Etruscan actors. Beacham argues that they had been familiar with "pre-theatrical practices" for some time before that recorded contact. The theatre of ancient Rome was a thriving and diverse art form, ranging from festival performances of street theatre, nude dancing, and acrobatics, to the staging of Plautus's broadly appealing situation comedies, to the high-style, verbally elaborate tragedies of Seneca. Although Rome had a native tradition of performance, the Hellenization of Roman culture in the 3rd century BCE had a profound and energizing effect on Roman theatre and encouraged the development of Latin literature of the highest quality for the stage. The only surviving Roman tragedies, indeed the only plays of any kind surviving from the Roman Empire, are ten dramas, nine of them tragedies on Greek mythological subjects, attributed to Lucius Annaeus Seneca (c. 4 BCE–65 CE), the Corduba-born Stoic philosopher and tutor of Nero.
The earliest-surviving fragments of Sanskrit drama date from the 1st century AD. The wealth of archeological evidence from earlier periods offers no indication of the existence of a tradition of theatre. The ancient Vedas (hymns from between 1500 and 1000 BC that are among the earliest examples of literature in the world) contain no hint of it (although a small number are composed in a form of dialogue) and the rituals of the Vedic period do not appear to have developed into theatre. The Mahābhāṣya by Patañjali contains the earliest reference to what may have been the seeds of Sanskrit drama. This treatise on grammar from 140 BC provides a feasible date for the beginnings of theatre in India.
The major source of evidence for Sanskrit theatre is A Treatise on Theatre (Nātyaśāstra), a compendium whose date of composition is uncertain (estimates range from 200 BC to 200 AD) and whose authorship is attributed to Bharata Muni. The Treatise is the most complete work of dramaturgy in the ancient world. It addresses acting, dance, music, dramatic construction, architecture, costuming, make-up, props, the organisation of companies, the audience and competitions, and it offers a mythological account of the origin of theatre. In doing so, it provides indications about the nature of actual theatrical practices. Sanskrit theatre was performed on sacred ground by priests who had been trained in the necessary skills (dance, music, and recitation) in a hereditary process. Its aim was both to educate and to entertain.
Under the patronage of royal courts, performers belonged to professional companies that were directed by a stage manager (sutradhara), who may also have acted. This task was thought of as being analogous to that of a puppeteer—the literal meaning of "sutradhara" is "holder of the strings or threads". The performers were trained rigorously in vocal and physical technique. There were no prohibitions against female performers; companies were all-male, all-female, and of mixed gender. Certain sentiments were considered inappropriate for men to enact, however, and were thought better suited to women. Some performers played characters their own age, while others played ages different from their own (whether younger or older). Of all the elements of theatre, the Treatise gives most attention to acting (abhinaya), which consists of two styles: realistic (lokadharmi) and conventional (natyadharmi), though the major focus is on the latter.
Its drama is regarded as the highest achievement of Sanskrit literature. It utilised stock characters, such as the hero (nayaka), heroine (nayika), or clown (vidusaka). Actors may have specialised in a particular type. Kālidāsa, in the 1st century BCE, is arguably ancient India's greatest Sanskrit dramatist. Three famous romantic plays written by Kālidāsa are the Mālavikāgnimitram (Mālavikā and Agnimitra), Vikramuurvashiiya (Pertaining to Vikrama and Urvashi), and Abhijñānaśākuntala (The Recognition of Shakuntala). The last was inspired by a story in the Mahabharata and is the most famous. It was the first to be translated into English and German. Śakuntalā (in English translation) influenced Goethe's Faust (1808–1832).
The next great Indian dramatist was Bhavabhuti (c. 7th century AD). He is said to have written the following three plays: Malati-Madhava, Mahaviracharita and Uttar Ramacharita. Among these three, the last two cover between them the entire epic of Ramayana. The powerful Indian emperor Harsha (606–648) is credited with having written three plays: the comedy Ratnavali, Priyadarsika, and the Buddhist drama Nagananda.
There are references to theatrical entertainments in China as early as the Shang Dynasty; they often involved happiness, mimes, and acrobatic displays. The Tang Dynasty is sometimes known as "The Age of 1000 Entertainments". During this era, Ming Huang formed an acting school known as The Pear Garden to produce a form of drama that was primarily musical. That is why actors are commonly called "Children of the Pear Garden." During the Dynasty of Empress Ling, shadow puppetry first emerged as a recognized form of theatre in China. There were two distinct forms of shadow puppetry, Pekingese (northern) and Cantonese (southern). The two styles were differentiated by the method of making the puppets and the positioning of the rods on the puppets, as opposed to the type of play performed by the puppets. Both styles generally performed plays depicting great adventure and fantasy; rarely was this very stylized form of theatre used for political propaganda.
Cantonese shadow puppets were the larger of the two. They were built using thick leather, which created more substantial shadows. Symbolic color was also very prevalent; a black face represented honesty, a red one bravery. The rods used to control Cantonese puppets were attached perpendicular to the puppets’ heads, so they were not seen by the audience when the shadow was created. Pekingese puppets were more delicate and smaller. They were created out of thin, translucent leather (usually taken from the belly of a donkey). They were painted with vibrant paints, so they cast a very colorful shadow. The thin rods which controlled their movements were attached to a leather collar at the neck of the puppet. The rods ran parallel to the bodies of the puppet, then turned at a ninety-degree angle to connect to the neck. While these rods were visible when the shadow was cast, they lay outside the shadow of the puppet and thus did not interfere with the appearance of the figure. The rods attached at the necks to facilitate the use of multiple heads with one body. When the heads were not being used, they were stored in a muslin book or fabric-lined box. The heads were always removed at night. This was in keeping with the old superstition that, if left intact, the puppets would come to life at night. Some puppeteers went so far as to store the heads in one book and the bodies in another, to further reduce the possibility of reanimating the puppets. Shadow puppetry is said to have reached its highest point of artistic development in the eleventh century before becoming a tool of the government.
In the Song Dynasty, there were many popular plays involving acrobatics and music. These developed in the Yuan Dynasty into a more sophisticated form known as zaju, with a four- or five-act structure. Yuan drama spread across China and diversified into numerous regional forms, the best known of which is Beijing Opera, which is still popular today.
Xiangsheng is a traditional Chinese comedic performance in the form of a monologue or dialogue.
Theatre took on many alternate forms in the West between the 15th and 19th centuries, including commedia dell'arte and melodrama. The general trend was away from the poetic drama of the Greeks and the Renaissance and toward a more naturalistic prose style of dialogue, especially following the Industrial Revolution.
Theatre came to a halt in England between 1642 and 1660 because of the Puritan Interregnum. Theatre was seen as something sinful, and the Puritans tried very hard to drive it out of their society. This stagnant period ended with the Restoration, once Charles II came back to the throne in 1660. Theatre (among other arts) exploded, with influence from French culture, since Charles had been exiled in France in the years before his reign.
One of the big changes was the new theatre house. Instead of the Elizabethan-era type, such as the Globe Theatre, which was round, offered no place for the actors to prepare for the next act, and had no "theatre manners", the theatre house was transformed into a place of refinement, with a stage in front and stadium seating facing it. Since seating was no longer all the way around the stage, it became prioritized: some seats were obviously better than others. The king would have the best seat in the house: the very middle of the theatre, which got the widest view of the stage as well as the best view of the perspective and vanishing point that the stage was constructed around. Philippe Jacques de Loutherbourg was one of the most influential set designers of the time because of his use of floor space and scenery.
Because of the turmoil before this time, there was still some controversy about what should and should not be put on the stage. Jeremy Collier, a preacher, was one of the leading figures in this movement through his piece A Short View of the Immorality and Profaneness of the English Stage. The beliefs in this paper were mainly held by non-theatre-goers, the remaining Puritans and the very religious of the time. The main question was whether seeing something immoral on stage affects behavior in the lives of those who watch it, a controversy that is still playing out today.
The seventeenth century also introduced women to the stage, which had been considered inappropriate earlier. These women were regarded as celebrities (also a newer concept, thanks to ideas on individualism that arose in the wake of Renaissance Humanism); on the other hand, their presence on the stage was still very new and revolutionary, and some regarded them as unladylike and looked down on them. Charles II did not like young men playing the parts of young women, so he asked that women play their own parts. Because women were allowed on the stage, playwrights had more leeway with plot twists, like women dressing as men, and narrow escapes from morally sticky situations as forms of comedy.
Comedies were full of the young and very much in vogue, with the storyline following their love lives: commonly a young roguish hero professing his love to the chaste and free-minded heroine near the end of the play, much like Sheridan's The School for Scandal. Many of the comedies were fashioned after the French tradition, mainly Molière, again hailing back to the French influence brought back by the King and the Royals after their exile. Molière was one of the top comedic playwrights of the time, revolutionizing the way comedy was written and performed by combining Italian commedia dell'arte and neoclassical French comedy to create some of the longest-lasting and most influential satiric comedies. Tragedies were similarly successful in their sense of righting political power, especially poignant because of the recent Restoration of the Crown. They were also imitations of French tragedy, although the French had a sharper distinction between comedy and tragedy, whereas the English fudged the lines occasionally and put some comedic parts in their tragedies. Common forms of non-comedic plays were sentimental comedies as well as what would later be called tragédie bourgeoise, or domestic tragedy (that is, the tragedy of common life), which was more popular in England because it appealed more to English sensibilities.
While theatre troupes were formerly often travelling, the idea of the national theatre gained support in the 18th century, inspired by Ludvig Holberg. The major promoter of the idea of the national theatre in Germany, and also of the Sturm und Drang poets, was Abel Seyler, the owner of the Hamburgische Entreprise and the Seyler Theatre Company.
Through the 19th century, the popular theatrical forms of Romanticism, melodrama, Victorian burlesque and the well-made plays of Scribe and Sardou gave way to the problem plays of Naturalism and Realism; the farces of Feydeau; Wagner's operatic Gesamtkunstwerk; musical theatre (including Gilbert and Sullivan's operas); F. C. Burnand's, W. S. Gilbert's and Oscar Wilde's drawing-room comedies; Symbolism; proto-Expressionism in the late works of August Strindberg and Henrik Ibsen; and Edwardian musical comedy.
These trends continued through the 20th century in the realism of Stanislavski and Lee Strasberg, the political theatre of Erwin Piscator and Bertolt Brecht, the so-called Theatre of the Absurd of Samuel Beckett and Eugène Ionesco, American and British musicals, the collective creations of companies of actors and directors such as Joan Littlewood's Theatre Workshop, experimental and postmodern theatre of Robert Wilson and Robert Lepage, the postcolonial theatre of August Wilson or Tomson Highway, and Augusto Boal's Theatre of the Oppressed.
The first form of Indian theatre was the Sanskrit theatre. It began after the development of Greek and Roman theatre and before the development of theatre in other parts of Asia. It emerged sometime between the 2nd century BCE and the 1st century CE and flourished between the 1st century CE and the 10th, which was a period of relative peace in the history of India during which hundreds of plays were written. Japanese forms of Kabuki, Nō, and Kyōgen developed in the 17th century CE. Theatre in the medieval Islamic world included puppet theatre (which included hand puppets, shadow plays and marionette productions) and live passion plays known as ta'ziya, where actors re-enact episodes from Muslim history. In particular, Shia Islamic plays revolved around the shaheed (martyrdom) of Ali's sons Hasan ibn Ali and Husayn ibn Ali. Secular plays were known as akhraja, recorded in medieval adab literature, though they were less common than puppetry and ta'ziya theatre.
Drama is the specific mode of fiction represented in performance. The term comes from a Greek word meaning "action", which is derived from the verb δράω, dráō, "to do" or "to act". The enactment of drama in theatre, performed by actors on a stage before an audience, presupposes collaborative modes of production and a collective form of reception. The structure of dramatic texts, unlike other forms of literature, is directly influenced by this collaborative production and collective reception. The early modern tragedy Hamlet (1601) by Shakespeare and the classical Athenian tragedy Oedipus Rex (c. 429 BCE) by Sophocles are among the masterpieces of the art of drama. A modern example is Long Day's Journey into Night by Eugene O'Neill (1956).
Considered as a genre of poetry in general, the dramatic mode has been contrasted with the epic and the lyrical modes ever since Aristotle's Poetics (c. 335 BCE)—the earliest work of dramatic theory. The use of "drama" in the narrow sense to designate a specific type of play dates from the 19th century. Drama in this sense refers to a play that is neither a comedy nor a tragedy—for example, Zola's Thérèse Raquin (1873) or Chekhov's Ivanov (1887). In Ancient Greece however, the word drama encompassed all theatrical plays, tragic, comic, or anything in between.
Drama is often combined with music and dance: the drama in opera is generally sung throughout; musicals generally include both spoken dialogue and songs; and some forms of drama have incidental music or musical accompaniment underscoring the dialogue (melodrama and Japanese Nō, for example). In certain periods of history (the ancient Roman and modern Romantic) some dramas have been written to be read rather than performed. In improvisation, the drama does not pre-exist the moment of performance; performers devise a dramatic script spontaneously before an audience.
Music and theatre have had a close relationship since ancient times—Athenian tragedy, for example, was a form of dance-drama that employed a chorus whose parts were sung (to the accompaniment of an aulos—an instrument comparable to the modern clarinet), as were some of the actors' responses and their 'solo songs' (monodies). Modern musical theatre is a form of theatre that also combines music, spoken dialogue, and dance. It emerged from comic opera (especially Gilbert and Sullivan), variety, vaudeville, and music hall genres of the late 19th and early 20th century. After the Edwardian musical comedy that began in the 1890s, the Princess Theatre musicals of the early 20th century, and the comedies of the 1920s and 1930s (such as the works of Rodgers and Hart), musicals moved in a more dramatic direction with Oklahoma! (1943). Famous musicals over the subsequent decades included My Fair Lady (1956), West Side Story (1957), The Fantasticks (1960), Hair (1967), A Chorus Line (1975), Les Misérables (1980), Into the Woods (1986), and The Phantom of the Opera (1986), as well as more contemporary hits including Rent (1994), The Lion King (1997), Wicked (2003), and Hamilton (2015).
Musical theatre may be produced on an intimate scale Off-Broadway, in regional theatres, and elsewhere, but it often includes spectacle. For instance, Broadway and West End musicals often include lavish costumes and sets supported by multimillion-dollar budgets.
Theatre productions that use humour as a vehicle to tell a story qualify as comedies. This may include a modern farce such as Boeing Boeing or a classical play such as As You Like It. Theatre expressing bleak, controversial or taboo subject matter in a deliberately humorous way is referred to as black comedy. Black comedy can span several styles, from slapstick humour to dark and sarcastic comedy.
Tragedy, then, is an imitation of an action that is serious, complete, and of a certain magnitude: in language embellished with each kind of artistic ornament, the several kinds being found in separate parts of the play; in the form of action, not of narrative; through pity and fear effecting the proper purgation of these emotions.
Aristotle's phrase "several kinds being found in separate parts of the play" is a reference to the structural origins of drama. In early drama the spoken parts were written in the Attic dialect, whereas the choral (recited or sung) parts were written in the Doric dialect; these discrepancies reflect the differing religious origins and poetic metres of the parts that were fused into a new entity, the theatrical drama.
Tragedy refers to a specific tradition of drama that has played a unique and important role historically in the self-definition of Western civilisation. That tradition has been multiple and discontinuous, yet the term has often been used to invoke a powerful effect of cultural identity and historical continuity—"the Greeks and the Elizabethans, in one cultural form; Hellenes and Christians, in a common activity," as Raymond Williams puts it. From its obscure origins in the theatres of Athens 2,500 years ago, from which there survives only a fraction of the work of Aeschylus, Sophocles and Euripides, through its singular articulations in the works of Shakespeare, Lope de Vega, Racine, and Schiller, to the more recent naturalistic tragedy of Strindberg, Beckett's modernist meditations on death, loss and suffering, and Müller's postmodernist reworkings of the tragic canon, tragedy has remained an important site of cultural experimentation, negotiation, struggle, and change. In the wake of Aristotle's Poetics (335 BCE), tragedy has been used to make genre distinctions, whether at the scale of poetry in general (where the tragic divides against epic and lyric) or at the scale of the drama (where tragedy is opposed to comedy). In the modern era, tragedy has also been defined against drama, melodrama, the tragicomic, and epic theatre.
Improvisation has been a consistent feature of theatre, with the Commedia dell'arte in the sixteenth century being recognised as the first improvisation form. Popularized by Nobel Prize winner Dario Fo and troupes such as the Upright Citizens Brigade, improvisational theatre continues to evolve with many different streams and philosophies. Keith Johnstone and Viola Spolin are recognized as the first teachers of improvisation in modern times, with Johnstone exploring improvisation as an alternative to scripted theatre and Spolin and her successors exploring improvisation principally as a tool for developing dramatic work or skills or as a form for situational comedy. Spolin also became interested in how the process of learning improvisation was applicable to the development of human potential. Spolin's son, Paul Sills, popularized improvisational theatre as a theatrical art form when he founded, as its first director, The Second City in Chicago.
Having been an important part of human culture for more than 2,500 years, theatre has evolved a wide range of different theories and practices. Some are related to political or spiritual ideologies, while others are based purely on "artistic" concerns. Some processes focus on a story, some on theatre as event, and some on theatre as catalyst for social change. The classical Greek philosopher Aristotle's seminal treatise Poetics (c. 335 BCE) is the earliest surviving work of theatrical theory, and its arguments have influenced theories of theatre ever since. In it, he offers an account of what he calls "poetry" (a term which in Greek literally means "making" and in this context includes drama—comedy, tragedy, and the satyr play—as well as lyric poetry, epic poetry, and the dithyramb). He examines its "first principles" and identifies its genres and basic elements; his analysis of tragedy constitutes the core of the discussion. He argues that tragedy consists of six qualitative parts, which are (in order of importance) mythos or "plot", ethos or "character", dianoia or "thought", lexis or "diction", melos or "song", and opsis or "spectacle". "Although Aristotle's Poetics is universally acknowledged in the Western critical tradition," Marvin Carlson explains, "almost every detail about his seminal work has aroused divergent opinions." Important theatre practitioners of the 20th century include Konstantin Stanislavski, Vsevolod Meyerhold, Jacques Copeau, Edward Gordon Craig, Bertolt Brecht, Antonin Artaud, Joan Littlewood, Peter Brook, Jerzy Grotowski, Augusto Boal, Eugenio Barba, Dario Fo, Viola Spolin, Keith Johnstone and Robert Wilson.
Stanislavski treated the theatre as an art-form that is autonomous from literature and one in which the playwright's contribution should be respected as that of only one of an ensemble of creative artists. His innovative contribution to modern acting theory has remained at the core of mainstream western performance training for much of the last century. That many of the precepts of his system of actor training seem to be common sense and self-evident testifies to its hegemonic success. Actors frequently employ his basic concepts without knowing they do so. Thanks to its promotion and elaboration by acting teachers who were former students and the many translations of his theoretical writings, Stanislavski's 'system' acquired an unprecedented ability to cross cultural boundaries and developed an international reach, dominating debates about acting in Europe and the United States. Many actors routinely equate his 'system' with the North American Method, although the latter's exclusively psychological techniques contrast sharply with Stanislavski's multivariant, holistic and psychophysical approach, which explores character and action both from the 'inside out' and the 'outside in' and treats the actor's mind and body as parts of a continuum.
Theatre presupposes collaborative modes of production and a collective form of reception. The production of plays usually involves contributions from a playwright, director, a cast of actors, and a technical production team that includes a scenic or set designer, lighting designer, costume designer, sound designer, stage manager, production manager and technical director. Depending on the production, this team may also include a composer, dramaturg, video designer or fight director.
Stagecraft is a generic term referring to the technical aspects of theatrical, film, and video production. It includes, but is not limited to, constructing and rigging scenery, hanging and focusing of lighting, design and procurement of costumes, makeup, procurement of props, stage management, and recording and mixing of sound. Stagecraft is distinct from the wider umbrella term of scenography. Considered a technical rather than an artistic field, it relates primarily to the practical implementation of a designer's artistic vision.
In its most basic form, stagecraft is managed by a single person (often the stage manager of a smaller production) who arranges all scenery, costumes, lighting, and sound, and organizes the cast. At a more professional level, for example in modern Broadway houses, stagecraft is managed by hundreds of skilled carpenters, painters, electricians, stagehands, stitchers, wigmakers, and the like. This modern form of stagecraft is highly technical and specialized: it comprises many sub-disciplines and a vast trove of history and tradition. The majority of stagecraft lies between these two extremes. Regional theatres and larger community theatres will generally have a technical director and a complement of designers, each of whom has a direct hand in their respective designs.
There are many modern theatre movements which go about producing theatre in a variety of ways. Theatrical enterprises vary enormously in sophistication and purpose. People who are involved vary from novices and hobbyists (in community theatre) to professionals (in Broadway and similar productions). Theatre can be performed with a shoestring budget or on a grand scale with multimillion-dollar budgets. This diversity manifests in the abundance of theatre sub-categories, which include:
While most modern theatre companies rehearse one piece of theatre at a time, perform that piece for a set "run", retire the piece, and begin rehearsing a new show, repertory companies rehearse multiple shows at one time. These companies are able to perform these various pieces upon request and often perform works for years before retiring them. Most dance companies operate on this repertory system. The Royal National Theatre in London performs on a repertory system.
Repertory theatre generally involves a group of similarly accomplished actors, and relies more on the reputation of the group than on an individual star actor. It also typically relies less on strict control by a director and less on adherence to theatrical conventions, since actors who have worked together in multiple productions can respond to each other without relying as much on convention or external direction.
In order to put on a piece of theatre, both a theatre company and a theatre venue are needed. When a theatre company is the sole company in residence at a theatre venue, this theatre (and its corresponding theatre company) are called a resident theatre or a producing theatre, because the venue produces its own work. Other theatre companies, as well as dance companies, who do not have their own theatre venue, perform at rental theatres or at presenting theatres. Both rental and presenting theatres have no full-time resident companies. They do, however, sometimes have one or more part-time resident companies, in addition to other independent partner companies who arrange to use the space when available. A rental theatre allows the independent companies to seek out the space, while a presenting theatre seeks out the independent companies to support their work by presenting them on their stage.
Some performance groups perform in non-theatrical spaces. Such performances can take place outside or inside, in a non-traditional performance space, and include street theatre, and site-specific theatre. Non-traditional venues can be used to create more immersive or meaningful environments for audiences. They can sometimes be modified more heavily than traditional theatre venues, or can accommodate different kinds of equipment, lighting and sets.
A touring company is an independent theatre or dance company that travels, often internationally, being presented at a different theatre in each city.
There are many theatre unions including: Actors' Equity Association (for actors and stage managers), the Stage Directors and Choreographers Society (SDC), and the International Alliance of Theatrical Stage Employees (IATSE, for designers and technicians). Many theatres require that their staff be members of these organizations.
Yoshiwara was a famous yūkaku (pleasure district) in Edo, the precursor of present-day Tokyo, Japan. In the early 17th century, the Tokugawa shogunate confined and regulated prostitution in Japan by restricting it to designated city districts in Kyoto, Osaka and Edo, in part to prevent the nouveau riche chōnin (townsmen) from engaging in political intrigue. The Yoshiwara was created in the city of Edo in 1617, near what is today known as Nihonbashi. In 1656, due to the need for space as the city grew, the government decided to relocate Yoshiwara and plans were made to move the district to its present location north of Asakusa on the outskirts of the city.
The Yoshiwara was home to some 1,750 to 3,000 women during the 18th century. The area had over 9,000 women in 1893, many of whom suffered from syphilis. Girls were typically sent there by their parents between the ages of seven and twelve. When a girl was old enough and had completed her apprenticeship, she would become a courtesan and work her way up the ranks. Social classes were not strictly divided in Yoshiwara.
A commoner with enough money would be served as an equal to a samurai. Yoshiwara became a strong commercial area. The fashions in the town changed frequently, creating a great demand for merchants and artisans. Traditionally the prostitutes were supposed to wear only simple blue robes, but this was rarely enforced. The high-ranking ladies often dressed in the highest fashion of the time, with brightly colored silk kimonos and expensive, elaborate hair decorations. Fashion was so important in Yoshiwara that it frequently dictated the fashion trends for the rest of Japan. Yoshiwara remained in business until prostitution was made illegal in 1958.
First we have to define enzymes. Enzymes are protein catalysts with specificity for both the reaction catalyzed and its substrates. They speed up cellular reactions by lowering the activation energy (the energy necessary to start a reaction).
There are different ways to classify an enzyme depending on the reaction catalyzed: oxidoreductase, transferase, hydrolase, lyase, isomerase and ligase. Ligases, as we know, join two molecules using ATP (think of joining DNA strands).
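As a rough illustration, the six top-level classes can be kept in a simple lookup table; the one-line descriptions in the sketch below are simplified textbook summaries rather than formal EC definitions.

```python
# Minimal sketch: the six top-level enzyme classes and the reactions they catalyze.
# The one-line descriptions are simplified textbook summaries, not formal EC definitions.
ENZYME_CLASSES = {
    "oxidoreductase": "transfers electrons (oxidation-reduction reactions)",
    "transferase": "moves a functional group from one molecule to another",
    "hydrolase": "breaks bonds by adding water",
    "lyase": "breaks bonds without using water or oxidation",
    "isomerase": "rearranges atoms within a single molecule",
    "ligase": "joins two molecules, typically using ATP (e.g. DNA ligase)",
}

def describe(enzyme_class: str) -> str:
    """Return a one-line description of a top-level enzyme class."""
    return ENZYME_CLASSES.get(enzyme_class.lower(), "unknown enzyme class")

print(describe("Ligase"))  # joins two molecules, typically using ATP (e.g. DNA ligase)
```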
Do you know the benefits of connecting children with nature? Most of us spent time in nature as children, climbing trees, investigating rocks and bugs, or playing outside with friends. We had no idea that our enjoyable activities strengthened our minds, bodies, and personalities for the future. A recent study found that nature exposure has various long-term effects. Here we talk about nature for children and how to help them connect with it.
What are the benefits of connecting children with nature?
The benefits of connecting children with nature include physical, emotional, and cognitive growth. Children playing in nature benefit from the ever-changing, free-flowing environment they encounter when they are outside and in contact with nature.
Regarding education, the natural world is a vast open-ended laboratory. Children are intuitive scientists who like exploring the natural world through their senses. Children learn by experimenting with concepts as they interact with natural environments. In nature programs, children develop curious minds by questioning, solving problems and making theories. Children exploring nature also practise real thinking, whether balancing on rocks or talking about hibernation. Here are some great nature games and activities for children:
- Building and digging in mud
- Worm hunts
- Gazing at clouds
- Jumping in puddles
- Listening to the birds sing
- Making bird nests
- Collecting seeds
- Constructing with natural materials
Emotional benefits of connecting children with nature
Let’s take a quick look at the emotional benefits of connecting children with nature. It’s a pleasant feeling to be outside. Outside, children are free to run around, make noise, and explore the world around them. Physical activities in nature help children to relax and also benefit their Personal, Social and Emotional Development. Children develop an Understanding of the World, where they show care and concern for animals and nature.
When children are playing outside, they may meet new people and make new friends. Playing individually or with others helps children learn sharing, turn taking and problem solving in nature. They can think independently and solve various problems when exploring outside. Playing outdoors is calming and supports children’s social interaction, communication and language skills. Remember, nature activities for young children are very beneficial. In our Indoor Forest in the nursery, the children explore nature.
Physical benefits of connecting children with nature
What are the physical benefits of connecting children with nature? Children benefit physically from being outdoors, as there is more space to run around, have fun and play bigger physical games, which increases their gross motor skill development. One of the many advantages of exposing children to the sun and fresh air is a healthy immune system. In addition to being much more physically active, children can burn more calories and improve their overall health. The following are outdoor nature activities for children:
- Climbing trees
- Playing catch
- Balancing games
- Jumping in puddles
- Nature Race
It’s not just the individual benefits that nature provides; it’s also a shared benefit for everyone. There is a universality in the shared experiences of children worldwide who play outside. Nature projects for toddlers will teach children how to care for nature, animals and the world around them.
Now you are fully aware of the benefits of connecting children with nature. In our nursery in Jumeirah, children learn about nature and outdoor activities. A child’s confidence builds over time as they learn to trust their intuition in the natural environment. We provide the nature information and natural environments children need at our nature school in Dubai.
Atrial fibrillation (AFib) is one of the most common heart arrhythmias (irregular or abnormal beating). It happens when your heart’s upper chambers (the atria) beat abnormally fast and out of rhythm. It can lead to severe complications like blood clots, stroke, and heart failure. So you must know how AFib affects your body and what to do about it.
The Heart Is a Smart Pump
The heart is a muscular organ that’s divided into four chambers. The tricuspid valve separates the right atrium and ventricle, and the mitral valve separates the left atrium and ventricle. The heart can’t just pump blood around your body; it also has to know what’s going on and how to respond. The heart can do this because of the electrical system within its walls.
The heart’s electrical system is made up of two different types of cells: pacemaker cells and conducting cells. The pacemaker cells are found in the upper right chamber. They generate an electrical impulse that travels through the conducting system to stimulate the contraction of your ventricles (right and left).
Beating of a Heart
A heartbeat is caused by electrical signals traveling through the heart chambers and valves to tell them how hard to contract at what time. The sinoatrial node (SA node) sends these signals out; it’s located at the junction of your right atrium and superior vena cava. The SA node acts as the central command for your cardiac rhythm. It keeps track of all your electrical activity and coordinates how to pump blood efficiently throughout the body.
What Happens in Atrial Fibrillation
Atrial fibrillation is a heart rhythm disorder in which disorganized electrical signals cause the atria to contract too quickly and chaotically, which causes blood to pool in the upper chambers of your heart instead of flowing smoothly into the ventricles.
It can lead to symptoms like:
- Palpitations (heart pounding)
- Shortness of breath
Types of Atrial Fibrillation
Paroxysmal atrial fibrillation (AF), the most common type, is a temporary condition that can last a few minutes or several days. When you have paroxysmal AF, you may experience symptoms such as heart palpitations and shortness of breath. Some people also experience chest pain or lightheadedness in addition to the above symptoms.
Persistent atrial fibrillation is an ongoing condition with abnormal heart rhythms that last longer than three months. It is not considered chronic AFib, however, if you don’t experience any symptoms from it.
Permanent atrial fibrillation is another type of long-term irregular heartbeat that occurs when all the cells in your heart’s upper chambers develop arrhythmias (irregular electrical signals) for an extended period.
Atrial fibrillation is a common heart condition that can be serious. It’s important to know the symptoms of atrial fibrillation and how you can manage it. If you think you have atrial fibrillation, contact your doctor right away so they can help diagnose and treat the condition before it gets worse!
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
What happens to the particles in a star when it collapses to form a neutron star? I have read that a thimble of matter from a neutron star can weigh thousands of tons. This fact is really hard to wrap your brain around. Does it mean that all the neutrons in the atoms are compressed together? If so, what happens to the protons and electrons?
Neutron stars are a fascinating testbed for all sorts of extreme physics and studying the details of their interior is still an active area of research. The reason I mention that is simply to say that what happens to the protons and electrons is complicated. The short answer is that they turn into neutrons.
Here's the slightly longer answer. Neutrons in atomic nuclei are very stable, but free neutrons outside a nucleus will decay into a proton and an electron (and technically an antineutrino) in about 15 minutes through beta decay. In other words, neutrons = electrons + protons. The reason normal matter isn't composed entirely of neutrons is electron degeneracy pressure. If you've ever taken chemistry, you're familiar with the Pauli exclusion principle that dictates where an electron may be in the shell of an atom. The abbreviated version is that two electrons can't occupy the same place, so they fill themselves up in orderly shells. If you try to squish matter really tightly, this inability to be in the same place at the same time actually acts like a force resisting further compression. This is called electron degeneracy pressure and is what supports a white dwarf against gravity.
In a neutron star gravity has overcome electron degeneracy pressure allowing the protons and electrons to combine into neutrons. Now the force holding the star together against gravity is the neutron degeneracy pressure. Neutrons, like electrons, are fermions, and two neutrons may not be in the same state, and this neutron crowding provides a supportive force against the intense gravitational pressure. As I alluded to above the details are more complicated, but it's safe to say we will likely never be able to simulate the states of matter in a neutron star on the Earth.
Edit by Michael Lam on August 29, 2015: To add an answer for the title question, there is some number of protons in the interior, of order 10%. The rest is thought to be mostly neutrons. The crust is made of mostly iron nuclei bathed in freely-moving electrons. When the neutron star forms, most of the protons and electrons combine together to form neutrons, as described above.
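As a back-of-the-envelope check on the "thimble" figure, assume a typical neutron-star density of roughly 4 × 10^17 kg/m³ and a thimble volume of about 1 cm³; both numbers are order-of-magnitude assumptions. The product comes out to hundreds of millions of tonnes, far more even than the "thousands of tons" quoted in the question.

```python
# Back-of-the-envelope mass of a "thimble" of neutron-star matter.
# Both numbers below are order-of-magnitude assumptions, not measured values.
DENSITY = 4e17          # kg per cubic metre, a commonly quoted neutron-star density
THIMBLE_VOLUME = 1e-6   # cubic metres, i.e. about 1 cubic centimetre

mass_kg = DENSITY * THIMBLE_VOLUME   # ~4e11 kg
mass_tonnes = mass_kg / 1000.0       # ~4e8 tonnes

print(f"~1 cm^3 of neutron-star matter: {mass_kg:.1e} kg (about {mass_tonnes:.1e} tonnes)")
```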
The most chemically polluted place on Earth, according to the Guinness Book of World Records, is Dzerzhinsk — a city located on the Oka River approximately 240 miles east of Moscow. It was the prime site in the Soviet Union for producing chemical weapons from 1941 to 1965.
The core chemicals manufactured there were mustard gas (yperite), lewisite, phosgene and prussic acid.
Mustard gas, also known as sulphur mustard, is a chemical agent that causes blistering of the skin as well as damage to the eyes and respiratory tract. When used in chemical warfare, it is a yellow-brown liquid that smells something like mustard or horseradish. It was used in World War I by Germany against British soldiers near Ypres, Belgium in 1917 — hence the name yperite. Ironically, a British chemist, Hans Thacher Clarke (1887-1972), helped Germany develop the agent.
In 1913, Clarke was working with German chemist Hermann Emil Fischer (1852-1919) in Berlin to improve on previous versions of mustard gas. When Clarke dropped a flask on his hand, he suffered severe burns which caused him to be hospitalized for two months. After Fischer reported this to the German Chemical Society, development of the agent for chemical warfare began, with large-scale production commencing in 1916.
Lewisite is a colorless, odorless arsenic-based blistering agent similar to mustard gas. It is named for Winford Lee Lewis (1878-1943), an American soldier and chemist, who is generally credited with inventing it. However, it was first synthesized in 1904 by Julius Nieuwland (1878-1936), a priest and professor of chemistry at the University of Notre Dame.
Production of lewisite began in the U.S. in 1918, though it was produced too late to be used in World War I. It was considered obsolete in the U.S. in the 1950s, but it continued to be manufactured in other countries. Because it has no medicinal or manufacturing value, its use was restricted to chemical warfare.
Phosgene was first synthesized in 1812 by British chemist John Davy (1790-1868) when he exposed carbon monoxide and chlorine to sunlight. It was used in making dyes, and today is used in pesticides and in the production of polycarbonates.
French chemists developed it into an agent of chemical warfare in 1915, and it was used during World War I, sometimes in a deadly combination with chlorine. Phosgene gas was difficult to detect, and those who were exposed often didn't show symptoms for hours.
Even though mustard gas was more notorious, phosgene actually accounted for more deaths: approximately 85% of the 100,000 who died from chemical weapons died from exposure to phosgene. Mustard gas and phosgene were used by the Central Powers, the U.S. and the Allies during World War I.
Though chemical weapon use was widespread in World War I, only a small percentage of exposures were fatal; the majority of exposures were not life-threatening, but very painful and debilitating. Those who recovered enough to return to combat would do so only after several weeks. Others were literally scarred for life from chemical burns, often accompanied by respiratory and vision ailments.
Prussic acid, or hydrogen cyanide, was initially isolated from the synthetic pigment Prussian blue (hence the name) in 1752 by French chemist Pierre-Joseph Macquer (1718-1784). Cyanide in fact is derived from the word cyan, meaning blue.
Unlike phosgene, hydrogen cyanide kills quickly. It was the key ingredient used in Zyklon B in the Nazi death camps during World War II. It was also used in pesticides, though this was eventually abandoned due to its high toxicity.
The Dzerzhinsk chemical plant ceased production in 1965, but much of the chemical waste was buried onsite. Although the primary plant was shut down, several other chemical plants currently operate in the city, producing hundreds of chemical products.
A 2007 Blacksmith Institute study found that the life expectancy of a man in Dzerzhinsk is 42 years, 47 for women, and that the 2003 death rate exceeded the birth rate by 260%. The water in Dzerzhinsk is reported to be contaminated with dioxins and phenols at 17 million times the safe limit. The city's own environmental agency estimates that 300,000 tons of chemical waste was improperly disposed of between 1930 and 1998, and the Russian State Duma's Ecology Committee lists Dzerzhinsk as one of the top ten cities with disastrous ecological conditions.
Italian language, member of the Romance group of the Italic subfamily of the Indo-European family of languages.
Historically, Italian is a daughter language of Latin. Northern Italian dialects are the Gallo-Italian—including Piedmontese, Ligurian, Lombard, and Emilian—and Venetian. Further south, the major dialects are Tuscan and various others from Umbria to Sicily. Sardinian, spoken on the island of Sardinia, is sufficiently distinct from other dialects to be considered by some a Romance language in its own right. The Rhaeto-Romance forms, similar to the dialects of northern Italy, are spoken in the border region between Italy and Switzerland.
Dante Alighieri (top) and Petrarch (bottom) were influential in establishing their Tuscan dialect as the most prominent literary language in all of Italy in the Late Middle Ages.
During the Middle Ages, the established written language in Europe was Latin, though the great majority of people were illiterate, and only a handful were well versed in the language. In the Italian peninsula, as in most of Europe, most would instead speak a local vernacular. These dialects (as they are commonly referred to) were born from Vulgar Latin over the course of centuries, evolving naturally unaffected by formal standards and teachings. They are not in any sense "dialects of" standard Italian, which itself started off as one of these local tongues, but sister languages of Italian. Mutual intelligibility with Italian varies widely, as it does with Romance languages in general. The Romance dialects of Italy can differ greatly from Italian at all levels (phonology, morphology, syntax, lexicon, pragmatics) and are classified typologically as distinct languages.
The standard Italian language has a poetic and literary origin in the writings of Tuscan writers of the 12th century, and, even though the grammar and core lexicon are basically unchanged from those used in Florence in the 13th century, the modern standard of the language was largely shaped by relatively recent events. However, the Romance vernacular as a language spoken in the Apennine peninsula has a longer history. In fact, the earliest surviving texts that can definitely be called vernacular (as distinct from its predecessor Vulgar Latin) are legal formulae known as the Placiti Cassinesi from the Province of Benevento that date from 960–963, although the Veronese Riddle, probably from the 8th or early 9th century, contains a late form of Vulgar Latin that can be seen as a very early sample of a vernacular dialect of Italy.
The language that came to be thought of as Italian developed in central Tuscany and was first formalized in the early 14th century through the works of Tuscan writer Dante Alighieri, written in his native Florentine. Dante’s epic poems, known collectively as the Commedia, to which another Tuscan poet Giovanni Boccaccio later affixed the title Divina, were read throughout the peninsula and his written dialect became the “canonical standard” that all educated Italians could understand. Dante is still credited with standardizing the Italian language. In addition to the widespread exposure gained through literature, the Florentine dialect also gained prestige due to the political and cultural significance of Florence at the time and the fact that it was linguistically an intermediate between the northern and the southern Italian dialects. Thus the dialect of Florence became the basis for what would become the official language of Italy.
Italian was progressively made an official language of most of the Italian states predating unification, slowly replacing Latin, even when ruled by foreign powers (like Spain in the Kingdom of Naples, or Austria in the Kingdom of Lombardy-Venetia), even though the masses kept speaking primarily their local vernaculars. Italian was also one of the many recognised languages in the Austro-Hungarian Empire.
Italy has always had a distinctive dialect for each city because the cities, until recently, were thought of as city-states. Those dialects now have considerable variety. As Tuscan-derived Italian came to be used throughout Italy, features of local speech were naturally adopted, producing various versions of regional Italian. The most characteristic differences, for instance, between Roman Italian and Milanese Italian are the gemination of initial consonants and the pronunciation of stressed “e”, and of “s” in some cases: e.g. va bene “all right”: is pronounced [va ˈbbːɛne] by a Roman (and by any standard Italian speaker), [va ˈbene] by a Milanese (and by any speaker whose native dialect lies to the north of the La Spezia–Rimini Line); a casa “at home” is [a ˈkkːasa] for Roman and standard, [a ˈkaza] for Milanese and generally northern.
In contrast to the Gallo-Italic linguistic panorama of northern Italy, the Italo-Dalmatian Neapolitan and its related dialects were largely unaffected by the Franco-Occitan influences introduced to Italy mainly by bards from France during the Middle Ages, but after the Norman conquest of southern Italy, Sicily became the first Italian land to adopt Occitan lyric moods (and words) in poetry. Even in the case of Northern Italian languages, however, scholars are careful not to overstate the effects of outsiders on the natural indigenous developments of the languages.
The economic might and relatively advanced development of Tuscany at the time (Late Middle Ages) gave its language weight, though Venetian remained widespread in medieval Italian commercial life, and Ligurian (or Genoese) remained in use in maritime trade along the Mediterranean. The increasing political and cultural relevance of Florence during the periods of the rise of the Banco Medici, Humanism, and the Renaissance made its dialect, or rather a refined version of it, a standard in the arts.
The Renaissance era, known as il Rinascimento in Italian, was seen as a time of “rebirth”, which is the literal meaning of both renaissance (from French) and rinascimento (Italian).
Pietro Bembo was an influential figure in the development of the Italian language from the Tuscan dialect, as a literary medium, codifying the language for standard modern usage.
During this time, long-existing beliefs stemming from the teachings of the Roman Catholic Church began to be understood from new perspectives as humanists—individuals who placed emphasis on the human body and its full potential—began to shift focus from the church to human beings themselves. Humanists began forming new beliefs in various forms: social, political, and intellectual. The ideals of the Renaissance were evident throughout the Protestant Reformation, which took place simultaneously with the Renaissance. The Protestant Reformation began with Martin Luther's rejection of the selling of indulgences by Johann Tetzel and other authorities within the Roman Catholic Church, resulting in Luther's eventual break-off from the Roman Catholic Church in the Diet of Worms. After Luther was excommunicated from the Roman Catholic Church, he founded what was then understood to be a sect of Catholicism, later referred to as Lutheranism. Luther's preaching in favor of faith and scripture rather than tradition led him to translate the Bible into many other languages, which would allow for people from all over Europe to read the Bible. Previously, the Bible was only written in Latin, but after the Bible was translated, it could be understood in many other languages, including Italian. The Italian language was able to spread even more with the help of Luther and the invention of the printing press by Johannes Gutenberg. The printing press facilitated the spread of Italian because it was able to rapidly produce texts, such as the Bible, and cut the costs of books which allowed for more people to have access to the translated Bible and new pieces of literature. The Roman Catholic Church was losing its control over the population, as it was not open to change, and there was an increasing number of reformers with differing beliefs.
Italian became the language used in the courts of every state in the Italian peninsula. The rediscovery of Dante’s De vulgari eloquentia and a renewed interest in linguistics in the 16th century, sparked a debate that raged throughout Italy concerning the criteria that should govern the establishment of a modern Italian literary and spoken language. This discussion, known as questione della lingua (i. e., the problem of the language), ran through the Italian culture until the end of the 19th century, often linked to the political debate on achieving a united Italian state. Renaissance scholars divided into three main factions:
• The purists, headed by Venetian Pietro Bembo (who, in his Gli Asolani, claimed the language might be based only on the great literary classics, such as Petrarch and some part of Boccaccio). The purists thought the Divine Comedy was not dignified enough because it used elements from non-lyric registers of the language.
• Niccolò Machiavelli and other Florentines preferred the version spoken by ordinary people in their own times.
• The courtiers, like Baldassare Castiglione and Gian Giorgio Trissino, insisted that each local vernacular contribute to the new standard.
Alessandro Manzoni set the basis for the modern Italian language and helped create linguistic unity throughout Italy.
A fourth faction claimed that the best Italian was the one that the papal court adopted, which was a mixture of the Tuscan and Roman dialects. Eventually, Bembo's ideas prevailed, and the foundation of the Accademia della Crusca in Florence (1582–1583), the official legislative body of the Italian language, led to publication of Agnolo Monosini's Latin tome Floris italicae linguae libri novem in 1604, followed by the first Italian dictionary in 1612.
Continual advancements in technology play a crucial role in the diffusion of languages. After the invention of the printing press in the fifteenth century, the number of printing presses in Italy grew rapidly and by the year 1500 reached a total of 56, the biggest number of printing presses in all of Europe. This made it possible to produce more pieces of literature at a lower cost, and as the dominant language, Italian spread.
An important event that helped the diffusion of Italian was the conquest and occupation of Italy by Napoleon in the early 19th century (who was himself of Italian-Corsican descent). This conquest propelled the unification of Italy some decades after and pushed the Italian language into a lingua franca used not only among clerks, nobility, and functionaries in the Italian courts but also by the bourgeoisie.
Italian literature’s first modern novel, I promessi sposi (The Betrothed) by Alessandro Manzoni, further defined the standard by “rinsing” his Milanese “in the waters of the Arno” (Florence’s river), as he states in the preface to his 1840 edition.
After unification, a huge number of civil servants and soldiers recruited from all over the country introduced many more words and idioms from their home languages (ciao is derived from the Venetian word s-cia[v]o ("slave"), panettone comes from the Lombard word panetton, etc.). Only 2.5% of Italy's population could speak the standardized Italian language properly when the nation was unified in 1861.
Welcome to Art Movement in Focus! In this section, we explore significant art movements in history through a series of articles dedicated to each movement. Here’s looking at Printmaking.
Printmaking – An Introduction
Printmaking is an art form that requires a blend of creativity and technical skill. It requires meticulous handiwork to create visually interesting work. Soon after its invention, the importance of printmaking was recognized. It gave society a valuable art form that allowed images and text to be reproduced.
Prints could be distributed to people who couldn’t afford one-of-a-kind oil paintings. Additionally, printmaking allowed societies to disseminate information en masse. This included books, religious illustrations, and maps. This was a singular development in the history of art.
A Little bit of History
There are different sources that record the history of the unique medium. One source says that the oldest known woodblock print dates to the Han Dynasty (206 BC to 220 AD).
However, archaeological findings confirm that the technique of duplicating images goes back thousands of years to the Sumerians (c. 3000 BCE). These skilled craftsmen engraved designs and cuneiform inscriptions on stone cylinder seals. When rolled over soft clay tablets, the seals left a relief impression.
Also, woodblock prints were profusely used as early as the eighth century in Japan to publish Buddhist scriptures. The designer and painter Tawaraya Sōtatsu (c. 1640) used wood stamps to print designs on paper and silk. On a side note, silk was the popular medium for prints till paper gradually became more popular.
With so much development in technology could the Egyptians be far behind?
They made their first woodblock prints for textiles in the 6th or 7th century. The earliest printed image with an authenticated date is a scroll of the Diamond Sutra (Buddha scripture) printed by Wang Jie in 868 CE, which was found in a cave in eastern Turkistan.
Engraving is one of the oldest art forms. Engravings have been found on prehistoric bones, stones, and cave walls. The idea was one of multiplication, following the mechanical principle of the roller. The more sophisticated form would develop into the printing press.
So What Is Printmaking?
Printmaking is an art form that involves transferring images from a matrix, or template, onto another surface, typically paper or fabric. A printmaker creates the base out of wood, metal, glass, or other material using tools. Carving away the wood creates negative space that stays blank when the ink is transferred to the print; on metal plates, chemicals are used to work the image into the surface. The artist then inks the template in order to stamp another surface.
Traditional printmaking methods, including woodcut, etching, engraving, and lithography, require applying even pressure. The process allows artists to create many replicas of the same image. Throughout history, it’s served as a reasonable way to communicate and share art.
When the Chinese introduced movable type (between c. 1041 and c. 1048) and improved on the design over the coming centuries, bookmaking became far more practical. After this developmental stage, printmaking assumed great importance. Here is the next leg of the journey.
The History of Printmaking through the Centuries
In Europe, prints date back to the beginning of the 15th century, when woodcut prints were used to make paper playing cards in Germany. Artists soon adapted the technique to render bold figures against blank backgrounds. Soon they began creating more complex designs involving backgrounds and borders.
Johannes Gutenberg’s printing press revolutionized the art form and the culture in the 15th century. His most famous works remain the 1,300-page Gutenberg Bibles, containing masterful prints that used gothic type designed to look like hand calligraphy.
Subsequently in the latter half of the 16th century, skilled artisans took over the revolution. Printed maps became more popular as people began traveling frequently. Publishers would also buy plates from their original artists and print them in massive quantities. The commercialism sometimes led to ruining the original plates in the process.
The 17th century saw the emergence of the Japanese art form ukiyo-e, marking a break from the culture’s heavily Chinese-influenced works. These refined and highly stylized woodcuts illustrated daily life. Hishikawa Moronobu, the first master of the form, used street scenes, peddlers and crowds as subject matter. Then, in the 18th and 19th centuries, as technology developed, it grew into a veritable art form.
India and Printmaking
In India, woodcut as an art form flourished, particularly under the Mughal Empire. Its impact in the western parts of the country remains strong. Many Indian artists would gradually take up the style and make it their own.
“Printmaking became popular in India during 1921 with Nandalal Bose introducing it to Kala Bhavan in Santiniketan. From his visit to China and Japan in 1924, he brought back Chinese rubbings and Japanese colour woodcut prints. Owing to this, the students of Kala Bhavana thus established direct contact with original prints of the Far East.” – KNN – Knowledge News Network (September 2014)
Amongst them were Benodebehari Mukherjee and Ramkinkar Baij, who experimented with the medium in the 1930s and 1940s. Chittaprosad and Somnath Hore used linocuts and woodcuts to disseminate reformist concerns, employing them in their socio-political critique of events like the Bengal Famine of 1943 and the Tebhaga movement.
The Influence Spreads to Europe
Though known for mastery of colour, Impressionists Édouard Manet, Edgar Degas, and Camille Pissarro created distinct etchings, lithographs, and monoprints. Soon, Japanese woodcuts made their way into Western consciousness. The exoticism, simplicity, and abstractions influenced Paul Gauguin, Henri de Toulouse-Lautrec, and the American Impressionist Mary Cassatt.
Japanese artists continued to develop new printmaking techniques. The woodcut master, Hokusai, was prolific, with a body of work encompassing 35,000 drawings and prints. His series “The 36 Views of Mount Fuji” and its best-known print, “The Great Wave off Kanagawa”, are still referred to reverentially.
In the 20th century, artists including Pablo Picasso, Georges Braque, Henri Matisse, and Georges Rouault also experimented with the form. We recount this cumulative history to showcase the reach of the medium.
There are three basic techniques
Relief printing: Here the background is cut down, leaving a raised image that takes the ink.
Intaglio printing: Here a metal plate is used. The selected image is either engraved into the metal with a tool known as a ‘burin’. Alternatively, the plate is coated with a waxy acid-resistant substance called ‘ground’ upon which the design is drawn with a metal needle.
Planographic: The entire surface is involved, but some areas are treated to retain the ink. The best-known example is lithography. The design is drawn onto the matrix with a greasy crayon. Ink is then applied to the whole surface and sticks only to the grease marks of the drawing.
Others: Other surface printing methods include stencil printmaking – where the image or design is cut out and then printed by spraying ink or paint through the stencil. The Planographic technique is also used for mono-typing, digital prints, screen-printing, and pochoir.
OUR TOP 6 Printmaker Artists
- Pablo Picasso
- Andy Warhol
- Roy Lichtenstein
- Chittaprosad: Bengal Famine
- Suzuki Harunobu
6 WORKS THAT DEFINE PRINTMAKING AS AN ART FORM
Apart from the immense practicality of the form, there was something that drew people to printmaking. The freedom of working with the dimensions was novel. The ability to craft and mold negative spaces was unique. The Japanese style of woodcut and transference was only a starting point.
Artists from different genres, as we can see, have been drawn to experimenting with it, developing it into versatile forms. Traversing a journey of some 5,000 years, it has seeped into pop culture, with famous street artists like Banksy incorporating these techniques.
Tete de Femme
Marilyn Monroe (FS II.27)
Beggar Brother & Sister
Lovers Walking in the Snow (Crow and Heron), c. 1764-72
For more such quick introductions and lists regarding Art History, visit Art Movement in Focus.
The European 2020 package set binding targets to ensure that the EU will by 2020:
- Reduce its greenhouse gas emissions by 20 %
- Increase the use of renewable energy sources by 20 %
- Improve energy efficiency by 20 %
Heat pumps can play a considerable role in reaching those key objectives as they reduce final and primary energy demand, in particular demand for fossil sources, reduce greenhouse gas (GHG) emissions, and use renewable energy sources in the form of latent energy from ground, water or air.
Heat pumps always provide heating and cooling, thus giving the same device an additional economic advantage in cases where both services are needed. In heating mode, heat pumps take stored solar heat from air, water, or ground and release it together with the input energy in the form of useable heat to the heating and hot water circuit. In cooling mode, they operate like refrigerators, extracting heat from inside and discharging it to the outside. By moving heat rather than generating it, they are environmentally friendly and extremely energy-efficient.
The 4 Main Characteristics of Heat Pumps
1. ENERGY EFFICIENCY
Heat pumps can demonstrate efficiencies in the range of 120 % to 300 %. For each kW of electricity consumed, they can generate about 4 kW of thermal energy (see the worked comparison after the list below).
Comparison with other heating technologies:
- Condensing gas/oil boiler: 90 - 96 % efficiency
- Conventional gas/oil boiler: 70 - 80 % efficiency
- Direct electric heating: 35 - 45 % efficiency
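As a rough numerical illustration, the sketch below compares how much useful heat each technology delivers per kilowatt-hour of input energy, using the mid-points of the efficiency ranges listed above and a coefficient of performance (COP) of about 4 for the heat pump (the "4 kW of heat per kW of electricity" figure); the exact values are illustrative assumptions, not measured data.

```python
# Useful heat delivered per 1 kWh of input energy, using the mid-points of the
# efficiency ranges quoted above and a COP of ~4 for the heat pump (illustrative values).
technologies = {
    "heat pump (COP ~ 4)": 4.00,
    "condensing gas/oil boiler": 0.93,    # mid-point of 90-96 %
    "conventional gas/oil boiler": 0.75,  # mid-point of 70-80 %
    "direct electric heating": 0.40,      # mid-point of 35-45 %
}

for name, efficiency in technologies.items():
    print(f"{name:30s} -> {efficiency:.2f} kWh of heat per kWh of input")
```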
Heat pumps contribute to an annual reduction of 9.16 million tonnes of CO2 emissions in the EU. About 75 % of the energy that is used is renewable, whereas 25 % of the energy is generated by other sources (in 99 % of cases this is electricity). If the electricity is generated by renewables (PV, wind, hydro), then the heat pump is 100 % renewable and CO2-neutral. According to the IEA, heat pumps could save 50 % of the building sector's CO2 emissions, and 5 % of the industrial sector's. This means that 1.8 billion tonnes of CO2 per year could be saved by heat pumps.
3. EUROPEAN BACKGROUND
The vast majority of the heat pumps installed in Europe are also manufactured in Europe. In fact, the EU heat pump companies play a leadership role in technology development (EHPA). They foster EU employment: 40,358 Europeans work full-time in the heat pump sector. This is a very moderate estimate based on the sales data in Europe, to which we applied a certain factor: the man-hours needed to install the different types of heat pumps. (EHPA)
4. ENERGY SECURITY
The EU annually imports energy worth over 400 billion euro. Heat pumps reduce the use of primary and final energy. So we would need less energy and, consequently, less would need to be imported. This saves costs and secures the supply of energy at the same time: we become more energy independent.
The Heat Pump Cycle
The operating principle of heat pumps is based on transferring heat energy by circulating a working fluid (refrigerant) through a continuous cycle of evaporation and condensation. The refrigerant cycle, often idealized as the Carnot cycle, includes four main stages; a small worked example of the ideal cycle's efficiency follows the stages below.
Evaporation: In a heat exchanger the liquid refrigerant absorbs energy from the heat source (water, soil or air) and turns into gas.
Compression: The refrigerant is put through a compressor run by auxiliary energy that causes the pressure of the gas to increase and its temperature to rise. The refrigerant leaves the compressor as a hot gas.
Condensation: The hot gas flows into the liquefier, releases energy to the heating system, falls in temperature and returns to a liquid state. This energy is used to create hot water for both central heating and domestic hot water supply.
Expansion: The hot, liquid refrigerant is transferred to the expansion valve. In the expansion valve the pressure is reduced very rapidly and thus the temperature of the refrigerant drops quickly without releasing energy. The refrigerant leaves the expansion valve as a cold liquid, ready to repeat the cycle again.
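The ideal limit on this cycle's heating performance depends only on the source and sink temperatures: COP = T_hot / (T_hot - T_cold), with temperatures in kelvin. The sketch below uses arbitrary example temperatures to show why a smaller temperature lift gives a better COP; real heat pumps reach only a fraction of this ideal figure.

```python
# Ideal (Carnot) heating COP for a heat pump: COP = T_hot / (T_hot - T_cold), in kelvin.
# The example temperatures are illustrative; real machines reach only a fraction of this limit.
def carnot_cop_heating(t_source_c: float, t_sink_c: float) -> float:
    t_hot = t_sink_c + 273.15     # condenser / heating-circuit temperature in kelvin
    t_cold = t_source_c + 273.15  # evaporator / heat-source temperature in kelvin
    return t_hot / (t_hot - t_cold)

print(round(carnot_cop_heating(10, 35), 1))   # ground at 10 C, floor heating at 35 C -> ~12.3
print(round(carnot_cop_heating(-5, 55), 1))   # outdoor air at -5 C, radiators at 55 C -> ~5.5
```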
Understanding the feeding behavior of lady beetles will help agronomists develop cropping systems that best use these important beneficial insects as biological controls of insect pests, such as aphids and Colorado potato beetles.
Agricultural Research Service entomologist Jonathan Lundgren at the North Central Agricultural Research Laboratory in Brookings, South Dakota, and former ARS entomologist Michael Seagraves were part of a team of ARS and university scientists that examined how lady beetle diets alter their feeding patterns and physiology.
Appreciated for their ability to eat insect pests, lady beetles also consume nectar, pollen, and other plant tissue. Indeed, most beneficial predators eat both prey and nonprey foods, and understanding the factors that affect what they eat is important to using them in biological control of crop pests. The foods they consume determine where and when they can be found in a farm field and whether they decide to eat crop pests.
Also, since many field crops are treated with insecticides, an important step in assessing the risk to beneficial species is to know how much insecticide these insects consume when they feed on plants.
For laboratory feeding tests, the team chose a native lady beetle species, Coleomegilla maculata. The results of the tests reveal that this lady beetle consumes two to three times more plant tissue after being fed a prey-only diet than after being fed a mixed diet of prey and plant tissue.
“This suggests that plant material is providing some key nutrients lacking in prey-only diets,” says Lundgren. “It is important to recognize that nonprey foods contain different nutrients from insect prey, and predators fed mixed diets are often more fit than those fed only prey.”
In a follow-up study, Lundgren and his colleagues looked at sugar consumption by lady beetles in the field. Sugar, whether in a sugar-syrup spray provided by the farmer or in nectar from nearby flowering plants, is an important nutrient, allowing female lady beetles to survive and produce more eggs than those denied this sweet treat. This feeding behavior is known to exist, but its effect on lady beetle physiology is less understood.
“Foods like sugar and pollen are important components of their diets, and it is thought that lady beetles rely heavily on sugar resources in the field, although no one has ever quantified their feeding,” says Lundgren. “In this study, we applied sugar sprays to soybeans and quantified the frequency of sugar feeding using gut content analysis of common agronomic lady beetles in South Dakota, Maryland, and Kentucky.”
Says Seagraves, “We found that all the lady beetles we tested regularly consumed sugar—like nectar—in soybean fields, even when it wasn’t applied as a supplement. However, the sugar-sprayed plots had more lady beetles than the untreated plots, although soybean aphid populations were similar in the two treatments. This research makes the case that sugar-feeding is very important for lady beetle populations in cropland and suggests one way to maintain these beneficial species in agroecosystems.”
The research team’s findings were reported in the journals BioControl and Biocontrol Science and Technology.—By Sharon Durham, Agricultural Research Service Information Staff.
This research is part of Crop Protection and Quarantine, an ARS national program (#304) described at www.nps.ars.usda.gov.
"Advantages of Understanding the Lady Beetle Diet" was published in the January 2013 issue of Agricultural Research magazine.
About 10 percent of people have at least one seizure at some point in their lives. Anything that disrupts the normal electrical activity in the brain can cause a seizure. Frequent causes: high fever, low blood sugar, high blood sugar, alcohol or drug withdrawal, or a brain structural abnormality.
Epilepsy implies that the person is at risk of having unprovoked seizures. Epilepsy is a neurological condition that affects up to 1 percent of the population. Epilepsy has many possible causes including brain tumor, stroke, brain damage from illness or injury, or some combination of these. In a large group of patients, there is no detectable cause.
A person is considered to have epilepsy after either of the following (a simple decision-rule sketch follows the list below):
- Two or more unprovoked seizures
- A single seizure accompanied by other findings that suggest to the neurologist there is at least a 60 percent chance of another unprovoked seizure
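Purely as an illustration of the two criteria above (a sketch, not a clinical tool), the rule can be written as a small function; in practice the inputs would be judged by a neurologist.

```python
# Illustrative sketch of the diagnostic rule above -- not a clinical tool.
def meets_epilepsy_criteria(unprovoked_seizures: int, recurrence_risk: float) -> bool:
    """recurrence_risk is the neurologist's estimated chance (0-1) of another
    unprovoked seizure, based on findings such as the EEG."""
    if unprovoked_seizures >= 2:
        return True
    return unprovoked_seizures == 1 and recurrence_risk >= 0.60

print(meets_epilepsy_criteria(1, 0.7))   # True: one seizure plus a high recurrence risk
print(meets_epilepsy_criteria(1, 0.3))   # False
```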
Your neurologist will evaluate the likelihood of another seizure through a few different tests. The most common is the Electroencephalogram (EEG), which records your brain’s electrical activity.
If you are experiencing seizures that are difficult to diagnose or control, our epilepsy monitoring unit (EMU) provides continuous care to determine the precise cause of seizures and the most effective treatment.
Botanist Lynn Sweet regularly treks through California’s Joshua Tree National Park, nearly 800,000 acres that lie at the intersection of the Mojave and Colorado deserts. She likes to photograph the gnarly, spikey-limbed trees, which look—as some have observed—like a picture from a Dr. Seuss children’s book.
Much as many of the park’s million or more yearly tourists do, she marvels at their strange beauty. “They have an amazing shape,” she says. She said they don’t bloom every year, but when they do it’s very special. “This year, the plants flowered earlier than most people had ever seen. Some plants started flowering in November, and then the number of trees in flower increased until springtime, when nearly every tree was in flower. It was incredible,” she says.
The trees, legend has it, were named after the Biblical figure Joshua by 19th century Mormons who thought their upwardly outstretched limbs resembled arms raised in prayer. The trees have been around since the Pleistocene, which began more than two million years ago and concluded at the end of the last ice age. Woolly mammoths, mastodons, giant cave bears and saber-toothed tigers roamed among them. The animals are long gone, but these iconic trees still exist.
But scientists like Sweet fear they might not be here much longer if climate change continues unabated. For many Joshua trees, this century could be their last. They’ve managed to tolerate the assaults of prehistoric times, only to fall prey to industrial advances that are now heating up the planet.
“Whereas shifts in the past may have been major—such as during the Pleistocene—the current shifts are very rapid,” Sweet says. Temperatures are rising so fast that Joshua trees can scarcely migrate to cooler areas. Additionally, she says, the species “now has barriers, such as roads and development to move across.”
Sweet, a plant ecologist at the University of California Riverside’s Center for Conservation Biology, partnering with the Earthwatch Institute, enlisted volunteers to help collect data on about 4,000 trees in the park to determine whether climate change already has had an impact. She mapped out where Joshua trees live in the park to determine which conditions they do best in, and then compared that with projections of what Joshua Tree National Park will look like later this century.
“I chose climate change projections for end-of-century,” she explains. She looked at how much the climate will change if humans tackle the problem, and how much it will change if humans do nothing. “In the upper end, where we do nothing to address climate change, we may see almost no more habitat for the Joshua tree in the Park.”
Her calculations suggest that addressing climate change could save 19 percent of the trees after 2070. If nothing is done, however, the park would likely keep only a scant 0.02 percent. The study appears in the journal Ecosphere.
The work builds upon an earlier study in 2012, also by UC Riverside researchers, which found the trees would begin to vanish if temperatures rose 3 degrees Celsius. The newest study considered additional factors, such as soil moisture estimates and precipitation, among others.
The trees already have begun drifting to higher areas in the park, where they might escape the heat and have a better chance at producing younger plants, she says. In hot areas, however, trees reproduce less—and the study shows those that do are dying. Older trees, which can live as long as 300 years, can store large amounts of water, which helps them cope with drought. But younger trees lack this capacity, and are less likely to survive.
Prolonged droughts make things difficult for animals and plants that need water, prompting many species to relocate to areas that are more hospitable, often cooler, wetter and higher. “For the Joshua tree, on broad, flat areas, this is like outrunning a very wide flash flood,” Sweet says. “Over flat ground, great distances are involved to escape the threat of hotter, drier temperatures. Moving upslope to where it’s cooler is another way to escape the heat, but these areas may or may not be suitable to the root system or growth of the Joshua tree.”
The tree is also missing a key ally that previously helped it migrate to new areas. “The Joshua tree is pretty tough,” she says. “It is built to survive and persist through droughts. In the past, the species as a whole was able to migrate distances using its likely primary disperser, the Shasta ground sloth. Since this species is [now] extinct, the tree can no longer migrate great distances. This is a problem with this new, more rapid shift in climate.”
Also, the trees in the western Mojave differ from those farther east in Utah and Nevada, and they face special challenges, Sweet says. “Joshua trees have a particular pollinator and only this insect, a yucca moth, can pollinate them. The relationship is thus really special—there is benefit for both the insect and the tree in the relationship. The moth gets food for its larvae, and the Joshua tree is able to get pollen moved from tree to tree. No other insect can do this. Thus, though common, it’s really a fragile existence. If climate change affects the moth in a different way than the tree, we may be in trouble.”
The study also found that wildfires pose an additional hazard, as invasive plants and shrubs—fueled by smog and car exhaust—serve as kindling for the blazes. The scientists said that the U.S. Park Service, also a partner in the work, has been trying to reduce the danger by eliminating many of the plants.
As a park visitor, Sweet continues to find the trees inspiring. “I really enjoy watching wildlife use the trees,” she says. “I’ve spent time watching Orioles move in and out of nests on the trees. I’ve seen spiny lizards darting up and down the trunks. It’s just such an important structure in the habitat. It’s not a shy tree. It’s the most noticeable component of the [park] and the Mojave.”
But as a scientist, she believes that only aggressive climate mitigation can save them. “Changes are already occurring on the landscape in terms of where the new trees are occurring, and this supports the predictions about future changes,” she says. “We know things may get worse. The degree to which this happens depends on human action.”
Marlene Cimons writes for Nexus Media, a syndicated newswire covering climate, energy, policy, art and culture. |
Making Math Fun for Kids Offers Them a Head Start
Many parents give their children a head start in literacy by reading to them as toddlers, but mathematics is often reduced to just getting children to count. Mathematics is much more than numbers and counting. Mathematical concepts can also be instilled in a preschooler, and it doesn't need to be boring; in fact, it can be fun.
Here are three areas where parents and early education teachers can give young children a head start in mathematics:
Find math interactions in everyday routines – Everyday activities can be chock-full of math. We use math all the time and often aren't even aware of it. We can help children develop simple math ideas by engaging them in activities that use math skills. This can be as simple as having your child find a matching pair of socks or shoes, or match objects around the home. You might even have them help you sort the laundry or organize the silverware in a drawer. This teaches them sorting and comparing concepts.
Lunch or snack time can be a time to compare who has the most crackers, carrot sticks, and so on. Juice in a glass can be observed as fractions: one-quarter full, one-half full or three-quarters full. There are many things we do mathematically, and as you go through your daily routine you will notice more and more things you can bring to your child's attention, engaging them in ways that develop their mathematical skills at an early age.
Playing with Math – Playtime offers plenty of opportunities for a child to engage with and explore mathematical concepts. For example, shapes can be made with Play-Doh, Popsicle sticks or other simple building materials. Storybooks or songs that include numbers are also excellent and enjoyable ways to get a child to think mathematically. Songs such as “Five Little Monkeys” can be more educational than we think.
Math is more than Counting Numbers – Spatial reasoning and awareness are also valuable in mathematics. Although much of this is acquired as a child has the freedom to explore his or her surroundings, it can also be the focus of childhood activities. Spatial awareness has to do with understanding objects as they relate to oneself in a given space.
When we talk about or give directions to children about an object's location, we are making them aware of objects in relation to their space. For instance, saying the ball is in the cabinet, the book is on the bookshelf, or the toys are under the table describes objects by their location. Fun activities can include a game of hide-and-seek, the game Simon Says, and other children's games that involve objects and movement in relation to the child's location.
These are only a few ideas for how you can give your child a head start in mathematics. As you look around, many other ideas, games and activities will come to mind for engaging preschoolers so that their mathematical skills develop. In the long term, they will be better math students because math was taught to them in an enjoyable way.
|
XML consists of a hierarchy of elements. The elements can contain sub-elements, CDATA, or both. For this specification, however, an element never contains mixed content or both sub-elements and CDATA. Attributes are additional information associated with an element. The textual representation of an element is referred to as a tag. See the following example:
<Foo name="bob">Ack!</Foo>
An XML element consists of a named opening and closing tag. In the above example, <Foo...> is referred to as the opening tag and </Foo> is referred to as the closing tag. The text Ack! in between the opening and closing tags is called the CDATA. CDATA can be restricted to certain formats, patterns, or words. When this document refers to an element having CDATA, it means that the element has no sub-elements and contains only data.
At the top of an XML Document you will usually find a declaration like the following:
<?xml version="1.0" encoding="UTF-8"?>
This line indicates the XML version being used and the character encoding. Though it is possible to leave this line off, it is usually considered good form to include this line in the beginning of the document.
Every XML Document contains one and only one root element. In the case of MTConnect, it is the MTConnectDevices, MTConnectStreams, MTConnectAssets, or MTConnectError element. When these root elements are used in the examples, you will sometimes notice that they are prefixed with mt, as in mt:MTConnectDevices.
The mt is what is referred to as a namespace alias; in the case of an MTConnectDevices document, it refers to the urn urn:mtconnect.org:MTConnectDevices:1.2. The urn is the important part and MUST be consistent between the schema and the XML document. The namespace alias is declared as an attribute of the root XML element.
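A minimal sketch of such a document header is shown below. The urn is the one given above; the schema file named in the xsi:schemaLocation attribute is illustrative only, and the example lines are numbered so they can be referred to.

1. <?xml version="1.0" encoding="UTF-8"?>
2. <mt:MTConnectDevices xmlns:mt="urn:mtconnect.org:MTConnectDevices:1.2"
3.     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
4.     xmlns="urn:mtconnect.org:MTConnectDevices:1.2"
5.     xsi:schemaLocation="urn:mtconnect.org:MTConnectDevices:1.2 MTConnectDevices_1.2.xsd">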
In the above example, the alias mt refers to the MTConnectDevices urn. This document also contains a default namespace on line 4, which is specified with an xmlns attribute without an alias. There is an additional namespace that is included in these documents and is usually assigned the alias xsi. This namespace refers to the standard XML Schema constructs prescribed by the W3C; an example is the xsi:schemaLocation attribute, which tells the XML parser where the schema can be found.
In XML, a namespace indicates which XML Schema is in effect for a given section of the document. This convention allows multiple XML Schemas to be used within the same XML Document, even if they define elements with the same names. The namespace is optional and is only required when multiple schemas are used.
An element can have several attributes, and when it has no content it can be written as a single self-closing tag, for example:
<DataItem name="Xpos" type="POSITION" subType="ACTUAL" category="SAMPLE" />
An element can have any number of sub-elements. The XML Schema specifies which sub-elements are allowed and how many times a given sub-element can occur. A sketch of such a structure is shown below.
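The following is a minimal sketch of such nesting; the FirstLevel, SecondLevel, and ThirdLevel elements are described below, and the name attribute values are purely illustrative.

<FirstLevel>
    <SecondLevel>
        <ThirdLevel name="one"/>
        <ThirdLevel name="two"/>
    </SecondLevel>
</FirstLevel>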
In the above example, FirstLevel has a sub-element SecondLevel, which in turn has two ThirdLevel sub-elements with different names. Each level is an element, its children are its sub-elements, and so forth.
In XML we sometimes use elements to organize parts of the document. A few examples in MTConnect® are Streams, DataItems, and Components. These elements have no attributes or data of their own; they only provide structure to the document and allow for various parts to be addressed easily.
In the following example, DataItems and Components are only used to contain certain types of elements and provide structure to the document. These elements will be referred to as Containers in the standard.
<Device id="d" name="Device">
    <Axes … >…</Axes>
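Filled out a little further, such a structure might look like the following sketch. DataItems and Components act purely as containers here, and the parts elided with … stand in for whatever attributes and content those elements would normally carry.

<Device id="d" name="Device">
    <DataItems>
        <DataItem … />
    </DataItems>
    <Components>
        <Axes … >…</Axes>
    </Components>
</Device>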
An XML Document can be validated. The most basic check is to make sure it is well-formed, meaning that each element has a closing tag, as in <foo>...</foo>, and that the document does not contain any illegal characters (< or >) outside of tags. If the closing </foo> were left off, or an extra > appeared in the document, the document would not be well-formed and may be rejected by the receiver. The document can also be validated against a schema to ensure it is valid. This second level of analysis checks that required elements and attributes are present and occur the correct number of times. A valid document must also be well-formed.
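For example, a fragment like <foo><bar></foo> is not well-formed because the <bar> element is never closed, whereas <foo><bar/></foo> is well-formed, though it may still be invalid if the schema does not permit a bar element inside foo.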
All MTConnect® documents must be valid and conform to the XML Schema provided along with this specification. The schema will be versioned along with this specification. The greatest possible care will be taken to make sure that the schema is backward compatible.
- All tag names will be specified in Pascal case (first letter of each word is capitalized). For example: <ComponentEvents />
- Attribute names will also be camel case, similar to Pascal case, but the first letter will be lower case. For example: <MyElement attributeName="bob"/>
- All values that are part of a limited or controlled vocabulary will be in upper case with an _ (underscore) separating words. For example: ON, OFF, ACTUAL, COUNTER_CLOCKWISE, etc…
- Dates and times will follow the W3C profile of the ISO 8601 format, with arbitrary decimal fractions of a second allowed. Refer to the following specification for details: http://www.w3.org/TR/NOTE-datetime. The format will be YYYY-MM-DDThh:mm:ss.ffff, for example 2007-09-13T13:01:21.3415. The accuracy and number of decimal fractional digits of the timestamp is determined by the capabilities of the device collecting the data. All times will be given in UTC (GMT).
- XML element names will be spelled out and abbreviations will be avoided. The one exception is the word identifier, which will be abbreviated Id. For example: SequenceNumber will be used instead of SeqNum.