Try these fun hands-on chemistry activities for upper-elementary students. Make green slime, gooey globs, and a slippery “slug” to explore the exciting field of polymer chemistry. The kit includes polyvinyl alcohol, sodium tetraborate, a pipette, polyvinyl acetate, a craft stick, and an instruction booklet with step-by-step procedures. Publisher: Carson-Dellosa Publishing Grade 4, 5, 6, 7, 8 Topic: Science - Chemistry
Lesson 11: Indents and Tabs
Indenting text adds structure to your document by allowing you to separate information. Whether you'd like to move a single line or an entire paragraph, you can use the tab selector and the horizontal ruler to set tabs and indents. Optional: download our practice document.
In many types of documents, you may want to indent only the first line of each paragraph. This helps to visually separate paragraphs from one another. It's also possible to indent every line except for the first line, which is known as a hanging indent.
To indent using the Tab key: A quick way to indent is to use the Tab key. This will create a first-line indent of 1/2 inch.
- Place the insertion point at the very beginning of the paragraph you want to indent.
- Press the Tab key. On the ruler, you should see the first-line indent marker move to the right by 1/2 inch.
- The first line of the paragraph will be indented.
If you can't see the ruler, select the View tab, then click the check box next to Ruler.
In some cases, you may want to have more control over indents. Word provides indent markers that allow you to indent paragraphs to the location you want. The indent markers are located to the left of the horizontal ruler, and they provide several indenting options.
To indent using the indent markers:
- Place the insertion point anywhere in the paragraph you want to indent, or select one or more paragraphs.
- Click, hold, and drag the desired indent marker. In our example, we'll click, hold, and drag the left indent marker. A live preview of the indent will appear in the document.
- Release the mouse. The paragraphs will be indented.
To indent using the Indent commands: If you want to indent multiple lines of text or all lines of a paragraph, you can use the Indent commands.
- Select the text you want to indent.
- On the Home tab, click the desired Indent command:
- Increase Indent: This increases the indent by increments of 1/2 inch. In our example, we'll increase the indent.
- Decrease Indent: This decreases the indent by increments of 1/2 inch.
- The text will indent.
To customize the indent amounts, select the Page Layout tab and enter the desired values in the boxes under Indent.
Using tabs gives you more control over the placement of text. By default, every time you press the Tab key, the insertion point will move 1/2 inch to the right. Adding tab stops to the Ruler allows you to change the size of the tabs, and Word even allows you to apply more than one tab stop to a single line. For example, on a resume you could left-align the beginning of a line and right-align the end of the line by adding a Right Tab.
Pressing the Tab key can either add a tab or create a first-line indent, depending on where the insertion point is. Generally, if the insertion point is at the beginning of an existing paragraph, it will create a first-line indent; otherwise, it will create a tab.
The tab selector: The tab selector is located above the vertical ruler on the left. Hover the mouse over the tab selector to see the name of the active tab stop.
Types of tab stops include:
- Left Tab: left-aligns the text at the tab stop
- Center Tab: centers the text around the tab stop
- Right Tab: right-aligns the text at the tab stop
- Decimal Tab: aligns decimal numbers using the decimal point
- Bar Tab: draws a vertical line on the document
- First Line Indent: inserts the indent marker on the ruler and indents the first line of text in a paragraph
- Hanging Indent: inserts the hanging indent marker, and indents all lines other than the first line
Although Bar Tab, First Line Indent, and Hanging Indent appear on the tab selector, they're not technically tabs.
To add tab stops:
- Select the paragraph or paragraphs you want to add tab stops to. If you don't select any paragraphs, the tab stops will apply to the current paragraph and any new paragraphs you type below it.
- Click the tab selector until the tab stop you want to use appears. In our example, we'll select Decimal Tab.
- Click the location on the horizontal ruler where you want your text to appear (it helps to click on the bottom edge of the ruler). You can add as many tab stops as you want.
- Place the insertion point in front of the text you want to tab, then press the Tab key. The text will jump to the next tab stop.
Removing tab stops: It's a good idea to remove any tab stops you aren't using so they don't get in the way. To remove a tab stop, click and drag it off of the Ruler.
Word can display hidden formatting symbols for spaces, paragraph marks, and tabs to help you see the formatting in your document. To show hidden formatting symbols, select the Home tab, then click the Show/Hide command.
- Open an existing Word document. If you want, you can use our practice document.
- Practice using the Tab key to indent some text.
If you're using the example, try indenting the second and third paragraphs of the thank-you letter. - Select a paragraph, and try creating a hanging indent. - Select some text, and use the Increase Indent and Decrease Indent commands to see how they change the text. If you're using our example, practice increasing and decreasing the indent of the text in the Summary section of the resume. - Explore the tab selector and tab stops. If you're using our example, select the text in the Experience section of the resume and add a left tab stop at 3 inches, then align each of the cities to the tab stop.
It is said that Buddhist melodies can be described as being strong, but not fierce; soft, but not weak; pure, but not dry; still, but not sluggish; and able to help purify the hearts of listeners. The teachings, or Dharma, of the Buddha mention music on many occasions. It is written in an important Buddhist text, the Amitabha Sutra, that heavenly singing and chanting is heard all day and night in the world around us: as flowers softly rain down from the heavens; as birds produce beautiful and harmonious music; in the blowing of a gentle breeze; in the movements of jewel trees… all being played together in harmony, in order to guide sentient beings to enlightenment. In addition to propagating the teachings of the Buddha, this music has long been adapted for use in various ceremonies such as weddings and funerals. Thus Buddhist Music plays a central role in everyday cultural practice among the observant. Chinese Buddhist Music utilizes a rich variety of musical instruments. Other than the inverted bell, thought to have originated in northern India, the instruments used in traditional Chinese Buddhist Music are native to China, and include the gong, large bell (qing), large drum (gu), a resonant wooden block known as the wooden fish, small cymbals, large cymbals, and the Chinese tambourine.
The Development of Buddhist Music
As Buddhism spread to Tibet, the Tibetan traditions of Buddhism encouraged the use of song and dance in certain ceremonies. In Tibetan Buddhism's larger ceremonies, Lamas can be seen utilizing a variety of exotic ceremonial instruments such as specialized types of drums, windpipes, spiral conches, and trumpets. When Buddhism was first introduced into China from northeast India, linguistic differences meant that monastics later recomposed and adapted classical folk songs, along with some music commonly played to royalty and officials in the Imperial Court, giving rise to the unique flavour and tradition of Chinese Buddhist Music.
Upon the formation of the Republic of China in 1912, the general public seemed to lose its affection for Buddhist Music, and fewer monastics continued the work of writing new compositions. However, in 1930 Masters at the Xiamen City Minnan Buddhist Institute made a call to all Buddhist disciples to preserve and carry on the legacy of Buddhist Music, in order to propagate spiritual education. They further believed that if music could help spread the Dharma, it would significantly impact the diversity and richness of religious education among the people. At about the same time a recording called “The Qingliang Selection” was produced; prior to this, most people had limited exposure to Buddhist Music, and it did not enjoy widespread popularity. During the 1950s, many monastics worked diligently to compose the words for new songs with the help of a number of notable musicians, including Yang Yongpu, Li Zhonghe, and Wu Juche. A selection of the songs they composed was recorded by Fo Guang Shan and released in an album entitled Fo Guang Hymn Collection. Their efforts serve as a great inspiration to those wishing to carry on work in this field.
The Contributions of Buddhist Music
Hymns are used in ceremonies for making offerings or inviting the presence of Buddhas and Bodhisattvas. Beautiful compositions such as the solemn Incense Offering Prayer, the Incense Prayer for Upholding the Precepts, and the Prayer for Offerings Made to Celestial Beings embody the virtues of respect and piety. Characterized by a relaxed and easy tempo, soft intonation, and a dignified, solemn manner, Buddhist fanbei elegantly express five virtuous qualities: sincerity, elegance, clarity, depth, and equanimity. It is held that regularly listening to Buddhist fanbei can confer five graces: a reduction in physical fatigue, less confusion and forgetfulness, a reduction in mental fatigue, greater eloquence, and greater ease in expression and communication.
In the practice of Buddhism, fanbei has an important role in daily living, for example in repentance ceremonies. It is not designed to elevate or excite the emotions of participants or practitioners; rather, it aims to help conserve emotional energy, calm the thinking, lessen desire, and allow practitioners to see their true nature with clarity.
Modernization of Buddhist Music
The lifestyle common to most people today is busy and quite stressful. With many people seeming to have no place to take any kind of spiritual refuge, it can become easy for them to feel lost. However, the pure and clear melodies of Buddhist Music aim to provide a way to communicate the higher spiritual states of mind advocated by the Dharma, and can serve to enrich and re-energize the hearts of people. With communications technology constantly improving, the aim is to make optimal use of it to give Buddhist Music wider exposure, for example through electronic broadcasting media such as television and radio, breaking through the barriers of differences in cultural backgrounds, social customs, and languages.
Learn About Hurricanes Hurricanes are violent tropical storms with sustained winds of at least 74 mph. They form over warm ocean waters – usually starting as storms in the Caribbean or off the west coast of Africa. As they drift slowly westward, the warm waters of the tropics fuel them. Warm, moist air moves toward the center of the storm and spirals upward. This releases torrential rains. As updrafts suck up more water vapor, it triggers a cycle of strengthening that can be stopped only when contact is made with land or cooler water. Hurricane season is typically from June 1st to November 30th. HURRICANE TERMS TO REMEMBER: - Tropical Depression - an organized system of clouds and thunderstorms with a defined circulation and maximum sustained winds of 38 mph (33 knots) or less. - Tropical Storm - an organized system of strong thunderstorms with a defined circulation and maximum sustained winds of 39 to 73 mph (34-63 knots). - Hurricane - a warm-core tropical cyclone with maximum sustained winds of 74 mph (64 knots) or greater. - Eye - center of a hurricane with light winds and partly cloudy to clear skies. The eye is usually around 20 miles in diameter, but can range between 5 and 60 miles. - Eye Wall - location within a hurricane where the most damaging winds and intense rainfall are found. - Category I - 74-95 mph winds with 4-5 ft. storm surge and minimal damage - Category II - 96-110 mph winds with 6-8 ft. storm surge and moderate damage - Category III - 111-130 mph winds with 9-12 ft. storm surge and major damage - Category IV - 131-155 mph winds with 13-18 ft. storm surge and severe damage - Category V - 155+ mph winds with 18+ ft. storm surge and catastrophic damage - Tropical Storm Watch - issued when tropical storm conditions may threaten a specific coastal area within 36 hours, and when the storm is not predicted to intensify to hurricane strength. 
- Tropical Storm Warning - winds in the range of 39 to 73 mph can be expected to affect specific areas of a coastline within the next 24 hours. - Hurricane Watch - a hurricane or hurricane conditions may threaten a specific coastal area within 36 hours. - Hurricane Warning - a warning that sustained winds of 74 mph or higher associated with a hurricane are expected in a specified coastal area in 24 hours or less.
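The wind-speed thresholds above map directly to a simple lookup. Below is an illustrative sketch in Python (the function name `storm_category` is my own; it encodes only the sustained-wind thresholds listed above, not storm surge or pressure):

```python
def storm_category(wind_mph):
    """Classify a tropical system by maximum sustained winds (mph),
    following the thresholds listed above."""
    if wind_mph <= 38:
        return "Tropical Depression"
    if wind_mph <= 73:
        return "Tropical Storm"
    # Hurricane categories: 74 mph and up
    for label, upper_mph in [("Category I", 95), ("Category II", 110),
                             ("Category III", 130), ("Category IV", 155)]:
        if wind_mph <= upper_mph:
            return label
    return "Category V"

print(storm_category(85))   # a Category I hurricane
```

A 38/39 mph boundary separates depressions from named storms, and 73/74 mph separates tropical storms from hurricanes, matching the definitions above.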
The progressive disappearance of seed-dispersing animals like elephants and rhinoceroses puts the structural integrity and biodiversity of the tropical forests of South-East Asia at risk. With the help of Spanish researchers, an international team of experts has confirmed that not even herbivores like tapirs can replace them. "Megaherbivores act as the 'gardeners' of humid tropical forests: they are vital to forest regeneration and maintain its structure and biodiversity," Ahimsa Campos-Arceiz, lead author of the study published in the journal Biotropica and a researcher at the School of Geography of the University of Nottingham in Malaysia, explained to SINC. In these forests, the large diversity of plant species means that there is not enough space for all the trees to germinate and grow. Besides the scarce light, seed dispersal is complicated by the lack of wind beneath trees that are up to 90 metres high. Plant life is therefore limited to seeds dispersed by animals that eat pulp, which scatter seeds by dropping their food, regurgitating it, or defecating later on. The Asian elephant (Elephas maximus) occupies just 5% of its historical range, and its range will likely continue to shrink as more forests are cut down. With the input of hundreds of experts worldwide, a recent primate review provides scientific data showing the severe threats facing animals that share virtually all of their DNA with humans. In both Vietnam and Cambodia, approximately 90 percent of primate species are considered at risk of extinction. Populations of gibbons, leaf monkeys, langurs, and other species have dwindled due to rampant habitat loss, exacerbated by hunting for food and to supply the wildlife trade in traditional Chinese medicine and pets.
A team of researchers from Singapore, Australia, Switzerland, the UK and the USA has carried out a comprehensive assessment to estimate the impact of disturbance and land conversion on biodiversity in tropical forests. In a recent study published in Nature, they found that primary forests (the least disturbed old-growth forests) sustain the highest levels of biodiversity and are vital to many tropical species. As the human population grows toward 9 billion people and Asia continues to industrialize, habitat loss is going to drive many more species to extinction. A decade ago, the saiga antelope seemed so secure that conservationists fighting to save the rhino from poaching suggested using saiga horn in traditional Chinese medicines as a substitute for rhino horn. In 1993, over a million saiga antelopes roamed the steppes of Russia and Kazakhstan. Today, fewer than 30,000 remain, most of them females. So many males have been shot for their horns, which are exported to China to be used in traditional fever cures, that the antelope may not be able to recover unaided. By way of Alex Tabarrok: the demand for folk medicine in China is wiping out many species. The World Conservation Union says most bear species are threatened with extinction, and Chinese medicine is one of the causes. Six of the world's eight species of bear are threatened with extinction, according to a report from the World Conservation Union (IUCN). The smallest species of bear, the sun bear, has been included on the list for the first time, while the giant panda remains endangered, despite comprehensive conservation efforts in China. China is going to keep industrializing, and Chinese buying power for parts of endangered species is going to keep rising. At the same time, deforestation driven by Asian industrialization, population growth, and other factors will continue. The main threat to bears across south-east Asia comes from poaching.
Although poaching is illegal, poachers are prepared to risk the small chance of being caught for the lucrative gains they can make from sales on the black market. Prized bear body parts include the gall bladder, which is used in traditional Chinese medicine, and the paws, which are considered a delicacy. Another threat to bear populations comes from living in close proximity to human settlements. Bears are often killed when they prey on livestock or raid crops, or when they roam too close to a village, because they are seen as a threat to human safety. The Chinese raise incarcerated bears to extract bile for medicine. That probably saves some of the wild bears from death. But animal rights campaigners oppose using captive bears for bile extraction. Even with a new state-approved "free drip" method of extracting bile, China's incarcerated bears lead miserable, pain-wracked lives, said campaigner Jill Robinson, who says she won't rest until the 7,000 bears kept on China's farms are free. In the past 100 years, tiger populations around the world have declined by 95 percent. In India, home to at least half of the world's tigers, only an estimated 1,500 remain, a decline of more than 50 percent since 2001, according to the government-run National Tiger Conservation Authority. In the past six years, it is believed, tigers have been killed at a rate of nearly one a day. Over the next 20 years, the tiger population could "disappear in many places, or shrink to the point of ecological extinction," according to a 2006 report by the World Wildlife Fund and the Smithsonian National Zoological Park in Washington. Several factors have contributed to the decline in India, including a growing human population. There is also a demand for tiger parts from places such as China, where tiger skins priced at $12,000 and more are used for luxury clothes and wall hangings, and where equally pricey tiger bones are used in traditional medicines.
Compounding the problem, wildlife activists say, is a pro-development Indian government more concerned with the economy than the environment. More than 30 per cent of the world's amphibians, 23 per cent of mammals, and 12 per cent of birds are now threatened with extinction. More than 75 per cent of fish stocks are fully or overly exploited. Six in 10 of the world's leading rivers have been either dammed or diverted. One in 10 of these rivers no longer reaches the sea for part of the year. More than two million people die prematurely every year from indoor and outdoor pollution. Less than 1 per cent of the world's marine ecosystems are protected. Humans are like "a plague of ravenous insects". Humans affect, and are affected by, the environment to an enormous degree. The GEO-4 report includes a number of disquieting statistics on humanity. The global population has grown by 1.7 billion in the 20 years since 1987, to a grand total of 6.7 billion. And these 6.7 billion humans consume like a plague of ravenous insects. One small example noted in the report: every year, 1.1 million to 3.4 million tonnes of undressed wild animal meat, or bushmeat, is eaten by people living in the Congo basin. Except that insects serve as food for birds and other animals, while humans are at the top of food chains. Humanity's footprint on ecosystems keeps getting larger. This can't continue indefinitely. All exponential trends must stop eventually. I would like this one to stop short of ecological disaster.
Words of more than one syllable often have a weak, unclear vowel sound in any syllable which is not stressed. This weak vowel is called the schwa sound. To review this, look back at Unit 24. When the schwa sound comes before l, it makes the /əl/ sound. If the ending is a suffix, the spelling is al. When the ending is not a suffix, the most common way of spelling the /əl/ sound is le, but some words use al or el. Some examples:
- le as in eagle
- al as in oval
- el as in camel
Note: only a few words use ol or il to spell the /əl/ sound at the end of a word. These are listed on pages 5 and 6.
Go to the next page: When to use the le ending.
Japanese units of measurement
The system is Chinese in origin. The units originated in the Shang Dynasty in the 13th century BC, eventually stabilized in the Zhou Dynasty in the 10th century BC, and spread from there to Japan, South East Asia, and Korea. The units of the Tang Dynasty were officially adopted in Japan in 701, and the current shaku measurement has hardly changed since then. Many Taiwanese units of measurement are derived from the shakkanhō system. From 1924, the shakkanhō system was replaced by the metric system, and use of the old units for official purposes was forbidden after 31 March 1966. However, in several instances the old system is still used. In carpentry and agriculture, use of the old-fashioned terms is common. Tools such as Japanese chisels, spatulas, saws, and hammers are manufactured in sizes of sun and bu. Land is sold on the basis of price in tsubo. Until the 2005 Japanese census, people were able to give the area of their houses in either square metres or tsubo. The tsubo was not used in the 2010 census. There are several different versions of the shakkanhō; the tables below show the one in common use in the Edo period. In 1891 the most common units were given definitions in terms of the metric system. Note: definitions are exact and conversions are rounded to four significant figures. The basis of the shakkanhō length measurements is the shaku, which originated in ancient China. The other units are all fixed fractions or multiples of this basic unit. The shaku was originally the length from the thumb to the middle finger (about 18 cm or 7.1 in), but its length, and hence the length of the other units, gradually increased, since the length of the unit was related to the level of taxation. Various shaku developed for various purposes. The standard shaku, on which other measurements such as area are based, is called the kanejaku (曲尺) to distinguish it from the others.
Kanejaku means "carpenter's square", and this shaku was used by Japanese carpenters. The carpenter's shaku, used for construction, preserved the original Chinese shaku measurement because it was never altered, whereas the other shaku systems, which were used for taxation or trade, were altered to increase taxation and hence gradually deviated from the original value. The kujirajaku (鯨尺), literally "whale shaku", was a standard used in the clothing industry. The name comes from the rulers, which were made from baleen. A kujirajaku is 25% longer than a kanejaku. As well as the kanejaku and kujirajaku systems, other shaku systems also existed. One example is the gofukujaku (呉服尺), used for traditional Japanese clothing such as kimonos. In the gofukujaku system, one shaku equals 1.2 times the kanejaku. Shaku units are still used for construction materials in Japan. For example, plywood is usually manufactured in 182 cm × 91 cm (about 72 in × 36 in) sheets known in the trade as saburokuhan (3 × 6版), or 3 × 6 shaku. Each sheet is about the size of one tatami mat. The thicknesses of the sheets, however, are usually measured in millimetres. The names of these units also live on in the name of the bamboo flute shakuhachi (尺八), literally "shaku eight", which measures one shaku and eight sun, and in the Japanese version of the Tom Thumb story, Issun Bōshi (一寸法師), literally "one sun boy", as well as in many Japanese proverbs. Note: there is an older type of ri, about 600 m. This can be seen in use, for example, in beach names: Kujukuri Beach is 99 ri (kyu ju ku), about 60 km, and Shichiri Beach is 7 ri (shichi), about 4.2 km. While this use is evidence of the existence of the 'old' ri, information about it in English is hard to come by. The tsubo, which is essentially the area of two standard-sized tatami mats (tatami have an aspect ratio of 2:1, so two side by side form a square), is still commonly used in discussing land pricing in Japan.
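Since the 1891 metric definitions mentioned above are exact, the core shakkanhō conversions can be written down as a short sketch (Python; the constant names are my own, and the tsubo and ri relations assume the standard definitions of 1 tsubo = 1 square ken and 1 ri = 12,960 shaku):

```python
# 1891 metric definition: 1 shaku = 10/33 m exactly (about 30.3 cm)
SHAKU_M = 10 / 33
SUN_M = SHAKU_M / 10        # 1 sun  = 1/10 shaku
BU_M = SHAKU_M / 100        # 1 bu   = 1/100 shaku
KEN_M = 6 * SHAKU_M         # 1 ken  = 6 shaku
RI_M = 12960 * SHAKU_M      # 1 ri   = 12,960 shaku, about 3.93 km
TSUBO_M2 = KEN_M ** 2       # 1 tsubo = 1 square ken, about 3.306 m^2

def tsubo_to_m2(tsubo):
    """Convert a land area quoted in tsubo to square metres."""
    return tsubo * TSUBO_M2

def m2_to_tsubo(m2):
    """Convert square metres to tsubo."""
    return m2 / TSUBO_M2
```

For example, a 100-tsubo plot is about 330.6 m². The kujirajaku and gofukujaku described above follow from the same base: multiply SHAKU_M by 1.25 or 1.2 respectively.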
Note that actual tatami vary in size regionally, though legally the area of a tsubo is standardized. The larger units are also commonly used by Japanese farmers for discussing the sizes of fields. These units are still used, for example, in sake production. The Japanese unit of mass, the momme, is a recognized unit in the international pearl industry; one momme is 3.75 g, and the kan or kanme (貫, 貫目) is 1000 momme, or 3.75 kg (about 132.3 oz, or 8.267 lb). The names of old money live on in Japanese proverbs such as haya oki wa san mon no toku, literally "Waking early gets you three mon", comparable to the English-language proverb "Early to bed, early to rise makes a man healthy, wealthy, and wise."
- 1 hiki (疋) = 10 mon
- 1 kanmon (貫文) = 100 hiki
Apart from shakkanhō and the metric system, other units are also commonly used in Japan. For example, the inch is used in the following:
- The tyre sizes of bicycles, which are based on a British system
- In the computer industry, for the sizes of parts, connectors, and semiconductor wafers
- Together with feet, for the width and length of magnetic tape
- The sizes of television and monitor/phone screens. However, the character 型 ("-gata") is substituted for インチ ("inch") on televisions. Thus, a television with a 17-inch diagonal measure is described as 17型. If the television screen is a 16:9 ratio, the letter "V" (for "vista") prefixes the character 型 ("-gata"). For example, 32V型 means 32-inch widescreen.
- The sizes of photographic prints, though rounded to the nearest millimetre
See also: History of measurement, Japanese clock, Japanese counter word, Japanese numerals, Units of measurement
- "改正度量衡法規". National Diet Library. Retrieved 7 July 2010.
- "メートル条約". International Metrology Cooperation Office. Retrieved 7 July 2010.
- Ministry of Railway (鉄道省 Tetsudō-shō). Nippon (or Nihon) Tetsudō-shi 日本鉄道史 [Japan Railway History] (in Japanese). Vol. 1 of 3 (上巻 Jōkan). Tokyo: Ministry of Railway. p. 49. "In the 10th month of Meiji 3, probably November 1871, we defined 1 English foot of railway as 1 shaku 4 rin (1.004 shaku) of ours."
- "Chōbu" is used when no fraction follows
- Japanese Carpentry Museum
- Japanese units (Japanese)
- Convert traditional Japanese units to metric and imperial units (lengths, areas, volumes, weights) (sci.lang.Japan FAQ pages)
- Simple Japanese Traditional Area Units Converter
- Simple Japanese Distance and Length Units Converter
On March 18, 2011, the MESSENGER spacecraft entered orbit around Mercury to become that planet's first orbiter. Mercury, the last frontier of planetary exploration that NASA will reach for quite some time, "is the last of the classical planets, the planets known to the astronomers of Egypt and Greece and Rome and the Far East," said Sean C. Solomon of the Carnegie Institution of Washington, the mission's principal investigator. "It's an object that has captivated the imagination and the attention of astronomers for millennia." "We are assembling a global overview of the nature and workings of Mercury for the first time," Solomon remarked, "and many of our earlier ideas are being cast aside as new observations lead to new insights. Our primary mission has another three Mercury years to run, and we can expect more surprises as our Solar System's innermost planet reveals its long-held secrets." The spacecraft's instruments are making a complete reconnaissance of the planet's geochemistry, geophysics, geologic history, atmosphere, magnetosphere, and plasma environment. MESSENGER is providing a wealth of new information and some surprises. For instance, Mercury's surface composition differs from that expected for the innermost of the terrestrial planets, and Mercury's magnetic field has a north-south asymmetry that affects the interaction of the planet's surface with charged particles from the solar wind. Tens of thousands of images reveal major features on the planet in high resolution for the first time. Measurements of the chemical composition of the planet's surface are providing important clues to the origin of the planet and its geological history. Maps of the planet's topography and magnetic field are offering new evidence on Mercury's interior dynamical processes.
And scientists now know that bursts of energetic particles in Mercury's magnetosphere are a continuing product of the interaction of Mercury's magnetic field with the solar wind. "MESSENGER has passed a number of milestones just this week," offered Sean Solomon. "We completed our first perihelion passage from orbit on Sunday, our first Mercury year in orbit on Monday, our first superior solar conjunction from orbit on Tuesday, and our first orbit-correction maneuver on Wednesday. Those milestones provide important context to the continuing feast of new observations that MESSENGER has been sending home on nearly a daily basis." Images obtained with MESSENGER's Mercury Dual Imaging System (MDIS) are being combined into maps for the first global look at the planet under optimal viewing conditions. New orbital images of areas near Mercury's north pole show that the region hosts one of the largest expanses of volcanic plains deposits on the planet, with thicknesses of up to several kilometers. The broad expanses of plains confirm that volcanism shaped much of Mercury's crust and continued through much of Mercury's history, despite an overall contractional stress state that tended to inhibit the extrusion of volcanic material onto the surface. Among the fascinating features seen in flyby images of Mercury were bright, patchy deposits on some crater floors, but they remained a curiosity. New targeted MDIS observations reveal these patchy deposits to be clusters of rimless, irregular pits with horizontal dimensions ranging from hundreds of meters to several kilometers. These pits are often surrounded by diffuse halos of higher-reflectance material, and they are found associated with central peaks, peak rings, and rims of craters.
"The etched appearance of these landforms is unlike anything we've seen before on Mercury or the Moon," says Brett Denevi, a staff scientist at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Md., and a member of the MESSENGER imaging team. "We are still debating their origin, but they appear to have a relatively young age and may suggest a more abundant than expected volatile component in Mercury's crust." The X-Ray Spectrometer (XRS) has made several important discoveries since orbit insertion. The magnesium/silicon, aluminum/silicon, and calcium/silicon ratios averaged over large areas of the planet's surface show that, unlike the surface of the Moon, Mercury's surface is not dominated by feldspar-rich rocks. XRS observations have also revealed substantial amounts of sulfur at Mercury's surface, lending support to suggestions from ground-based observations that sulfide minerals are present. This discovery suggests that Mercury's original building blocks may have been less oxidized than those that formed the other terrestrial planets and could be key to understanding the nature of volcanism on Mercury. MESSENGER's Gamma-Ray and Neutron Spectrometer detected the decay of radioactive isotopes of potassium and thorium, and researchers have determined the bulk abundances of these elements. "The abundance of potassium rules out some prior theories for Mercury's composition and origin," says Larry Nittler, a staff scientist at the Carnegie Institution. "Moreover, the inferred ratio of potassium to thorium is similar to that of other terrestrial planets, suggesting that Mercury is not highly depleted in volatiles, contrary to some prior ideas about its origin." MESSENGER's Mercury Laser Altimeter has been mapping the topography of Mercury's northern hemisphere in detail. The north polar region, for instance, is a broad area of low elevations. The overall topographic height range seen to date exceeds 9 kilometers (5.5 miles). 
Previous Earth-based radar images showed that around Mercury's north and south poles are deposits thought to consist of water ice and perhaps other ices preserved on the cold, permanently shadowed floors of high-latitude impact craters. MESSENGER's altimeter is measuring the floor depths of craters near the north pole. The depths of craters with polar deposits support the idea that these areas are in permanent shadow.

The geometry of Mercury's internal magnetic field can potentially allow the rejection of some theories for how the field is generated. The spacecraft found that Mercury's magnetic equator is well north of the planet's geographic equator. The best-fitting internal dipole magnetic field is located about 0.2 Mercury radii, or 480 km (298 miles), northward of the planet's center. The dynamo mechanism responsible for generating the planet's magnetic field therefore has a strong north-south asymmetry.

As a result of this north-south asymmetry, the geometry of magnetic field lines is different in Mercury's north and south polar regions. In particular, the magnetic "polar cap" where field lines are open to the interplanetary medium is much larger near the south pole. This geometry implies that the south polar region is much more exposed than the north to charged particles heated and accelerated by the solar wind. The impact of those charged particles onto Mercury's surface contributes both to the generation of the planet's tenuous atmosphere and to the "space weathering" of surface materials, both of which should have a north-south asymmetry.

One of the major discoveries made during the Mariner 10 flybys of Mercury in 1974 was the observation of bursts of energetic particles in Mercury's Earth-like magnetosphere. Four bursts of particles were observed on the first flyby, so it was puzzling that no such events were detected by MESSENGER during any of its three flybys.
"While varying in strength and distribution, bursts of energetic electrons—with energies from 10 kiloelectron volts (keV) to more than 200 keV—have been seen in most orbits since orbit insertion," said MESSENGER Project Scientist Ralph McNutt, of APL. "The Energetic Particle Spectrometer has shown these events to be electrons rather than energetic ions, and to occur at moderate latitudes. The latitudinal location is entirely consistent with the events seen by Mariner 10." With Mercury's smaller magnetosphere and with the lack of a substantial atmosphere, the generation and distribution of energetic electrons differ from those at Earth. One candidate mechanism for their generation is the formation of a "double layer," a plasma structure with large electric fields along the local magnetic field. Another is induction brought about by rapid changes in the magnetic field, a process that follows the principle used in generators on Earth to produce electric power. The mechanisms at work will be the studied over the coming months. The Daily Galaxy via Carnegie Institution
Describes Avogadro’s hypothesis and defines the standard temperature and pressure for measuring amounts of gases.

A video tutorial on Avogadro's Law (Hypothesis) and the conversion between moles and volume. Courtesy of Tyler DeWitt.

Reviews Avogadro’s hypothesis, the definition of standard temperature and pressure, and the determination of molar volume for gases.

Practice Avogadro's Hypothesis and Molar Volume questions.

To activate prior knowledge, to generate questions about a given topic, and to organize knowledge using a KWL Chart.

To describe images using words using the Visual Literacy strategy.

A study guide on Avogadro's hypothesis, gas volume conversions, and gas density.

Shows an experiment that demonstrates Avogadro's hypothesis. The video shows the relationship between gas volume, number of particles, and mass.
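The mole–volume conversions these resources practise follow directly from Avogadro's hypothesis: equal volumes of gases at the same temperature and pressure contain equal numbers of particles, so at STP one mole of any ideal gas occupies about 22.4 L. A minimal Python sketch (the 22.4 L/mol constant assumes the classic STP convention of 0 °C and 1 atm):

```python
MOLAR_VOLUME_STP = 22.4  # L/mol at 0 degrees C and 1 atm (classic STP convention)

def moles_to_volume(moles):
    """Volume in litres occupied by `moles` of an ideal gas at STP."""
    return moles * MOLAR_VOLUME_STP

def volume_to_moles(volume_l):
    """Moles of an ideal gas contained in `volume_l` litres at STP."""
    return volume_l / MOLAR_VOLUME_STP

# 2.00 mol of any gas at STP occupies the same volume:
print(moles_to_volume(2.00))   # 44.8 (L)
print(volume_to_moles(11.2))   # 0.5 (mol)
```

Dividing a sample's mass by the moles returned from `volume_to_moles` also gives its molar mass, which is how gas-density problems are usually tied back to Avogadro's hypothesis.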
Theme III.a: Micromorphological characteristics of plant drugs (vegetal drugs, herbal drugs)

Histology is the microscopic study of plant and animal tissues. Tissues are groups of cells that have the same function and work in coordination with each other.

Progress in histology was slow until the nineteenth century, when the microscope began to acquire a form similar to the present one and the microtome, an instrument that makes it possible to cut very thin tissue sections, was invented by the Czech physiologist Jan Evangelista Purkinje. In 1907, the American biologist Ross Granville Harrison discovered that living tissue could be cultured, i.e., grown outside its original organ. Histology was further advanced by the development in the early twentieth century of the electron microscope and by the introduction in 1968 of the scanning electron microscope, as well as by the many improvements made to microscope design in recent years.

The techniques of histochemistry and cytochemistry are closely related and concern the investigation of the chemical activity taking place in cells and tissues. For example, the presence of certain colors within cells may indicate the type of chemical reaction that has taken place. Histochemical methods are very useful in the study of enzymes, the catalytic substances that control and direct many cellular activities.
Number Words Exercise
In this number recognition activity, students examine 22 numerals and fill in the blanks with the corresponding number words.

Math Stars: A Problem-Solving Newsletter Grade 2
Develop the problem-solving skills of your young learners with this collection of math newsletters. Covering a variety of topics ranging from simple arithmetic and number sense to symmetry and graphing, these worksheets offer a nice...
2nd - 4th Math CCSS: Adaptable

Math Stars: A Problem-Solving Newsletter Grade 1
Keep the skills of your young mathematicians up-to-date with this series of newsletter worksheets. Offering a wide array of basic arithmetic, geometry, and problem-solving exercises, this resource is a great way to develop the critical...
1st - 3rd Math CCSS: Adaptable

Grade 1 Supplement Set A4 Number & Operations: Equivalent Names
Engage young mathematicians in developing their basic arithmetic skills with these great hands-on activities. Using Unifix® cubes to model a variety of single-digit addition and subtraction problems, children build a basic understanding of...
K - 2nd Math CCSS: Adaptable
For students to truly embrace their learning, they need to be an active part of the feedback process. Feedback needs to be ongoing and collaborative between the teacher and student. Most often, feedback starts with a grade on a particular assessment. However, the feedback process can truly begin before an assessment is actually administered.

Before administering our Quarter 1 assessments in our high school science classes, teachers discussed our department’s philosophy behind the assessment. This philosophy was disseminated to all science students by their teachers and included the following information:

- The Quarter 1 exam is a district-wide assessment that has been created/reviewed by the teachers.
- It consists of 30 multiple choice questions on material that has already been covered.
- Scores on this assessment reflect a snapshot of where each student is at this point.
- Teachers will run an item analysis and analyze the results to identify concepts that were difficult for students to master.
- On our Superintendent’s Conference Day, teachers will work in teams to construct learning activities that will be conducted during a “GAP Day.” These activities will be designed to reteach or disseminate the information in a new way, thereby closing the learning GAPs.
- Activities will be in the form of learning stations.
- Concepts will be on future assessments to reevaluate achievement.
- Our goal as a science department is to allow all students to be successful in all areas of the curriculum.

Sharing this philosophy is important: it helps students understand that an assessment is not the last step of the learning process. It guides both students and teachers to:
1) identify areas for improvement
2) brainstorm ways to reteach/relearn the material

Before teachers can give authentic feedback to students, they themselves need to analyze the results of an assessment and reflect on the teaching/learning process that has taken place. Let’s first think of what analysis really means.
Analysis can be defined as methodically examining something in detail and/or separating material into its constituent parts. Over the past 20+ years of my teaching experience, I have found that my own analysis has evolved. Specifically, it has become more in-depth, and I have picked up some creative ideas from colleagues.

The first type of analysis that can be done is what I like to refer to as chunking the data. For example, if a science teacher has 3 separate classes taking the first quarter exam, they may run all scantrons through for grading and an item analysis for the group as a whole. This would give information such as the most commonly missed questions, questions that all students may have answered correctly, and those questions that fall somewhere in between. One could then reflect on the material that may have been difficult to grasp, or perhaps a question that was difficult to interpret.

To gather more information on this chunked data, this teacher could then share results with another teacher who has given the same assessment. They may find similarities in the results. If this is the case, they can work together to brainstorm ways to reteach a particular concept. If, however, one teacher’s students really struggled on a particular item and another teacher’s students excelled on the same concept, the teacher who had success with the concept can share how their students learned it. Perhaps it was taught hands-on, or in a variety of ways to reach all learning styles, or maybe the teacher just knew from experience that students struggled with this particular concept in the past and provided more practice problems in this area.

The teacher can then go back and actually provide this specific feedback to students. Some examples of this feedback could be:

- Many of you did not get this concept correct. You will have the opportunity to have more practice with this concept.
- A bunch of you did not get this concept correct. Upon reflecting on how it was taught, I realized it is a difficult concept and you could all use a different approach to learning about this topic. I have provided some videos for you to watch, or I constructed a hands-on experiment to illustrate the concept.
- Number 18 was definitely a challenge. I believe that you all understand the concepts/facts needed to answer this question. However, it was a difficult question to interpret. I will be providing opportunities for you to practice questions like this one, and give more strategies for you to interpret what is being asked.

Notice that for examples 2 and 3, the teacher is disclosing that perhaps they as teachers can do more to raise achievement. It is important for students to realize that student success is teamwork between the teacher and student. Disseminating feedback in this way is an important part of the process.

In addition to chunking all students’ scores together, results can be broken down and analyzed in a variety of ways. Each class can be analyzed separately, as feedback may differ between classes. If a teacher has a heterogeneous class, perhaps they may want to run certain analyses separately. I have found that special education students have the same potential to achieve as all other students. However, the way they learn is often different. One could separately analyze the results of struggling students. When the data is chunked, it may appear that most of the students mastered a particular concept. However, what if all the students who got this concept wrong were special education students? Analyzing their results separately could help to identify different approaches to a topic. The teacher can then work with these students in a group and individualize feedback in a positive way.

Feedback can begin with the explanation of assessments followed by the disclosure of what can be done differently to enhance learning.
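The item analysis described above is straightforward to automate once scantron results are in machine-readable form. A hypothetical Python sketch (the sample answer strings and the 70% flag threshold are illustrative, not from the article):

```python
def item_analysis(responses, answer_key):
    """Percent of students answering each question correctly.

    responses: list of answer strings, one per student, e.g. "ABCDA"
    answer_key: the correct answer string, same length.
    """
    n_students = len(responses)
    results = {}
    for q, correct in enumerate(answer_key, start=1):
        right = sum(1 for r in responses if r[q - 1] == correct)
        results[q] = 100 * right / n_students
    return results

# Three students, five questions:
scores = item_analysis(["ABCDA", "ABDDA", "CBCDB"], "ABCDA")
flagged = [q for q, pct in scores.items() if pct < 70]  # commonly missed items
print(flagged)  # [1, 3, 5]
```

Running the same function on each class (or each subgroup) separately, then comparing the flagged lists, reproduces the "chunked" versus per-class comparisons the article recommends.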
The culture of a classroom should include the understanding that an assessment is not the end of the learning process, but rather a tool that can enhance achievement.

Debbie Langone is currently the Science Chairperson at East Meadow High School in Long Island, New York. Her 24 years of experience in the field of education span elementary to college level. Her philosophy is that all students should be in an environment where they can achieve, and there should never be a ceiling for expectations. Rather, individual maximum potential should be a variable that changes with each topic and grows throughout a student's education. Debbie Langone is a past recipient of the Distinguished Teacher Award from the Harvard Club of Long Island.
well it gives me a sequence 7, 11, 15, 19 and says to calculate the difference between the successive terms, then asks to determine the formula that generates the sequence. any idea on how to do it or an easy explanation for a 14 yr old boy (Worried) and im actually from england :P

You have a sequence, which means an ordered list of numbers. Let us call the first U1, the second U2, the third U3, the fourth U4, and so on. The Nth one will be UN. Here you have:

U1 = 7
U2 = 11
U3 = 15
U4 = 19

And so, more generally:

UN - UN-1 = 4

which can be written:

UN = UN-1 + 4

And this is the formula that generates the sequence. This kind of sequence has an important role in mathematics and in practical life.

The formula for the Nth term of an arithmetic sequence is given by the expression UN = U1 + (N - 1)d, where U1 is the first term, UN is the Nth term of the sequence, and d is the common difference. Remember that d = second term - first term.

You have been asked to find UN. You know U1 = 7 and d = 11 - 7 = 4. Don't plug in a number for N! Without N left in it, the formula would just be a pure number, and you want a formula: UN = 7 + 4(N - 1) = 4N + 3.

To check your answer, plug in different values for N:

When N = 1, the first term is 4(1) + 3 = 7.
When N = 2, the second term is 4(2) + 3 = 11.
When N = 3, the third term is 4(3) + 3 = 15.
When N = 4, the fourth term is 4(4) + 3 = 19.

This is the same as the above sequence. I hope that helps. :)
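For a 14-year-old it can also help to see the rule checked mechanically. A short Python sketch of the sequence 7, 11, 15, 19 (the helper name `nth_term` is just for illustration):

```python
def nth_term(n, first=7, d=4):
    """Nth term of an arithmetic sequence: U_n = U_1 + (n - 1) * d."""
    return first + (n - 1) * d

# Regenerate the sequence 7, 11, 15, 19:
print([nth_term(n) for n in range(1, 5)])  # [7, 11, 15, 19]

# Successive differences are all equal to the common difference:
terms = [nth_term(n) for n in range(1, 6)]
print([b - a for a, b in zip(terms, terms[1:])])  # [4, 4, 4, 4]
```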
The procedure of amendment makes the Constitution of India neither totally rigid nor totally flexible, but rather a curious mixture of both. Some provisions can be easily changed, and for others, special procedures are to be followed. Despite the fact that India is a federal state, a proposal for amending the Constitution can be initiated only in a House of the Union Legislature, and the State Legislatures have no such power.

In the case of ordinary legislation, if both Houses of Parliament disagree, a joint session is convened. But in the case of an amendment bill, unless both Houses agree, it cannot materialize, as there is no provision for convening a joint session of both Houses of Parliament in such cases.

In fact, there are three methods of amending the Constitution, but Article 368 of the Constitution, which lays down the procedure for amendment, mentions only two of them.

1) An amendment of the Constitution may be initiated only by the introduction of a Bill for the purpose in either House of Parliament, and when the Bill is passed in each House:

i) by a majority of the total membership of that House, and
ii) by a majority of not less than two-thirds of the members of that House present and voting,

it shall be presented to the President, who shall give his assent to the Bill, and thereupon the Constitution shall stand amended in accordance with the terms of the Bill. Most of the provisions of the Constitution can be amended by this procedure.

2) For amending certain provisions, a special procedure is to be followed: (i) a Bill for the purpose must be passed in each House of Parliament by a majority of the total membership of the House, (ii) by a majority of not less than two-thirds of the members of that House present and voting, and (iii) it should be ratified by the legislatures of not less than one-half of the states before the Bill is presented to the President for assent.
The provisions requiring this special procedure to be followed include: (a) the manner of the election of the President, (b) matters relating to the executive power of the Union and of the states, (c) representation of the states in Parliament, (d) matters relating to the Union Judiciary and the High Courts in the states, (e) distribution of legislative powers between the Union and the states, (f) any of the lists in the Seventh Schedule, and (g) the provisions of Article 368 itself relating to the procedure for amendment of the Constitution.

3) There are certain provisions which require only a simple majority for amendment. They can be amended by the ordinary law-making process. They include: (a) formation of new states and alteration of the areas, boundaries, or names of existing ones, (b) creation or abolition of Legislative Councils in the states, (c) administration and control of Scheduled Areas and Scheduled Tribes, (d) the salaries and allowances of Supreme Court and High Court judges, and (e) laws regarding citizenship. It is significant that laws passed by Parliament to change the above provisions would not be deemed to be amendments of the Constitution for the purposes of Article 368.

(If there is still a problem in understanding, questions can be asked in the comments.)
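The double voting threshold in Article 368 is just arithmetic, and can be sketched as a quick check in Python (the membership and vote counts below are illustrative, not from the text):

```python
def passes_special_majority(total_membership, present_and_voting, ayes):
    """Check Article 368's double threshold for an amendment bill in one House:
    (1) ayes exceed half of the House's total membership, and
    (2) ayes are at least two-thirds of the members present and voting."""
    return (ayes > total_membership / 2) and (ayes >= (2 / 3) * present_and_voting)

# Hypothetical division: a 545-member House, 350 present and voting, 300 ayes.
print(passes_special_majority(545, 350, 300))  # True: 300 > 272.5 and 300 >= ~233.3
```

Note that an absolute majority of 350 members present (say 250 ayes) would pass an ordinary bill but fail this check, because 250 does not exceed half of the total membership of 545.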
Name: Laura Maria W.
Date: October 31, 2003

Why do seismic waves bend when they reach the boundary between the Earth's mantle and core? Why do they then carry on bending as they go through the mantle?

The "long" answer is complicated. The short answer is that the speed of the wave depends upon the properties of the medium through which it is traveling -- particularly the density of the medium. Just as light is "bent" when it passes through glass or water because the speed of light is different in different media, so too seismic waves are "bent" when they pass through media of different densities. Within the mantle itself, density increases gradually with depth, so the wave speed changes continuously and the waves keep bending as they travel, rather than only at sharp boundaries.

Update: June 2012
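The light analogy above can be made quantitative: refraction at a boundary obeys Snell's law, sin(theta1)/v1 = sin(theta2)/v2, where v is the wave speed in each medium, and the same relation governs seismic waves. A Python sketch (the P-wave speeds used are typical textbook values for the lowermost mantle and outer core, not measurements):

```python
import math

def refraction_angle(theta1_deg, v1, v2):
    """Angle (degrees) of the transmitted wave, from Snell's law:
    sin(theta1)/v1 = sin(theta2)/v2. Returns None if no transmitted wave."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if abs(s) > 1:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# A P wave hitting the mantle-core boundary at 30 degrees from the normal.
# Illustrative speeds: ~13.7 km/s (lowermost mantle) vs ~8.0 km/s (outer core).
print(refraction_angle(30, 13.7, 8.0))  # about 17 degrees: bent toward the normal
```

Because the wave slows down on entering the outer core, it bends toward the normal, exactly as light does entering glass from air.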
A West Virginia University physicist and his colleagues have discovered hundreds of previously unknown sites of massive star formation in the Milky Way, including the most distant such objects yet found in our home galaxy. Ongoing studies of these objects promise to give crucial clues about the structure and history of the Milky Way.

Loren Anderson, assistant professor of physics at WVU, and his colleagues, Thomas Bania of Boston University and Dana Balser at the National Radio Astronomy Observatory, found regions where massive young stars or clusters of such stars are forming. These regions, which astronomers call HII (H-two) regions, serve as markers of the galaxy’s structure, including its spiral arms and central bar. HII regions are ionized zones around very massive stars. The stars powering HII regions are more than 20 times the mass of the Sun.

Anderson has created a catalogue of all these regions that allows scientists to better characterize the statistical properties of HII regions, trace galactic structure, determine differences in star formation properties in a variety of environments, compare our galaxy with other galaxies in the Universe, and examine the impact of evolved HII regions in triggering the creation of second-generation stars.

“The problem to this point had been that it was very difficult to get a complete sample because we did not have a survey of the whole sky that could find all of the HII regions in the galaxy,” Anderson explained. “NASA launched the WISE satellite a few years ago and the data were released this past March. The WISE all-sky survey at infrared wavelengths, where these HII regions emit a lot of energy, allowed me to compile a complete sample of HII regions in the galaxy for the first time.”

Anderson reports that data from WISE (Wide-field Infrared Survey Explorer) show about 2,000 new HII-region candidates that the team is studying.
The three men presented their work at the American Astronomical Society’s meeting in Long Beach, Calif. “We’re vastly improving the census of our galaxy, and that’s a key to understanding both its current nature and its past history, including the history of possible mergers with other galaxies,” Bania said.

The astronomers are using the National Science Foundation’s Robert C. Byrd Green Bank Telescope in Green Bank, W.Va., the Arecibo Telescope in Puerto Rico, and data from NASA’s Spitzer and WISE satellites. They plan to expand the effort to include Australian radio telescopes.

The effort began with a survey of the Milky Way using the Green Bank Telescope. Anderson and his colleagues looked for HII regions by seeking faint emission of hydrogen atoms at radio wavelengths, which is unobscured by the dust in the galaxy’s disk. By detecting these emissions, dubbed radio recombination lines, or RRLs, the Green Bank Telescope survey more than doubled the number of known HII regions in the Milky Way. They continued that work using the Arecibo Telescope, finding additional objects, including the largest HII region yet known, nearly 300 light-years across. Data from previous surveys with radio and infrared telescopes, including Spitzer and WISE, helped to guide the new search. Later work analyzed similar emissions of helium and carbon atoms.

“The great sensitivity of the GBT and the Arecibo telescope, along with advanced electronics, made our new surveys possible,” Balser said.

The work so far has helped refine astronomers’ understanding of the galaxy’s structure. They found concentrations of star formation in poorly understood distant spiral arms and at the end of the galaxy’s central bar. Another major focus of the surveys is to study chemical variations in different regions of the galaxy.
Variations in the abundance of elements heavier than hydrogen can trace the history of star formation, and can also indicate regions possibly containing material incorporated into the galaxy through mergers with other galaxies throughout its history.

“We’ve already been surprised to learn that the thin, tenuous gas between the stars is not as well-mixed as we thought,” Balser said. “Finding areas that are chemically different from their surroundings can point to where gas clouds or smaller galaxies may have fallen into the Milky Way.”

“Just as geologists traverse the landscape, mapping different rock types to reconstruct the Earth’s history, we’re working to improve the map of our galaxy to advance our understanding of its structure and its history,” Bania said.

This research was made possible by a grant from NASA to Loren Anderson worth $255,083. For more information contact Loren Anderson at Loren.Anderson@mail.wvu.edu.

CONTACT: Rebecca Herod, Director of Marketing and Communication
Construction paper (sugar paper) is a tough, coarse, coloured paper. The texture is slightly rough, and the surface is unfinished. Due to the source material, mainly wood pulp, small particles are visible on the paper’s surface. It is used for projects and crafts.

The origin of the term "sugar paper" lies in its use for making bags to contain sugar. It is related to the "blue paper" used by confectionery bakers from 17th-century England onwards; for example, in the baking of Regency ratafia cakes (or macaroons). The term "construction paper" became associated with the material in the early 20th century, although the general process for creating the paper began in the late 19th century, when industrialized paper production and synthetic dye technology were combined. Around that time, construction paper was primarily advertised for classroom settings as an effective canvas for supporting multiple drawing mediums.

The process for creating the paper was machine-oriented: it exposed the paper to dyes while it was still pulp, resulting in a thorough distribution and brilliance of colour. The primary dyes involved in producing construction paper were abundant until Germany, the main producer of aniline for dyes at the time, became involved in World War I and ceased its exports. The shortage marked a period in which construction paper was created using substitute colouring methods.

One of the defining features of construction paper is the radiance of its colours. Before the methodology behind construction paper's colouring was introduced, most paper was coloured with pigments and vegetable oil, which had weaker staining capabilities. Synthetic dyes were later developed, which provided a wider range of colours, stronger dyeing strength, and lower costs. However, the colours given by synthetic dyes tend to fade over short periods of time, an effect often seen in construction paper, noted by greying colours and brittleness.
- "Dictionary of Traded Goods and Commodities, 1550-1820". British History online. 2007. Retrieved 2007-11-24. - Driver, Dustin (1999-03-26). "South Park Studios: No Walk in the Park". Apple.com. Retrieved 2011-11-18. - http://cool.conservation-us.org/coolaic/sg/bpg/annual/v16/bp16-07.html Construction Paper: A Brief History of Impermanence |This material-related article is a stub. You can help Wikipedia by expanding it.|
Celebrate the 100th Day of School!
Creative ideas recommended by teachers like you
- Grades: PreK–K, 1–2, 3–5, 6–8

Language Arts Activities

On the 100th day of school I read the book The Wolf's Chicken Stew by Keiko Kasza. After reading the book, I draw a picture of the wolf and a stew pot, and I have my class draw 100 items of food in the pot.
–Shari Lawson, Hampton, VA, First grade

In my first grade class we have a reading "Camp Out." Everyone brings sleeping bags, pillows, and blankets. We spend the day reading 100 books and munching on trail mix made from contributions of 100 small edible goodies.
–Tammy Manley, Massena, NY, First grade

We've been reading Emily's First 100 Days of School by Rosemary Wells. This book is great for primary grades and allows students to have a better understanding of time. We have covered interesting math topics such as skip-counting, counting by 5s, and counting by 10s. We use a chart where we write the days every time we read from the book. This has allowed us to clearly see the progression of time and even predict how many more days until the 100th day of school. This is an ongoing project that we visit every afternoon before we start our math lesson.
–Ms. Demas, Plainfield, NJ

In my first grade class, we are going to write 100 words that we know, count objects to 100 in different ways, and write about what we would do with $100.
–Sally, Tampa, FL, First grade

Our first grade class of 26 children will be collectively reading 100 books and sharing our favorites.
–Pat Salvatini, Rolling Meadows, IL, First grade My class will celebrate our 100th day of school this year by: - each bringing in a baggie of 100 of something - inviting 100 people to visit our classroom (we give a lollipop to each person) - reading 100 books by our special day - making a list of 100 words we can read and write to display in the hallway –Sandy Reiser, Fort Worth, TX, First grade Have students make their own books based on one of the following themes (or make up your own): - I wish I had 100 __. - What would you do with 100 brothers and sisters? - What would you do with 100 kisses? - What would you do with 100 pennies? - What would you do with 100 dollars? - What would you do with 100 slimy slugs? - You have been in school for 100 days. What has been your favorite thing? - What will you do when you are 100 years old? - What was it like to be a student your age 100 years ago? - What will it be like to be a student your age 100 years from now? - Things our moms and dads have told us 100 times! Or, try making a poem or story with exactly 100 words in it. Read 100 different stories of any genre (mysteries, fairy tales, etc.). As an extra challenge, do a different author for every 10 books! Ask your students to write down 100 words that they know how to spell.
Only about 5% of the universe consists of ordinary matter such as protons and electrons, with the rest being filled with mysterious substances known as dark matter and dark energy. So far, scientists have failed to detect these elusive materials, despite spending decades searching for them. But now, two new studies may be able to turn things around, as they have narrowed down the search significantly.

Dark matter was first proposed more than 70 years ago to explain why the force of gravity in galaxy clusters is so much stronger than expected. If the clusters contained only the stars and gas we observe, their gravity should be much weaker, leading scientists to assume there is some sort of matter hidden there that we can’t see. Such dark matter would provide additional mass to these large structures, increasing their gravitational pull. The main contender for the substance is a type of hypothetical particle known as a “weakly interacting massive particle” (WIMP).

To probe the nature of dark matter, physicists look for evidence of its interactions beyond gravity. If the WIMP hypothesis is correct, dark matter particles could be detected through their scattering off atomic nuclei or electrons on Earth. In such “direct” detection experiments, a WIMP collision would cause these charged particles to recoil, producing light that we can observe.

One of the main direct detection experiments in operation today is XENON100, which has just reported its latest results. The detector is located deep underground to reduce interference from cosmic rays, at the Gran Sasso laboratory in Italy. It consists of a 165 kg container of liquid xenon, which is highly purified to minimise contamination. The detector material is surrounded by arrays of photomultiplier tubes (PMTs) to capture the light from potential WIMP interactions.

The new XENON100 report has found no evidence of WIMPs scattering off electrons.
Although this is a negative result, it rules out many so-called “leptophilic” models that predict frequent interactions between dark matter and electrons. But the most important consequence of the XENON100 analysis concerns the controversial claim of dark matter detection by researchers at the DAMA/LIBRA experiment in Italy, which is in conflict with the results from many other detectors such as the Cryogenic Dark Matter Search. Leptophilic dark matter was proposed as a viable explanation for this discrepancy, since exclusions from other experiments would not directly apply. However, the new results from XENON100 firmly rule out this possibility.

Meanwhile, dark energy explains our observation that the universe is expanding at an accelerating rate. Unlike normal matter, dark energy has a negative pressure, which allows gravity to be repulsive, driving the galaxies apart. One of the most promising dark energy candidates is a so-called “chameleon field”.

In many dark energy models, we would expect to see significant effects on both laboratory and cosmological scales. However, the attractive feature of a chameleon field is that its impact depends on the environment. At small scales, such as on Earth, the density of matter is high and the field is effectively “screened out”, allowing chameleons to evade our detectors. However, in the vacuum of space, the matter density is tiny and the field can drive the cosmic acceleration.

Until now, experiments have only used relatively large detectors, failing to observe chameleons as the density of matter is too high. However, it was recently proposed that an “atom interferometer”, operating on microscopic scales, could be used to search for chameleons. This consists of an ultra-high vacuum chamber containing individual atoms and simulates the low-density conditions of empty space so that screening is reduced.

In the second report, researchers implement this idea for the first time.
Their experiment works by dropping caesium atoms above an aluminium sphere. Using sensitive lasers, the researchers could then measure the forces on the atoms as they were in free fall. The results were perfectly consistent with gravity alone and no chameleon-induced force. This implies that if chameleons exist, they must interact more weakly than we previously thought, narrowing the search for these particles by a thousand times compared to previous studies. The team are hoping that their innovative technique will help them to hunt down chameleons or other dark energy particles in a future experiment.

Both of these studies demonstrate how laboratory experiments can answer fundamental questions about the nature of the cosmos. But most importantly, they raise hope that we will one day track down these tantalising substances that make up a whopping 95% of our universe.
Wisconsin Fast Plants are a patented variety of rapid-cycling Brassica rapa developed by Dr. Paul Williams at the University of Wisconsin-Madison as a research model for studies in plant disease. Fast Plants live their whole lives in 35 to 45 days; perfect timing for science classes as well as plant geneticists. School children have used the little mustard relative for studies in biology for over 20 years. Fast Plant seeds lie dormant until a combination of warmth, moisture and light activates hormones that trigger growth in the embryo. The growing embryo breaks out of the moisture-weakened seed covering and sends a root growing down into the soil. Within one or two days, the cotyledons (seed leaves) rise past the surface and open into the first leaves of the plant, which begin to photosynthesize, using light to produce food for the new plant.
Growth and Development
For the first 12 days of its life, the Fast Plant produces plant tissue: “true” leaves to gather the large amount of light that the adult plant needs, a stem up to 8 inches tall, and branches. The plant’s size, the shape of its leaves, its branching habits and other characteristics define the plant as a Brassica rapa. At about two weeks of age, Fast Plants begin to flower. They produce specialized leaves called sepals that shelter flowers containing pistil and stamens--the sexual organs of an angiosperm. Fast Plants form flowers and bloom within only a few days. Fast Plants need temperatures below 80 degrees Fahrenheit to produce viable flowers; hot temperatures produce sterile inflorescences. Once the flowers are open, the pollen must be transported from the anthers on one plant to the stigma on another--Fast Plants do not self-pollinate. Most plants depend on insects, but Fast Plants have the added benefit of hand pollination by human students wielding “bee sticks.” True to their fast life cycle, each Brassica rapa flower must be pollinated within a few days, before it fades.
Fertilization and Seed Development
Pollen captured by the stigma sends a tube down to the plant’s ovary where, if it reaches and fertilizes the egg successfully, a zygote forms the beginning of a new plant embryo. Fertilization must take place within 24 hours of pollination, and within five days the flower’s petals, designed to attract pollinators like bees, will fall away. The ovary forms a protective cover as a shell forms around each fertilized egg. Embryos are packed with food to carry them through their dormancy. The process, called embryogenesis, takes approximately 20 days.
Seed Distribution and Death
Fast Plants are annuals, which means that the plant lives for one season and dies after it has set seed. As the plant dries, the plant ovary is pulled open by shrinking tissue, dropping ripe seeds on the ground and into the wind. Those that land where they have the right conditions will germinate and grow into new plants.
Lymphoma is a cancer (neoplasia) that affects lymph nodes and other organs containing lymphoid tissue. In domestic dogs, the term typically is used to refer to malignant multicentric lymphoma, also called lymphosarcoma, which is a progressive, multisystemic disease caused by overgrowth of certain cells in the bone marrow, thymus, lymph nodes, liver, spleen and/or other tissues. Multicentric lymphoma is the most common lymphoma in domestic dogs. However, localized forms of lymphoma can also occur in dogs, including lymphoma of the central nervous system (CNS lymphoma), chest (mediastinal lymphoma), skin (cutaneous lymphoma), mouth and gums (oral cavity lymphoma) and gastrointestinal tract (alimentary lymphoma; affects the stomach, small intestine, large intestine (colon) and/or rectum). Lymphoma can also localize to the eyes, kidneys, liver and bone.
Causes of Canine Lymphoma
The causes of canine lymphoma are not known. However, there appears to be a genetic component to this disease, because certain breeds are disproportionately affected. Most lymphomas probably occur secondary to some random genetic mutation or other abnormal chromosomal recombination event. Many authorities suggest that these genetic changes can be caused or exacerbated by chronic retroviral infection, immune system compromise or electromagnetic radiation. They also may be caused by exposure to environmental carcinogens such as household cleaners, agricultural chemicals, herbicides or second-hand smoke, although these theories have not yet been proven. There is no known way to prevent canine lymphoma. Canine lymphoma is common and can be a potentially fatal disease in domestic dogs. Fortunately, aggressive chemotherapy in combination with other protocols has proven successful in many cases in achieving remission, especially in cases of multicentric lymphoma, which is by far the most prevalent form of lymphoma in dogs. Unfortunately, lymphoma does tend to be progressive.
For some reason, female dogs seem to respond better to treatment than do males, and small dogs seem to respond better than large dogs. Treatment is not recommended for female dogs during pregnancy. Noticeable signs of lymphoma in dogs are typically nonspecific and highly variable, depending upon which form of lymphoma is involved (multicentric, central nervous system, cutaneous, gastrointestinal).
Symptoms of Canine Lymphoma
The symptoms of lymphoma commonly mimic the symptoms of many other diseases or disorders. Most owners of dogs with multicentric or disseminated lymphoma first find pronounced enlargement of the lymph nodes on the underside of their dog's neck, beneath and slightly behind the chin. These are the submandibular lymph nodes (the mandible is the lower jaw bone). Affected dogs normally do not seem painful when their submandibular lymph nodes are palpated and show no other unusual symptoms. Other signs that owners may notice include one or more of the following:
- Loss of appetite (inappetence; anorexia)
- Dark tarry stool (melena; digested blood showing up in the stool)
- Increased thirst and intake of water (polydipsia)
- Increased volume of urine (polyuria)
- Difficulty breathing; shortness of breath (dyspnea)
- Skin nodules or masses (single or multiple)
- Bruised or ulcerated skin lesions
- Hair loss (alopecia; uncommon)
- Itchiness (pruritus; uncommon)
- Neurological signs: circling, disorientation, lack of coordination (ataxia), seizures, behavior changes, vision abnormalities
Multicentric lymphoma - usually shows up first as painless but enlarged peripheral lymph nodes. Owners may see or feel these in areas under the jaw, in the armpits, in the groin area or behind the knees. Enlargement of the liver and/or spleen can also occur, causing the abdomen to distend. This is the most common form of lymphoid cancer in dogs.
Gastrointestinal (alimentary) lymphoma - is a malignant form of cancer that can show up anywhere along the gastrointestinal tract (stomach, small intestine, large intestine, rectum). Clinical signs of gastrointestinal lymphoma include vomiting, weight loss, lethargy, depression, diarrhea and melena. Low serum albumin levels and elevated blood calcium levels commonly accompany alimentary lymphoma, although these can only be detected by veterinary evaluation of blood samples. This is the second most frequent form of lymphoma in dogs. Mediastinal lymphoma - where the cancer is localized to tissues in the chest cavity - can cause fluid to build up around the lungs. This can lead to coughing and labored breathing (dyspnea), mimicking the signs of congestive heart failure. Lymphoma of the skin (cutaneous lymphoma) - is uncommon in dogs. When it does occur, it usually shows up with hair loss (alopecia) and visible bumps on the skin. It can also be itchy (pruritic) and vary widely in appearance, ranging from a single lump to large areas of bruised, ulcerated and/or hairless skin. Lymphoma of the central nervous system (CNS) - is very uncommon in dogs. When lymphoma is localized in the CNS, dogs typically present with neurological signs such as circling, seizures, behavior changes and incoordination.
Dogs at Increased Risk
Lymphoma is most common in middle-aged to older dogs, although dogs of any age can be affected. There is no recognized gender predisposition for this disease. However, some breeds reportedly have an increased risk of developing lymphoma, including the Golden Retriever, Basset Hound, German Shepherd, Boxer, Scottish Terrier, Airedale Terrier, Bulldog, Poodle and Saint Bernard. A strong familial association has been established in some lines of Bull Mastiffs, Rottweilers and Otter Hounds. Pomeranians and Dachshunds reportedly have a decreased risk of developing lymphoma.
The reasons for these breed differences in risk of lymphoma development are not well understood. There may be an association between canine lymphoma and exposure to certain environmental herbicides, household or agricultural chemicals, smoke and/or electromagnetic radiation, although the reason for the connection remains unclear. Dogs living in industrial areas where paints, solvents or other chemicals are common tend to have a higher incidence. Lymphoma typically causes very general clinical signs in domestic dogs, which can mimic symptoms of viral or bacterial infection and a number of other diseases. However, canine lymphoma is not particularly difficult to diagnose, as long as the dog's owner is able to proceed with and complete the diagnostic process.
How Lymphoma is Diagnosed in Dogs
The initial database for a dog presenting with nonspecific symptoms of illness first involves a thorough physical examination and a complete history. Routine blood work and a urinalysis are also typically part of an initial work-up. The complete blood count may disclose anemia and elevated immature white blood cells, which are suggestive of lymphoma. The serum chemistry panel may show elevated blood calcium levels, abnormal liver enzyme levels and kidney abnormalities. The most reliable way to diagnose lymphoma in dogs is to take samples from enlarged lymph nodes and/or other affected organs. Usually, the attending veterinarian will recommend a fine needle aspirate (FNA) of one or more enlarged peripheral lymph nodes. This simple procedure involves inserting a small needle into the suspicious lymph nodes and withdrawing fluid and cells through an attached syringe. The sample is then expressed onto a glass slide and examined under a microscope to identify any cellular abnormalities. This process, called cytology, can in many cases be definitively diagnostic of lymphoma.
If FNA and cytology are inconclusive, and often even if they point strongly to lymphoma, biopsies of one or more enlarged lymph nodes normally will be taken. A biopsy involves surgically removing pieces of tissue, rather than only sampling cells, from the questionable organs or areas. Biopsies typically require heavy sedation, and sometimes general anesthesia. Biopsy samples are then submitted to a pathology laboratory, where they are processed and evaluated for abnormalities by a technique called histopathology. Biopsy and histopathologic tissue evaluation are the gold standard for diagnosing canine lymphoma. Suspicious lymph nodes can also be entirely removed surgically, for more thorough histopathologic examination. Bone marrow aspiration or biopsy can be helpful in assessing whether lymphoma has spread widely throughout the dog's body. Chest and abdominal radiographs (X-rays), together with ultrasound examinations, are especially helpful in identifying abnormally enlarged lymph nodes, affected organs (especially liver and/or spleen) and isolated masses. A cerebrospinal fluid (CSF) tap can be useful if the dog is showing neurological signs. A genetic screening test for canine lymphoma has recently become commercially available and apparently is based on genetic markers that are identifiable from a blood sample. Other advanced diagnostic tests for lymphoma include immunocytochemistry, immunohistochemistry, flow cytometry and PCR (polymerase chain reaction) amplification of chromosome sequences. Even though lymphoma usually causes nonspecific symptoms, it is fairly easy for veterinarians to diagnose if there are no significant financial or other constraints on the part of the dog's owner. Chemotherapy is the go-to treatment for canine lymphoma. In most cases, the cancerous lymphatic cells are distributed throughout the dog's body, as are the chemotherapeutic drugs used to destroy them. The objective therapeutic goal is to achieve complete remission of the cancer. 
Subjectively, the goal of treatment is to restore the patient's pain-free quality of life for as long as possible. Chemotherapy protocols are complicated and rapidly evolving. A veterinary oncologist (cancer specialist) is the best person to discuss and advise owners about treatment options for canine lymphoma. A veterinarian normally will "stage" lymphoma to help the dog's owner decide on a treatment protocol. The stages of lymphoma in dogs basically are as follows:
Stage I - only one lymph node (or lymphoid tissue in one organ) is involved.
Stage II - multiple lymph nodes or a chain of lymph nodes in a localized area are involved.
Stage III - widespread, generalized lymph node involvement; most or all peripheral lymph nodes are affected.
Stage IV - any or none of the above, plus liver and/or spleen involvement.
Stage V - any or none of the above, with bone marrow involvement, blood involvement or involvement of any non-lymphoid organ.
Each stage can further be classified into substage A (the dog has no observable symptoms of illness) or substage B (the dog is showing signs of illness, such as loss of appetite, lethargy, weight loss, or the like).
Treatment Options for Lymphoma in Dogs
Chemotherapy is defined as the treatment of illness or disease using chemical agents. Some chemotherapeutic medications can be given orally, while others must be given intravenously (IV) on an inpatient basis. Moreover, chemotherapeutic protocols may involve administration of only one drug or a combination of drugs. Multi-agent chemotherapy typically results in better remission rates and a more rewarding overall outcome than does single agent therapy. Chemotherapy targets rapidly-dividing cells. Treatment protocols are rapidly changing, and chemotherapeutic drugs can have severe and potentially fatal side effects.
Current protocols for treating canine multicentric lymphoma can involve use of a number of different drugs, including cyclophosphamide, vincristine, prednisone, L-asparaginase and doxorubicin, among others. Other chemotherapy drugs, such as chlorambucil, lomustine, cytosine arabinoside and mitoxantrone, are sometimes used in the treatment of lymphoma as well, either singly or in addition to other drugs. It is extremely important to closely monitor white blood cell counts and remission status throughout the course of chemotherapy. While used much less commonly than chemotherapy, radiation therapy is being explored to treat lymphoma in combination with drug treatment. Early research suggests that certain chemotherapeutic protocols, used in combination with radiation, improve remission rates and extend the disease-free interval in dogs with multicentric malignant lymphoma. Dogs with lymphoma isolated to a single or several lymph nodes, and those with focal gastrointestinal lymphoma, may be treated successfully by surgical removal of the lymph node or mass followed by chemotherapy and/or radiation. Chemotherapy with or without radiation treatment has improved survival times in dogs suffering from mediastinal lymphoma. Cutaneous lymphoma can be treated with single or multi-agent chemotherapy, although the disease tends to become refractory to treatment and the results are much less rewarding. Stem cell transplantation is commonly used to treat people with lymphoma and is another possible treatment option for dogs. In fact, much of the basic research on stem cell transplantation was generated from dogs. When cost is a restricting factor, steroid drugs such as prednisone can be prescribed to help alleviate the symptoms of lymphoma, although this will not significantly affect the survival rate. Prednisone may also cause the cancer to become resistant to other chemotherapeutic agents and typically is only used if more aggressive treatment is not an option.
Some cancers do not respond particularly well to chemotherapy. Fortunately, canine lymphoma – especially the common multicentric form – usually does. Radiation treatment is available for some types of cancer as well and is being used increasingly in conjunction with chemotherapy, with the hope of improving remission rates. During any chemotherapy treatments, the patient will need frequent blood tests and monitoring to be sure that the treatment is not adversely affecting his or her organs or overall health. Of course, in pets and in people, there can be a number of unpleasant side effects from chemotherapy and/or radiation treatment, including severe gastrointestinal upset, allergic reactions and hair loss, among others. Current treatment protocols can help to extend a dog's life after a diagnosis of lymphoma. Dogs with lower stage lymphoma have a better prognosis than those with higher stage disease. Unfortunately, lymphoma is almost always progressive and, ultimately, fatal. Treatment of lymphoma rarely cures the disease. Instead, it hopefully will make the patient feel a bit better, and live a bit longer, with a significantly improved quality of life if remission is achieved.
Principle of Operation
A current transformer is defined as an instrument transformer in which the secondary current is substantially proportional to the primary current (under normal conditions of operation) and differs in phase from it by an angle which is approximately zero for an appropriate direction of the connections. This highlights the accuracy requirement of the current transformer, but also important is the isolating function, which means that no matter what the system voltage, the secondary circuit needs to be insulated only for a low voltage. The current transformer works on the principle of variable flux. In the ideal current transformer, the secondary current would be exactly equal (when multiplied by the turns ratio) and opposite to the primary current. But, as in the voltage transformer, some of the primary current or the primary ampere-turns are utilized for magnetizing the core, thus leaving less than the actual primary ampere-turns to be transformed into the secondary ampere-turns. This naturally introduces an error in the transformation. The error is classified into current ratio error and phase error. Typical terms used for specifying a current transformer are:
Rated primary current
The value of current which is to be transformed to a lower value. In CT parlance, the load of the CT refers to the primary current.
Rated secondary current
The current in the secondary circuit and on which the performance of the CT is based. Typical values of secondary current are 1 A or 5 A.
Rated burden
The apparent power of the secondary circuit in volt-amperes expressed at the rated secondary current and at a specific power factor.
Composite error
The RMS value of the difference between the instantaneous primary current and the instantaneous secondary current multiplied by the turns ratio, under steady state conditions.
Accuracy limit factor
The value of primary current up to which the CT complies with composite error requirements.
This is typically 5, 10 or 15, which means that the composite error of the CT has to be within specified limits at 5, 10 or 15 times the rated primary current.
Short time rating
The value of primary current (in kA) that the CT should be able to withstand both thermally and dynamically without damage to the windings, with the secondary circuit short-circuited. The time specified is usually 1 or 3 seconds.
Class PS/X CT
In balanced systems of protection, CTs with a high degree of similarity in their characteristics are required. These requirements are met by Class PS (X) CTs. Their performance is defined in terms of a knee-point voltage (KPV), the magnetizing current at the knee-point voltage (or at 1/2 or 1/4 the knee-point voltage), and the resistance of the CT secondary winding corrected to 75 °C. Accuracy is defined in terms of the turns ratio.
Knee point voltage
The point on the magnetizing curve where an increase of 10% in the flux density (voltage) causes an increase of 50% in the magnetizing force (current).
Summation CT
When the currents in a number of feeders need not be individually metered but summated to a single meter or instrument, a summation current transformer can be used. The summation CT consists of two or more primary windings which are connected to the feeders to be summated, and a single secondary winding, which feeds a current proportional to the summated primary current. A typical ratio would be 5+5+5/5 A, which means that three primary feeders of 5 A are to be summated to a single 5 A meter.
Core balance CT (CBCT)
The CBCT, also known as a zero sequence CT, is used for earth leakage and earth fault protection. The concept is similar to the RVT. In the CBCT, the three-core cable or three single cores of a three-phase system pass through the inner diameter of the CT. When the system is fault free, no current flows in the secondary of the CBCT.
When there is an earth fault, the residual current (zero phase sequence current) of the system flows through the secondary of the CBCT and this operates the relay. In order to design the CBCT, the inner diameter of the CT, the relay type, the relay setting and the primary operating current need to be furnished.
Interposing CTs (ICTs)
Interposing CTs are used when the ratio of transformation is very high. They are also used to correct for phase displacement in differential protection of transformers.
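Two of the definitions above, composite error and the summation CT ratio, lend themselves to a short numerical sketch. The code below is a hypothetical illustration only: the function names, the 100/5 A ratio and the 1% secondary error are invented for the example and are not taken from any CT standard.

```python
import math

def composite_error_percent(i_primary, i_secondary, turns_ratio):
    """Composite error: the RMS of the difference between the instantaneous
    primary current and the turns-ratio-scaled secondary current, expressed
    here as a percentage of the primary RMS current."""
    n = len(i_primary)
    rms_diff = math.sqrt(sum((ip - turns_ratio * isec) ** 2
                             for ip, isec in zip(i_primary, i_secondary)) / n)
    rms_primary = math.sqrt(sum(ip ** 2 for ip in i_primary) / n)
    return 100.0 * rms_diff / rms_primary

def summation_ct_secondary(feeder_currents, feeder_rating=5.0, meter_rating=5.0):
    """A 5+5+5/5 A summation CT: the secondary current is proportional to the
    sum of the feeder currents, scaled so that all feeders at rated current
    give the rated meter current."""
    full_scale = len(feeder_currents) * feeder_rating
    return sum(feeder_currents) * meter_rating / full_scale

# Example: a 100/5 A CT (turns ratio 20) whose secondary reads 1% low,
# sampled over one steady-state cycle.
n = 1000
i_p = [100 * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(n)]
i_s = [0.99 * ip / 20 for ip in i_p]
print(composite_error_percent(i_p, i_s, 20))     # ≈ 1.0 (percent)
print(summation_ct_secondary([5.0, 5.0, 5.0]))   # 5.0 A when all feeders at rating
```

A real composite error measurement is referenced to the rated primary current and includes harmonics of the magnetizing current; the sketch only captures the RMS-of-difference idea in the definition.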
The p-n junction is one of the most important structures in today’s semiconductor technology, used in transistors, FETs and many types of integrated circuits. In our previous articles, we have seen p-n junction formation from a p-type and n-type semiconductor, and semiconductor electronics. We have also learned about diffusion current, drift current and the depletion region. Here we are going to explain the p-n junction diode characteristics. P-n junction diode: To understand the behavior of the p-n junction we make it conducting by applying an external voltage over a range of 0 V, 5 V, 10 V and determine how the current passed through the p-n junction varies with increasing voltage levels. We usually connect two metallic contacts at the ends of the p-n junction to apply the external voltage. A p-n junction with two metallic contacts is known as a p-n junction diode or semiconductor diode. As we know, a diode is a specialized electronic component with two electrodes, called the anode and cathode, that conducts current in one direction. As shown in the figure above, the direction of the arrow indicates the direction of conventional current. The process of applying an external voltage to the p-n junction is known as biasing. We generally bias a p-n junction in the following ways:
- Forward bias: In forward bias the negative terminal is connected to the n-type material and the positive terminal is connected to the p-type material across the diode, which decreases the built-in potential barrier.
- Reverse bias: In reverse bias the negative terminal is connected to the p-type material and the positive terminal is connected to the n-type material across the diode, which increases the built-in potential barrier.
- Zero bias: In zero bias no external voltage is applied.
Pn junction diode biasing: In modern electronics the p-n junction possesses special properties that are useful in many applications. We can bias a p-n junction in the following ways, with or without an external voltage.
Three possible biasing conditions and two operating regions for the typical p-n junction diode are explained below. (i) Zero bias diode: In zero bias, or thermal equilibrium, the junction potential provides higher potential energy to the holes on the p-side than on the n-side. When we short the terminals of the junction diode, a few majority carriers on the p-side with sufficient energy start to travel across the depletion region. With the help of these majority charge carriers, a current starts flowing in the diode, known as the forward current. Similarly, minority charge carriers on the n-side move across the depletion region in the reverse direction, giving rise to a reverse current. The potential barrier opposes the movement of electrons and holes across the junction while permitting the minority carriers to drift across the p-n junction. Therefore, the potential barrier helps minority charge carriers in the p-type and n-type material to drift across the p-n junction. When these opposing currents are equal, equilibrium is established and zero net current flows in the circuit. That’s why this junction is said to be in a state of dynamic equilibrium. (ii) Forward biased diode: Forward biasing a p-n junction diode is very simple: we connect its positive terminal to the p-side and its negative terminal to the n-side of the p-n junction diode. When an external voltage is applied, the majority charge carriers in the N and P regions are attracted towards the PN junction and the width of the depletion layer decreases with the diffusion of the majority charge carriers. An electric field is thus induced in the direction opposite to that of the built-in field. When the forward bias is greater than the built-in potential, the depletion region becomes very much thinner, so that a large number of majority charge carriers can cross the PN junction and conduct an electric current.
The current flowing up to the built-in potential is known as the knee current. (iii) Reverse bias diode: In the reverse bias condition, we connect the positive terminal to the n-side and the negative terminal to the p-side of the p-n junction diode. When an external voltage is applied, the positive terminal attracts the electrons away from the junction on the N side and the negative terminal attracts the holes away from the junction on the P side, which results in an increase in the width of the potential barrier. With the increase in the potential barrier width, the electric field at the junction also starts increasing and the p-n junction acts as a high resistance. Minority charge carriers generated at the depletion region cause a small leakage current in the junction diode. This indicates that the increase in the width of the depletion layer presents a high-impedance path which acts as an insulator. When the reverse bias potential across the p-n junction diode increases far enough, reverse breakdown occurs and the diode current is then controlled by the external circuit. If we increase the reverse bias further, the p-n junction diode becomes short-circuited due to overheating and the maximum circuit current flows in the PN junction diode. Pn junction diode characteristics: An ideal diode would show zero resistance in the forward direction and infinite resistance in the reverse direction; a real p-n junction diode is not a perfect diode. Before using this diode, it is necessary to know a little about its characteristics and properties with forward bias and reverse bias. To know its characteristics we plot a graph with voltage along the x-axis and current along the y-axis, which shows the behavior of the p-n junction diode in forward biasing and in reverse biasing. (i) Forward characteristic for a junction diode: As shown in the figure above, the V-I characteristic of a junction diode is not linear, i.e., not a straight line.
This nonlinear characteristic indicates that the resistance is not constant during the operation of the p-n junction. When forward bias is applied to the diode, a large current starts flowing through the low-impedance path. This current starts to flow above the knee point with only a small amount of external potential. If the current increases further it may damage the diode; to prevent this, a load resistor is used to limit the current and protect the device from damage. (ii) Reverse characteristic for a junction diode: In this type of biasing the current is low until breakdown is reached, and hence the diode acts like an open circuit. The characteristic curve of this diode is shown in the fourth quadrant of the figure above. When the reverse bias voltage reaches the breakdown voltage, the reverse current increases enormously. In the reverse direction, a perfect diode would not allow any current to flow.
Pn junction diode equation: The p-n junction diode equation for an ideal diode is given below:
I = I_S[exp(eV/k_BT) – 1]
I_S = reverse saturation current
e = charge of the electron
k_B = Boltzmann constant
T = temperature
For a real p-n junction diode, the equation becomes
I = I_S[exp(eV/ηk_BT) – 1]
Here, η = emission coefficient (ideality factor), a number between 1 and 2 which typically increases as the current increases.
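The diode equation above is easy to evaluate numerically. The sketch below is a minimal illustration, assuming a reverse saturation current of 1 pA and room temperature; those particular values are chosen for the example, not taken from the article.

```python
import math

def diode_current(v, i_s=1e-12, eta=1.0, temp_k=300.0):
    """Shockley diode equation I = I_S * (exp(eV / (eta * k_B * T)) - 1).
    i_s: reverse saturation current (A); eta: emission coefficient (1 to 2)."""
    k_b = 1.380649e-23   # Boltzmann constant, J/K
    e = 1.602176634e-19  # elementary charge, C
    return i_s * (math.exp(e * v / (eta * k_b * temp_k)) - 1.0)

print(diode_current(0.6))           # forward bias: milliamp-scale current
print(diode_current(-1.0))          # reverse bias: saturates near -1e-12 A
print(diode_current(0.6, eta=2.0))  # higher eta gives far less current at the same V
```

Note how the current only becomes appreciable once the forward voltage approaches the knee region, which is exactly the nonlinear V-I behavior described above.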
Melt ponds develop over Arctic sea ice during the melting season from the accumulation of melt water from ice and snow. These have become increasingly important over the last few decades because they have been more prevalent and absorb much more solar energy due to their dark colour compared to the highly reflective white sea ice (Perovich et al., 2002). Where ponds form, the ice beneath becomes thinner due to increased melting. Towards the end of the summer, the air temperature drops and a thin layer of ice forms over melt ponds. The ponds’ melt water trapped in the ice acts as a heat store and does not allow the underlying ice to start thickening until all the pond’s water is frozen. Ponds are up to 1.5 m deep and it can take over two months to freeze their volume of water. Considering that ponds cover up to 50% of the sea ice extent, their impact cannot be neglected (Flocco et al., 2015).
(Image credit: Donald Perovich)
A strong negative correlation exists between the change in successive mean winter ice thicknesses and the length of the intervening melt season, suggesting that summer melt processes play a dominant role in determining mean Arctic sea ice thickness for the following winter (Laxon et al., 2003). Another indication of the importance of melt ponds in explaining the thinning of sea ice is that melt ponds are more prevalent in the Arctic than in the Antarctic, where the sea ice thinning is less striking. Ponds are rather irregular in shape but occur at a higher percentage over thin young ice: since the area of young ice is increasing (relative to the total amount of ice, which is instead decreasing), the impact of melt ponds will also become increasingly important. This will lead to a positive feedback effect in which thin ice will start thickening later in winter and will possibly be a preferential area for the formation of melt ponds in the following spring.
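The albedo contrast driving this feedback can be illustrated with a back-of-the-envelope calculation. The numbers below are assumed round values for illustration only (bare melting ice near 0.65, a mature pond near 0.25, an incident summer flux of 300 W/m²), not figures from the studies cited here.

```python
def absorbed_flux(incident_w_m2, albedo):
    """Shortwave flux absorbed by a surface that reflects a fraction
    `albedo` of the incident sunlight."""
    return incident_w_m2 * (1.0 - albedo)

# Assumed illustrative values:
incident = 300.0                       # incident summer shortwave flux, W/m^2
ice = absorbed_flux(incident, 0.65)    # bare melting ice -> 105 W/m^2 absorbed
pond = absorbed_flux(incident, 0.25)   # mature melt pond -> 225 W/m^2 absorbed
print(pond / ice)                      # ponds absorb more than twice as much
```

Even with these rough numbers, a ponded surface absorbs over twice the solar energy of bare ice, which is why pond fraction matters so much for summer melt.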
Furthermore, corresponding to where melt ponds form, lenses of fresh water form under the sea ice cover, impacting the freezing point of water at the ice–ocean interface. At the beginning of the season sea ice is impermeable, so once ponds form they can be above sea level. When they start melting the ice, it becomes more permeable, and when the ponds are fully developed they are in hydrostatic balance with the ocean, so they drain to sea level. Schemes handling melt ponds have only recently been included in global circulation models and are rather crude: the melt water was assumed to be flushed into the ocean without dwelling on the sea ice. Recent studies have shown that the lack of a melt pond parameterization can give an overestimation of sea ice thickness of up to 40% during summer (Flocco et al., 2010, 2012). Model results have shown a good ability to forecast the minimum September ice extent, relating it to the melt pond area calculated by the model in May (Schröder et al., 2014). This is one demonstration of how we have used the principles of physics to understand the changes we have observed in the cryosphere.
Flocco, D., D. L. Feltham, and A. K. Turner, 2010. Incorporation of a physically based melt pond scheme into the sea ice component of a climate model. J. Geophys. Res., 115, C08012, doi:10.1029/2009JC005568.
Flocco, D., D. Schröder, D. L. Feltham, and E. C. Hunke, 2012. Impact of melt ponds on Arctic sea ice simulations from 1990 to 2007. J. Geophys. Res., doi:10.1029/2012JC008195.
Flocco, D., D. L. Feltham, E. Bailey, and D. Schröder, 2015. The refreezing of melt ponds on Arctic sea ice. J. Geophys. Res. Oceans, 120, 647–659.
Laxon, S., N. Peacock and D. Smith, 2003. High interannual variability of sea-ice thickness in the Arctic region. Nature, (425) October 30, 947-950.
Perovich, D.K., W.B. Tucker III, and K.A. Ligett, 2002. Aerial observations of the evolution of ice surface conditions during summer, J. Geophys.
Res., 107 (C10), 8048, doi:10.1029/2000JC000449. Schröder D., D. L. Feltham, D. Flocco, M. Tsamados, 2014. September Arctic sea-ice minimum predicted by spring melt-pond fraction. Nature Clim. Change, DOI: 10.1038/NCLIMATE2203.
Saturn’s majestic rings are the remnants of a long-vanished moon that was stripped of its icy outer layer before its rocky heart plunged into the planet, a new theory proposes. The icy fragments would have encircled the solar system’s second largest planet as rings and eventually spalled off small moons of their own that are still there today, says Robin Canup, a planetary scientist at the Southwest Research Institute in Boulder, Colorado. “Not only do you end up with the current ring, but you can also explain the inner ice-rich moons that haven’t been explained before,” she says. Canup’s paper appears online December 12 in Nature. The origin of Saturn’s rings, a favorite of backyard astronomers, has baffled professional scientists. Earlier ideas about how the rings formed have fallen into two categories: Either a small moon plunged intact into the planet and shattered, or a comet smacked into a moon, shredding the moon to bits. The problem is that both scenarios would produce an equal mix of rock and ice in Saturn’s rings — not the nearly 95 percent ice seen today. Canup studied what happened in the period just after Saturn (and the solar system’s other planets) coalesced from a primordial disk of gas and dust 4.5 billion years ago. In previous work, she had shown that moon after moon would be born around the infant gas giants, each growing until the planet’s gravitational tug pulled it in to its destruction. Moons would have stopped forming when the disk of gas and dust was all used up. In the new study, Canup calculated that a moon the size of Titan — Saturn’s largest at some 5,000 kilometers across — would begin to separate into layers as it migrated inward. Saturn’s tidal pull would cause much of the moon’s ice to melt and then refreeze as an outer mantle. As the moon spiraled into the planet, Canup’s calculations show, the icy layer would be stripped off to form the rings. 
A moon so large would have produced rings several orders of magnitude more massive than today’s, Canup says. That, in turn, would have provided a source of ice for new, small moons spawned from the rings’ outer edge. Such a process, she says, could explain why Saturn’s inner moons are icy, out to and including the 1,000-kilometer-wide Tethys, while moons farther from the planet contain more rock. “Once you hear it, it’s a pretty simple idea,” says Canup. “But no one was thinking of making a ring a lot more massive than the current ring, or losing a satellite like Titan. That was the conceptual break.” “It’s a big deal,” agrees Luke Dones, also of the Southwest Research Institute, who has worked on the comet-makes-rings theory. “It never occurred to me that the rings could be so much more massive than they are now.” Another recent study supports the notion that today’s rings are the remnants of massive ancient rings of pure ice. In a paper in press at Icarus, Larry Esposito, a planetary scientist at the University of Colorado at Boulder, calculates that more massive rings are less likely to be polluted by dust, and hence could still be as pristine as they appear today even after 4.5 billion years. Some questions still linger about Canup’s model, says Dones, like why some of Saturn’s inner icy moons have more rock in them than others. The theory will be put to the test in 2017, when NASA’s Cassini mission finishes its grand tour of Saturn by making the best measurements yet of the mass of the rings. Researchers can use those and other details to better tease out how the rings evolved over time.
Psoriasis is a common skin disorder, affecting approximately 2% of the world’s population. Skin cells on the affected areas of sufferers multiply up to 10 times faster than normal; as they pile up on the surface, they cause raised, silver-scaled patches on a red base. The reason for the rapid cell growth is unknown, but outbreaks are triggered by the immune system. Psoriasis can also affect the nails of sufferers, as well as their joints, causing arthritis. Psoriasis occurs in different forms:
- Plaque psoriasis is the most common form. Thick patches of psoriasis involve the scalp, elbows, lower back and knees in particular.
- In guttate psoriasis, small drop-like, scaly areas appear on the torso, limbs and scalp. Guttate psoriasis is often triggered by infections like tonsillitis.
- In pustular psoriasis, large and small blisters of non-infectious pus (pustules) form on the palms of the hands and soles of the feet, and sometimes over the entire body.
- In flexural or inverse psoriasis, the skin folds, like those in the groin, navel, armpits and under the breasts, are involved.
- Very rarely, psoriasis covers the entire body and produces exfoliative erythrodermic psoriasis, where the entire skin becomes inflamed. This form of psoriasis is serious because, like a burn, it keeps the skin from serving as a protective barrier against injury and infection. The patient loses heat and can go into cardiac failure, as it can cause a "high output" state. These patients may require hospitalisation.
- About 7% of people with psoriasis also have joint inflammation that produces symptoms of arthritis. This condition is called psoriatic arthritis.
Recent research indicates that psoriasis is a disorder of the immune system. A type of white blood cell, called a T cell, helps protect the body against infection and disease. It seems that abnormalities in the so-called T helper cells and the way that they interact with skin cells are associated with psoriasis.
What precipitates the change is unknown. Although not contagious, psoriasis tends to run in families. It is undoubtedly a complex genetic disease. People of European descent are particularly susceptible, especially those with a blood relative who suffers from the disorder. Age of onset is either early (16–22 years) or late (57–60 years), and men and women are equally affected. An episode of psoriasis may result from a number of factors. Emotional stress is one – many patients suffering a flare-up report a recent emotional stressor, such as a new job or the death of a loved one. Severe sunburn, obesity and certain drugs can aggravate psoriasis. Commonly implicated drugs include the anti-malaria medication chloroquine, lithium, beta-blockers like propranolol and metoprolol, medication taken to treat high blood pressure like ACE-inhibitors, anti-inflammatory drugs and almost any medicated ointment or cream. Streptococcal infections (especially in children) and injured skin (bruises and scratches) can also stimulate the formation of new plaques. Alcohol consumption and smoking clearly make psoriasis worse. Psoriasis usually starts as one or more small psoriatic plaques – dark-pink, raised patches of skin with overlying silvery flaky scales – usually on the scalp, knees, elbows, back and buttocks. Sometimes the eyebrows, armpits, navel and groin may also be affected. Usually, psoriasis produces only flaking. Even itching is uncommon. On the scalp, flaking may be mistaken for severe dandruff, but the patchy nature of psoriasis, with flaking areas interspersed among completely normal ones, distinguishes the disease from dandruff. Although the first plaques may clear up by themselves, others may soon follow. Some plaques may remain thumbnail-sized, but in severe cases, psoriasis may spread to cover large areas of the body. When flaking areas heal, the skin may look completely normal and hair growth is unchanged.
However, healing psoriasis may leave behind skin changes, particularly pigment changes. Most people with limited psoriasis suffer few problems beyond the flaking, although the skin’s appearance may be embarrassing. Psoriasis can also involve fingernails and toenails, causing pitting, discolouring and thickening, and sometimes even separating them from underlying tissue. Patients may also suffer from arthritis.
When to see a doctor
- If you suspect that you have psoriasis, you should see your doctor for prescription of appropriate treatment, and to be screened for arthritis.
- If you have psoriasis that flares up or is not responding to treatment.
- If you have psoriasis and develop symptoms of arthritis.
Psoriasis may be misdiagnosed at first because many other disorders can produce similar plaques and flaking. As psoriasis develops, the characteristic scaling pattern is usually easy for doctors to recognise, so diagnostic tests usually aren’t needed. However, to confirm a diagnosis, a doctor may perform a skin biopsy (removal of a skin specimen and examination under a microscope). This is not usually necessary. Although psoriasis may be stressful and embarrassing, most outbreaks are relatively benign – early treatment of the plaques will help prevent symptoms becoming more severe, and plaques generally disappear within weeks. Psoriasis is treated according to the severity of the disease and its responsiveness to initial treatments, including:
- Topical treatment
- Excimer laser
- Systemic treatment
The first stage of treatment is topical (medicines are applied to the skin). When a person has only a few small plaques, psoriasis generally responds quickly.
- Applying an emollient once or twice a day helps your skin retain moisture.
- Some doctors recommend salicylic acid ointment, which smoothes the skin by promoting the shedding of psoriatic scales.
- Ointments containing corticosteroids are effective, and can be made more effective if the area is wrapped in cellophane after applying them (only do this if advised to by your doctor). However, because they can have harmful side effects, you should be careful not to overuse them: overuse may thin the skin and cause the treatment to lose its efficacy.
- Coal-tar ointments and shampoos can alleviate symptoms, but many psoriasis patients seem vulnerable to the side effects – in particular folliculitis, a pimple-like rash affecting the hair follicles.
- Calcipotriol is a synthetic form of Vitamin D3 (this is not the same as Vitamin D supplements). It controls the excessive production of skin cells, and can help those who can’t tolerate some of the other creams. It works best in conjunction with phototherapy.
- Anthralin (dithranol) therapy is usually reserved for severe forms of psoriasis. If not properly applied, anthralin can irritate healthy skin and leave stains that can last several weeks. It is therefore not commonly used anymore.
- Tazarotene (a new topical Vitamin A derivative, or retinoid) is very useful for plaque and scalp psoriasis. It is applied at night. It may be an irritant, and the concurrent use of emollients is recommended.
- Tacrolimus can be used, especially for psoriasis of the face and skin folds.
Topical therapies are often used in combination with each other, or with other treatment modalities. Exposure to ultraviolet light, for example during the summer months, may help exposed regions of affected skin clear up spontaneously. Sunbathing can help to clear up the plaques on larger areas of the body (although this is not recommended due to the risk of developing sun-related skin cancers). For persistent, difficult-to-treat cases of psoriasis, ultraviolet (UV) light therapy may be prescribed, and is often extremely successful.
- UVB phototherapy is used to treat widespread psoriasis and lesions that resist topical treatment.
A light panel or light box is used, either at the doctor’s surgery or at home. Sometimes it is combined with topical treatments.
- PUVA treatment (UVA phototherapy with application or ingestion of substances called psoralens) can be used. Psoralen makes the skin extra sensitive to the effects of ultraviolet light. There is also a risk of UV-related skin cancers developing after treatment with UV light; PUVA seems to present a higher risk. This modality can be used to treat individual plaques of psoriasis. It can be very expensive.
For more severe forms of psoriasis, a doctor may prescribe internal medications. This is not a decision to be undertaken lightly, as most of these drugs can have severe side effects and require regular blood tests and monitoring.
- Methotrexate: Used to treat some forms of cancer, this drug interferes with the growth and multiplication of skin cells and suppresses the immune system. It can be effective in extreme cases but may cause liver damage or decrease the production of oxygen-carrying red cells, infection-fighting white blood cells and clot-enhancing platelets.
- Acitretin: This is a derivative of vitamin A. It has many side effects, the most concerning of which is that it causes birth defects if taken during pregnancy. In fact, pregnancy should be avoided for at least 2 years after completing treatment with this drug.
- Ciclosporin: This is a drug used to suppress the immune system in patients who have had kidney transplants. It has many side effects, and interacts with many other drugs.
Biological agents are a relatively new and exciting development in the treatment of psoriasis. They represent more targeted therapy than the traditional systemic medications. This group includes adalimumab, etanercept, infliximab and ustekinumab. Some result in an increased risk of developing infections. As these are relatively new drugs, we cannot be certain of their long-term effects.
People with psoriasis should try to avoid triggers like alcohol, smoking and stress. Patients with psoriasis are also thought to be more prone to suffering from conditions like strokes and heart attacks, so control of risk factors like blood pressure, diabetes and cholesterol is especially important. Previously reviewed by Dr Leonore R.J. van Rensburg, MBChB (UCT), M. Med. Dermatology (US) Updated by Dr B. Tod, MBBCh (Wits), Dermatology registrar, October 2011
Write a reflection paper (avoiding personal pronouns), describing
§ how you understand or make sense of the topic
§ the topic characteristics
§ applications of the topic concepts, knowledge, and skills
· Include how you anticipate the effects of the topic on you as a future teacher
I decided to write about decision making in the classroom. No two days in a classroom are alike, even if one were to teach the same grade for multiple years in the same room. A teacher is awarded a teaching certificate, and even with all the required classes and extra training NOTHING can prepare a teacher for what comes up on a day-to-day basis. For that reason, with the use of state standards, lesson plans, parent and staff meetings and other reasonable preparation, solutions can be more streamlined. Issues that are more complicated and uncomfortable involve other people. For example, when a child gets hurt, does a quick consoling satisfy the issue? Does the teacher fill out a document? Will a document simply be kept in her file or handed in? If it is handed in, does it reflect poorly on the teacher, in terms of lack of responsibility? Was the responsibility really that of someone else? Was the injury on the playground, when another teacher was on duty and failed to get involved? If so, how are the politics affected? While one might argue that asking questions is not necessarily addressing THE QUESTION, in actuality the questions are the answers. Classroom rules should be developed that address every conceivable issue that will arise. It is impossible to KNOW ALL THAT ... Posting of rules and anticipating all possible things that could happen in the classroom are discussed.
Dark Ocean Carbon Absorption Not Enough To Restrict Global Warming
April Flowers for redOrbit.com – Your Universe Online
A new study led by the University of Iowa shows that although microbes that live below 600 feet, where light doesn’t penetrate – the so-called “dark ocean” – might not absorb enough carbon to curtail global warming, they do absorb considerable amounts of carbon, meriting further study. The findings of this study were published in the International Society for Microbial Ecology’s ISME Journal. While many people are familiar with the concept of trees and grass absorbing carbon from the air, Tim Mattes, associate professor of civil and environmental engineering at the University of Iowa, said that bacteria and ancient single-celled organisms called “archaea” in the dark ocean hold between 300 million and 1.3 billion tons of carbon. “A significant amount of carbon fixation occurs in the dark ocean,” says Mattes. “What might make this surprising is that carbon fixation is typically linked to organisms using sunlight as the energy source.” Dark ocean organisms might not require sunlight to lock up carbon, but they do require an energy source. “In the dark ocean, carbon fixation can occur with reduced chemical energy sources such as sulfur, methane, and ferrous iron,” Mattes says. “The hotspots are hydrothermal vents that generate plumes rich in chemical energy sources that stimulate the growth of microorganisms forming the foundation for deep sea ecosystems.” Mattes and his colleagues studied hydrothermal vents in a volcanic caldera at Axial Seamount, an active volcano approximately 5,000 feet underwater in the Pacific Ocean and about 300 miles west of Cannon Beach, Oregon. During a 2011 cruise, sponsored by the National Science Foundation (NSF), Mattes’ colleague Robert Morris gathered data and collected samples used in the study.
“Using protein-based techniques, we observed that sulfur-oxidizing microorganisms were numerically dominant in this particular hydrothermal vent plume and also converting carbon dioxide to biomass, as suggested by the title of our paper: ‘Sulfur oxidizers dominate carbon fixation at a biogeochemical hot spot in the dark ocean,’” said Mattes, who conducted the research at the University of Washington School of Oceanography while on leave from UI. Because carbon fixation occurs on such a large scale in the dark ocean, scientists question the contribution of such activity to offsetting carbon emissions that contribute to global warming. The research team says that such speculation needs further study. “While it is true that these microbes are incorporating carbon dioxide into their cells in the deep ocean and thus having an impact on the global carbon cycle, there is no evidence to suggest that they could play any role in mitigating global warming,” he says. The primary value of the investigation, according to Mattes, is to better understand how microorganisms function in the dark ocean and to increase fundamental knowledge of global biogeochemical cycles.
For years, the world has run on fossil fuels, with humankind using these ancient non-renewable resources to power cars, airplanes, machinery, trains, home heating systems, and more. Modern technology is working toward developing renewable energy methods that would replace these fossil fuels as the main source of gas and power for the world, largely because fossil fuels have become a scarce and wildly expensive commodity. Worst of all, since fossil fuels were derived hundreds of millions of years ago by a long and laborious geological process (plants transformed into a spongy composite called peat, which in turn morphed into the fuels we use today), they are completely non-renewable. If the science of this interests you, be sure you check out Udemy’s introductory earth science course. There is no hope of more fossil fuels being created, or of scientists figuring out how to synthesize petroleum or coal in a lab through experimentation with plant matter. Because of this, mankind is in a tight spot as far as energy is concerned. Still, while scientists strive to create new alternative energy possibilities (or to increase the efficiency of existing ones, such as solar power), fossil fuels remain the most robust and effective means we have of powering vehicles and other machinery in reliable fashion. To understand precisely why that is, we need to look at the different types of fossil fuels and examine why they have long made for such effective sources of energy.
The Different Fossil Fuels
Though different interchangeable terms are occasionally used for each, there are three primary varieties of fossil fuels. These are coal, oil, and natural gas. Of the three types of fossil fuels, coal is the only one still in a solid state. It appears as chunks of midnight black rock, which are harvested from the Earth by workers in mining operations.
Coal is composed of five different elements: carbon, nitrogen, oxygen, hydrogen, and sulfur, with the distributions of those five elements varying depending on the piece of coal. In fact, because of these differing elemental make-ups, there are actually three different types of coal, each with different energy properties. The highest in energy content is anthracite coal, which is harder and has a higher distribution of carbon than the other varieties. The other two types of coal – lignite and bituminous – aren’t quite as energy-rich, but still have their uses. Lignite is high in oxygen and hydrogen instead of carbon, while bituminous occupies a sort of happy medium between the two extremes. Coal is a dynamic fossil fuel in terms of how it is used. Depending on the breakdown of coal uses that you look at, you might see the top uses listed as a generic “electricity generation” or “electrical utilities,” or you might see things broken down a bit into smaller categories. In any case, coal today is used for everything from producing steel and cement to keeping the lights on in homes and businesses. Oil, also called petroleum, is arguably the most often discussed form of fossil fuel in the world today, with every conversation about vehicular fuel economy and “arm and a leg” gas prices relating back to the near-universal value of this ancient fossil fuel. Just how ancient are most of our world’s petroleum reserves? Over 300 million years old, according to scientific consensus. Of that vast span, civilizations have been making use of oil for only about five or six millennia. From the Sumerians (who used oil to invent asphalt) to the Native Americans (who used it for treating wounds and waterproofing canoes), oil has a long history of maximizing efficiency and convenience for human civilization. Today, we think of oil as the fuel that we pump into our cars at gas stations, but refined gasoline is not what comes out of the ground at oil wells.
On the contrary, crude oil is the type of petroleum that occurs naturally. In the United States, we get our crude oil from multiple sources, though unfortunately, most of it is not produced domestically and must be purchased from the Middle East. Due to this fact, recent wars fought by the United States in that part of the world have caused many Americans to question current society’s reliance on fossil fuels and to urge a greater movement toward alternative energy development. Once crude oil arrives in the United States, it is taken to refineries, where it is processed into fuel that we can actually use. From any given gallon of crude oil, these refineries produce a range of different oil substances that are then used for different applications. A little less than half of the average barrel of oil is refined into gasoline, which is indeed the type of petroleum that we use to fuel our cars. However, other parts of the barrel are refined into oil for asphalt, jet fuel, kerosene, lubricants, and more. These different categories showcase how widespread the use of oil truly is.
- Natural Gas
The final variety of fossil fuel is natural gas. In an introductory chemistry course such as this one offered by Udemy, you will learn about the fact that each different type of fossil fuel occurs in a different chemical state, as well as why this occurs. Where coal is a solid and oil is a liquid, natural gas is, of course, a gas. It is made up primarily of methane and is incredibly lightweight (as well as incredibly flammable). In the United States, natural gas is used primarily to heat homes, power air conditioning systems, and fuel stoves and other cooking appliances. Usually, when a mining operation locates a petroleum reserve, it will also have found a source of natural gas. These two types of fossil fuels simply tend to occur close to one another underground, making mining and harvesting the two resources thankfully efficient once they are found.
Unlike oil, though, which is pumped from the ground by massive oil rigs, natural gas is channeled into pipelines. These pipelines take the natural gas to storage facilities, from which it eventually makes its way to your home to meet a portion of your energy needs. When we use natural gas for cooking, we often notice a distinctive smell that we associate with the gas. Interestingly, natural gas is odorless when it is extracted from beneath the Earth’s surface, with the smell being added later as a means of alerting people to leaks of the substance.
Why Knowing about Fossil Fuels is Important
So why is it of pivotal importance to understand how the different types of fossil fuels originated, how they are harvested, and how they are used? For one thing, it is important simply because these substances still play such a pivotal role in our lives. They give us the means to get around, to keep our homes warm, to cook our meals, and more. Given the state of the movement toward alternative energy, it is likely that fossil fuels will continue to play these key roles for years or even decades still to come. In short, their importance isn’t dimming just yet. For another thing, though, learning about the incredible geological processes that brought about fossil fuels in the first place shows just how precious these fuels are, and reminds us of why preserving them – whether through “green” lifestyle choices, fuel economy cars, or the use of alternative energy – is an important thing to do. Companies that do building projects can even work toward greater levels of sustainability consciousness by pursuing Green LEED certification (and taking this Udemy course to help them get there). Ultimately, the important thing to remember is that fossil fuels, while amazing sources of energy, won’t last forever, so making plans for the future is something that we need to do now.
Chapter 4 Rocks: Mineral Mixtures
Standard S6E5.b Investigate the composition of rocks in terms of minerals
EQ: How are rocks and minerals different?
Section 1 pp. 90, 95, 96
• Rock: a naturally occurring solid mixture of one or more minerals and organic (living) matter
• What is the Rock Cycle? The series of processes in which a rock forms, changes from one type to another, is destroyed, and forms again by geological processes.
• Past uses for Rocks: used to make hammers (to make other tools) and ancient and modern buildings & monuments
• What is weathering? The process in which water, wind, ice & heat break down rocks.
• Why is it important? It breaks down rock into fragments. This is the sediment from which SEDIMENTARY rocks are made.
• What is erosion? The process by which wind, water, ice, or gravity transports soil & sediment from one location to another.
• What is deposition? The process in which material is laid down. Sediments may be pressed & cemented.
EQ: What is Uplift?
• What is uplift? Movement within the Earth that causes rocks inside the Earth to be moved to the Earth’s surface.
• What happens when uplift reaches the Earth’s surface? Weathering, erosion, and deposition begin.
EQ: How are rocks classified?
• 3 Classes of Rocks: 1. Igneous 2. Sedimentary 3. Metamorphic
• Rocks are classified by:
• Composition: the chemical makeup of the rock (the minerals and other materials)
• Texture: the size, shape and positions of the rock grains; provides clues as to how and where the rock formed
Warmup: Write a paragraph that compares and contrasts minerals and rocks.
EQ: Where do igneous rocks come from?
Section 2: pg.
98-101
• Igneous Rocks: igneous means “fire”. Begins as magma that contains many minerals; cooled magma hardens and solidifies (becomes a solid).
Igneous Rock Composition
• Determined by minerals
• Light colored ones: less dense, made of aluminum & silicon (Felsic Rocks)
• Dark colored ones: more dense, made of iron, calcium, & magnesium (Mafic Rocks)
Igneous Rock Texture
• Size of the grains
• Fast cooling lava on the surface of a volcano: fine grains or no grains. Ex: pumice
• Slow cooling magma inside the Earth: large grains. Ex: granite
Igneous Rock Formation
• Intrusive igneous rock: forms inside Earth, cools slowly, large (coarse) grains; many are named for their size & shape. Ex: granite
• Extrusive igneous rock: forms on Earth’s surface, common around volcanoes, cools fast, fine grains or no grains. Ex: pumice
• In other words, the faster the magma or lava cools, the smaller the grains of the rock; the slower the magma or lava cools, the larger the grains of the rock.
Summary
• Compare and contrast Stone Mountain’s granite and pumice from a volcano.
Draw a Picture of the Formation of Igneous Rocks
• Label intrusive, extrusive, magma, lava.
• Show the grain size of the developing rocks.
• Indicate how fast the rocks cool.
• Name rock samples for each class of igneous rocks.
EQ: What are sedimentary rocks made of?
Section 3 pg. 102–105
• Sediments: fragments of weathered rock & minerals
• Strata: layers of sedimentary rock on Earth’s surface that form when the sediment is deposited & cemented together by dissolved calcite & quartz
• Stratification: the process in which sedimentary rocks are arranged in layers
• What do they record? Motion of wind & water waves on oceans, rivers, and sand dunes.
EQ: What are the 3 classes of sedimentary rock?
• Clastic: made of rock fragments cemented together by dissolved calcite & quartz; may be any grain size. Examples: conglomerate, shale, sandstone
• Chemical: forms from solutions of dissolved minerals in water; the dissolved minerals crystallize. Ex: halite (salt, NaCl), the result of supersaturated salt water
• Organic: made from the remains of dead organisms. Ex: chalk is made of tiny sea creatures; coal forms underground when decomposed plant material is changed into coal by heat & pressure. Fossil fuels are nonrenewable resources.
Summary
• Describe the formation of the 3 classes of sedimentary rock.
• Draw a picture of how each sedimentary rock forms: show the rock “before” it became a sedimentary rock and the “after”, or the resulting sedimentary rock. Label each class of sedimentary rock.
• Compare and contrast igneous and sedimentary rocks.
EQ: What is metamorphism?
Section 4 pg. 106–111
• Metamorphism: “change shape”
• Metamorphic Rock: the structure, texture or composition of the rock changes because of extreme heat and/or pressure; a chemical change occurs
• Deformation: a change in the shape of a rock caused by a force, like squeezing or stretching
EQ: What are the 2 classes of metamorphic rock?
• Foliated: mineral grains are arranged in bands. Ex: mica, slate
• Non-foliated: random arrangement of grains; commonly made of one or a few minerals. Ex: marble
Summary
• If I needed to make a tool from a rock, should I choose a foliated or a non-foliated metamorphic rock? Explain your answer.
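The cooling-rate rule from the igneous rock notes above (slow cooling inside the Earth gives intrusive rock with coarse grains; fast cooling at the surface gives extrusive rock with fine or no grains) can be sketched as a small lookup. The function name and dictionary keys here are purely illustrative.

```python
# Illustrative sketch of the igneous-rock rules from the notes above:
# slow cooling inside the Earth -> intrusive, coarse grains (e.g. granite);
# fast cooling at the surface -> extrusive, fine or no grains (e.g. pumice).
def classify_igneous(cools_inside_earth: bool) -> dict:
    if cools_inside_earth:
        return {"type": "intrusive", "cooling": "slow",
                "grains": "large (coarse)", "example": "granite"}
    return {"type": "extrusive", "cooling": "fast",
            "grains": "fine or none", "example": "pumice"}

print(classify_igneous(True)["example"])   # granite
print(classify_igneous(False)["example"])  # pumice
```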
If you’ve ever listened to the radio during a thunderstorm, you’re sure to have noticed interruptions or static at one time or another. Perhaps you heard the voice of a pilot rattling off data to a control tower when you were listening to your favorite FM station. This is an example of interference that is affecting a receiver’s performance. Annoying as this may be while you’re trying to listen to music, noise and interference can be hazardous in the world of HF communications, where a mission’s success or failure depends on hearing and understanding the transmitted message. Receiver noise and interference come from both external and internal sources. External noise levels greatly exceed internal receiver noise over much of the HF band. Signal quality is indicated by signal-to-noise ratio (SNR), measured in decibels (dB). The higher the SNR, the better the signal quality. Interference may be inadvertent, as in the case of the pilot’s call to the tower. Or, it may be a deliberate attempt on the part of an adversary to disrupt an operator’s ability to communicate. Engineers use various techniques to combat noise and interference, including: (1) boosting the effective radiated power, (2) providing a means for optimizing operating frequency, (3) choosing a suitable modulation scheme, (4) selecting the appropriate antenna system, and (5) designing receivers that reject interfering signals. Let’s look at some of the more common causes of noise and interference.
Natural Sources of Noise
Lightning is the main atmospheric (natural) source of noise. Atmospheric noise is highest during the summer and greatest at night, especially in the 1- to 5-MHz range. Average values of atmospheric noise, as functions of time of day and season, have been determined for locations around the world, and are used in predicting HF radio system performance. Another natural noise source is galactic or cosmic noise, generated in space.
It is uniformly distributed over the HF spectrum, but does not affect performance below 20 MHz. Power lines, computer equipment, and industrial and office machinery produce man-made noise, which can reach a receiver through radiation or by conduction through power cables. This type of man-made noise is called electromagnetic interference (EMI), and it is highest in urban areas. Grounding and shielding of the radio equipment and filtering of AC power input lines are techniques used by engineers to suppress EMI. At any given time, thousands of HF transmitters compete for space on the radio spectrum in a relatively narrow range of frequencies, causing interference with one another. Interference is most severe at night in the lower bands at frequencies close to the MUF. The HF radio spectrum is especially congested in Europe due to the density of the population. A major source of unintentional interference is the collocation of transmitters, receivers, and antennas. It's a problem on ships, for instance, where space limitations dictate that several radio systems be located together. For more than 30 years, Harris RF Communications has designed and implemented high-quality integrated shipboard communications systems that eliminate problems caused by collocation. Ways to reduce collocation interference include carefully orienting antennas, using receivers that won't overload on strong, undesired signals, and using transmitters that are designed to minimize intermodulation. Deliberate interference, or jamming, results from transmitting on operating frequencies with the intent to disrupt communications. Jamming can be directed at a single channel or be wideband. It may be continuous (constant transmitting) or look-through (transmitting only when the signal to be jammed is present). Modern military radio systems use spread-spectrum techniques to overcome jamming and reduce the probability of detection or interception.
Spread-spectrum techniques are techniques in which the modulated information is transmitted in a bandwidth considerably greater than the frequency content of the original information. We'll look at these techniques in Chapter 7. Signals from a transmitter reach the receiver via multiple paths (Figure 4-1). This causes fading, a variation in average signal level, because these signals may add to or subtract from each other in a random way.
• Natural (atmospheric) and man-made sources cause noise and interference. Lightning strikes are the primary cause of atmospheric noise; power lines, computer terminals, and industrial machinery are the primary causes of man-made noise.
• Congestion of HF transmitters competing for limited radio spectrum in a relatively narrow range of frequencies causes interference. It is generally worse at night in lower frequency bands.
• Collocated transmitters interfere with each other, as well as with nearby receivers.
• Jamming, or deliberate interference, results from transmitting on operating frequencies with the intent to disrupt communications.
• Multipath interference causes signal fading.
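The multipath fading described above can be illustrated with a toy two-path phasor sum; this is a sketch under the simplifying assumption of two equal-strength paths, and the carrier frequency and delay values are invented for illustration:

```python
import math

def two_path_amplitude(freq_hz: float, delay_s: float) -> float:
    """Magnitude of the phasor sum of two equal-strength paths whose
    arrival times differ by `delay_s` seconds (result ranges 0..2)."""
    phase = 2 * math.pi * freq_hz * delay_s
    return math.hypot(1.0 + math.cos(phase), math.sin(phase))

def amplitude_db(a: float) -> float:
    """Amplitude ratio in decibels (20*log10 for amplitudes; the SNR
    figure mentioned in the text uses 10*log10 for power ratios)."""
    return 20 * math.log10(a)

# Invented numbers: a 5-MHz carrier with a 0.1-ms differential delay
# arrives in phase (a whole number of cycles) and reinforces ...
print(amplitude_db(two_path_amplitude(5e6, 1e-4)))  # about +6 dB
# ... while a half-cycle offset produces a deep fade.
print(two_path_amplitude(5e6, 1.001e-4))            # near-total cancellation
```

Because the relative delay changes as the ionosphere moves, the received level wanders between these extremes, which is the random fading the text describes.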
Although there is considerable uncertainty about the physical changes and response of the various freshwater and marine species, it is possible to suggest how certain species may respond to projected climate changes over the next 50-100 years. The uncertainties highlight the importance of research to separate the impacts of changing climate from natural population fluctuations and fishing effects. Many commercial finfish populations already are under pressure (e.g., overexploited), and global change may be of minor concern compared with the impacts of ongoing and future commercial fishing and human use or impacts on the coastal zone. Further, changes in the variability of climate may have more serious consequences on the abundance and distribution of fisheries than changes in mean conditions alone (Katz and Brown, 1992), and changes in future climate variability are poorly understood at this time. Fish, including shellfish, respond directly to climate fluctuations, as well as to changes in their biological environment (predators, prey, species interactions, disease) and fishing pressures. Although this multiforcing sometimes makes it difficult to establish unequivocal linkages between changes in the physical environment and the responses of fish or shellfish stocks, some effects are clear (see reviews by Cushing and Dickson, 1976; Bakun et al., 1982; Cushing, 1982; Sheppard et al., 1984; Sissenwine, 1984; and Sharp, 1987). These effects include changes in the growth and reproduction of individual fish, as well as the distribution and abundance of fish populations. In terms of abundance, the influence occurs principally through effects on recruitment (how many young survive long enough to potentially enter the fishery) but in some cases may be related to direct mortality of adult fish. Fish carrying capacity in aquatic ecosystems is a function of the biology of a particular species and its interrelationship with its environment and associated species. 
Specific factors that regulate the carrying capacity are poorly known for virtually all species, but some general statements can be made with some confidence. Fish are affected by their environment through four main processes (Sheppard et al., 1984): Fish are influenced not only by temperature and salinity conditions but also by mixing and transport processes (e.g., mixing can affect primary production by promoting nutrient replenishment of the surface layers; it also can influence the encounter rate between larvae and prey organisms). Ichthyoplankton (fish eggs and larvae) can be dispersed by the currents, which may carry them into or away from areas of good food production, or into or out of optimal temperature or salinity conditions, and perhaps ultimately determine whether they are lost to the original population. Climate is only one of several factors that regulate fish abundance. Managers attempt to model abundance trends in relation to fishing effects in order to sustain fisheries. In theory, a successful model could account for global warming impacts along with other impacts without understanding them. For many species of fish, the natural mortality rate is an inverse function of age: Longer-lived fish will be affected by natural changes differently than shorter-lived fish. If the atmosphere-freshwater-ocean regime is stable for a particular time, it is possible to estimate the age-specific mortality rates for a species of interest. However, at least some parts of the atmosphere-freshwater-ocean system are prone to oscillations on a decadal scale, which may not be cyclical. These natural changes occur globally; thus, they will have impacts on the freshwater and marine ecosystems that support North American fish populations. Under natural conditions, it may be expected that the different life histories of these fish will result in different times of adjustment to a new set of environmental conditions.
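The statement that natural mortality is an inverse function of age can be illustrated with a toy survival calculation; this is a numerical sketch, not a fisheries model, and the mortality constant and time span are invented:

```python
import math

def survivors(initial: float, mortality_at_age, years: int) -> float:
    """Apply the standard exponential survival step, N(a+1) = N(a) * exp(-M(a)),
    year by year with an age-varying annual natural mortality rate M(a)."""
    n = initial
    for age in range(1, years + 1):
        n *= math.exp(-mortality_at_age(age))
    return n

# Invented inverse-of-age mortality, M(a) = 0.8 / a: young fish die at
# much higher annual rates than older fish.
inverse_m = lambda age: 0.8 / age
print(round(survivors(1_000_000, inverse_m, 5)))  # survivors of a cohort after 5 years
```

Under such a schedule most losses occur in the first year or two of life, which is why recruitment (how many young survive) dominates year-to-year abundance, as the text notes.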
Any effects of climate change on fisheries are expected to be most pronounced in sectors that already are characterized by full utilization, large overcapacities of harvesting and processing, and sharp conflicts among users and competing uses of aquatic ecosystems. Climate change impacts, including changes in natural climate variability on seasonal to interannual time scales, are likely to exacerbate existing stresses on fish stocks. The effectiveness of actions to reduce the decline of fisheries depends on our ability to distinguish among these stresses and other causes of change and on our ability to effectively deal with those over which we have control or for which we have adaptation options. This ability is insufficient at present; although the effects of environmental variability are increasingly recognized, the contribution of climate change to such variability is not yet clear. Recreational fishing is a highly valued activity that could incur losses in some regions as a result of climate-induced changes in fisheries. Recreational fishing is a highly valued activity within North America. In the United States, for example, 45 million anglers participate annually; they contribute to the economy through spending on fishing and related activities (US$24 billion in 1991). The net economic effect of changes in recreational fishing opportunities as a result of climate-induced changes in fisheries is dependent on whether projected gains in cool- and warm-water fisheries offset losses in cold-water fisheries. Work by Stefan et al. (1993) suggests mixed results for the United States, ranging from annual losses of US$85-320 million to benefits of about US$80 million under a number of GCM projections. A sensitivity analysis (U.S. EPA, 1995) was conducted to test the assumption of costless transitions across these fisheries. This analysis assumed that best-use cold-water fishery losses caused by thermal changes were effectively lost recreational services. 
Under this assumption, all scenarios resulted in damages, with losses of US$619-1,129 million annually.
Scientists from Imperial College London have made a huge medical leap by converting embryonic stem cells (the precursors to every cell in the body) into lung cells. This marks the first step towards growing human lungs for transplantation. The research, which will be published in the journal Tissue Engineering, took human embryonic stem cells and encouraged them to convert into the cells needed for gas exchange in the lung, i.e., where oxygen is absorbed and carbon dioxide is excreted. The investigations have taken place using stem cells derived from embryos, but the scientists say the next step is to test the system using stem cells from other (perhaps less controversial) sources, including umbilical cord blood and bone marrow. Professor Dame Julia Polak, from ICL, who led the research team, says: "This is a very exciting development, and could be a huge step towards being able to build human lungs for transplantation or to repair lungs severely damaged by incurable diseases such as cancer." While the end goal of building lungs for transplantation is several years off, the researchers say they hope to use their findings after initial lab testing to treat problems such as acute respiratory distress syndrome, which currently kills many intensive care patients. By injecting stem cells that will become lung cells, they hope to be able to repopulate the lung lining in these patients. The team will commercialise their findings through the Imperial College spin-out company NovaThera.
July 1969 Radio-Electronics [Table of Contents] Wax nostalgic about and learn from the history of early electronics. See articles from Radio-Electronics, published 1930-1988. All copyrights hereby acknowledged. In Part 1 of this "All About IC's" series, titled "What Makes Them Tick," author Bob Hibberd introduced the concepts of semiconductor physics and doped PN junctions. In Part 2, he discusses methods used to fabricate monolithic integrated circuits (IC's) on silicon chips. Transistors, diodes, resistors, capacitors, and to some extent inductors, can be built using a combination of variously doped junction regions, metallization, and oxidation (insulators). Technology has come a long way since 1969, including mask techniques, 3-D structures, doping gradients, feature size, dielectric breakdown strength, current leakage, circuit density, mixed analog, RF, and digital circuitry, and other things. Part 3, covered in the August issue, goes into more detail about how passive components are realized in silicon. See Part 1. How to squeeze diodes, transistors, capacitors, resistors into thousandths of an inch - Part 2, by Bob Hibberd, Texas Instruments, Dallas, Texas. The transistor revolutionized electronics. It made possible smaller, lighter, more versatile, more reliable, less costly electronic gear which required less operating power. But the transistor was only a prelude to a much greater revolution - the monolithic integrated circuit. These devices perform complete circuit functions in a space the size of a single transistor. As a result, IC's (integrated circuits) are becoming the basic component of electronic equipment. They are rapidly replacing assemblies of discrete transistors, diodes, resistors and capacitors. In this article we will see how silicon monolithic integrated circuits are made. The method of forming the electronic circuit elements within a single silicon wafer and interconnecting them to give a complete electronic circuit is detailed. Fig.
1 - A 1.5-inch silicon slice can contain 500 IC wafers. Within a 50-mil wafer, resistors and transistors need the most area. Typical component sizes. Fig. 2 - To make an IC, greatly enlarged drawings are photographically reduced to about 50-mil square. Separate masks are needed for each oxide removal step. The terms "microelectronics" and "integrated circuits" are sometimes used interchangeably, but this is not correct. Microelectronics is a name for extremely small electronic components and circuit assemblies, made by thin-film, thick-film or semiconductor techniques. An integrated circuit (IC) is a special kind of microelectronics. It is a circuit that has been fabricated as an inseparable assembly of electronic elements in a single structure. It cannot be divided without destroying its intended electronic function. Thus, IC's come under the general category of microelectronics, but all microelectronic units are not necessarily IC's. There are two basic approaches to modern microelectronics: monolithic integrated circuits and film circuits. In monolithic integrated circuits, all circuit elements, active and passive, are simultaneously formed in a single small wafer of silicon. The elements are interconnected by metallic stripes deposited onto the oxidized surface of the silicon. Film circuits are made by forming the passive electronic components and metallic interconnections on the surface of an insulating substrate. Then the active semiconductor devices are added, usually in discrete wafer form. There are two types of film circuits, thin film and thick film. In thin-film circuits the passive components and interconnection wiring are formed on glass or ceramic substrates, using evaporation techniques. The active components (transistors and diodes) are fabricated as separate semiconductor wafers and assembled into the circuit.
Thick-film circuits are prepared in a similar manner, except that the passive components and wiring pattern are formed by silk-screen techniques on ceramic substrates. Other integrated circuits are produced using a combination of techniques. In multichip circuits, the electronic components for a circuit are formed in two or more silicon wafers (chips). The chips are mounted side by side on a common header. Some interconnections are included on each chip, and the circuit is completed by wiring the chips together with small-diameter gold wire. Hybrid integrated circuits are combinations of monolithic and film techniques. Active components are formed in a wafer of silicon using the planar process, and the passive components and interconnection wiring pattern are formed on the surface of the silicon oxide, which covers the wafer, using evaporation techniques. The monolithic IC is often considered a single electronic component, since it is made and installed as a single entity. The circuit components, as they were called in discrete assemblies, are referred to as circuit elements of integrated circuits. From now on, we will use the word "element" for this purpose. Monolithic IC technology is an extension of the diffused planar process. Active elements (transistors and diodes) and passive elements (resistors and capacitors) are formed in the silicon slice by diffusing impurities into selected regions to modify electrical characteristics, and where necessary to form pn junctions. The various elements are designed so all can be formed simultaneously by the same sequence of diffusions. In practice, the details of the diffusion processes are decided by the requirements of the transistors. The geometry of the other elements is designed so desired values are obtained with the transistor diffusion schedules. All process operations are carried out on the top surface of the silicon slice, and all element contact regions are formed on this same surface.
They are interconnected to form the complete electronic circuit by evaporating a metallic wiring pattern atop the silicon oxide which covers the surface between the contact areas. As with planar transistors, selective oxide removal, diffusion and metallization are carried out on whole silicon slices. On each slice, the same circuit pattern is repeated a large number of times. For example, with an IC wafer 50 mils square (1 mil equals 0.001 inch), a single slice of silicon, 1.5 inches in diameter, contains about 500 circuits, which are all processed at the same time (see Fig. 1). Fig. 3 - PN junction formation as oxidized n-type layer (a) has areas opened (b), and p-type diffusion (c) isolates n areas. Surface is re-oxidized (d). Fig. 4 - Oxide isolation starts with etched channels between surface-oxidized elements (a & b); silicon deposit (c) is then lapped on inverted slice (d). The general sequence of monolithic integrated circuit fabrication is shown in Fig. 2. The first step is the "breadboard" design of the electronic circuit using discrete components. The circuit is designed to perform the required function and to ensure that the values of the circuit elements are compatible with the diffusion processes. Next, the circuit elements are designed dimensionally and the complete circuit laid out in a geometric pattern. This is usually done by drawing the layout about 500 times full size - a 50-mil-square wafer is drawn about 2 feet square. From this drawing, a series of related drawings are prepared, one for each of the oxide removal steps. Each drawing is reduced to actual size by a series of photographic processes. At the same time as the final reduction to life size, the pattern is repeated by indexing the photographic plate under the image in a "step and repeat" sequence. For each oxide removal step, a "master" photographic mask is made. It contains a matrix of the circuit patterns in precise location over an area greater than the slices to be processed.
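The die-count arithmetic quoted above (about 500 circuits of 50-mil-square size on a 1.5-inch slice) can be checked with a rough sketch; the edge-loss fraction below is a guessed figure for partial dies at the rim, not a number from the article:

```python
import math

def dies_per_slice(slice_diameter_in: float, die_side_in: float,
                   edge_loss: float = 0.3) -> int:
    """Rough count of square dies on a round slice: usable slice area
    divided by die area. `edge_loss` is an assumed wasted fraction."""
    slice_area = math.pi * (slice_diameter_in / 2) ** 2
    die_area = die_side_in ** 2
    return int(slice_area * (1 - edge_loss) / die_area)

# 50 mils = 0.050 inch per side, on a 1.5-inch-diameter slice:
print(dies_per_slice(1.5, 0.050))  # on the order of 500, as the article states
```

The same arithmetic also confirms the 500x layout drawing: 50 mils scaled up 500 times is 25 inches, roughly the 2-foot-square drawing mentioned.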
Copies are made from the master, and are used to expose the photoresist selectively during the oxide removal steps. To understand IC fabrication, we must know how each type of circuit element is formed. The elements used are the same as those in discrete circuits - transistors, diodes, resistors and capacitors. A requirement common to all elements in IC's is that each one must be electrically isolated from the main part of the silicon wafer so that unwanted coupling between elements is minimized. Then the only connections between the elements will be the metallized pattern on the surface. Several isolation methods have been developed. The most common are diode isolation and oxide isolation. Diode isolation uses the very high resistance of a reverse-biased pn junction. In this process (Fig. 3), an n-type epitaxial layer is grown on a p-type substrate slice of silicon. The surface of the epitaxial layer is oxidized (Fig. 3-a) and the oxide selectively removed from everywhere but the regions in which the elements will be formed (Fig. 3-b). A p-type diffusion is then carried out and the p-type regions formed extend down through the epitaxial layer and join up with the p-type substrate (Fig. 3-c). This leaves n-type regions, each separated from the substrate by a pn junction (Fig. 3-d). When the final IC is operated, the pn junctions are all biased in the reverse direction by connecting the p-type substrate to a potential more negative than any part of the circuit. Then each junction presents a very high resistance which isolates the element formed in the n-type region of the junction. With oxide isolation, a layer of silicon oxide is formed around each element as in Fig. 4. On a slice of n-type single crystal silicon (Fig. 4-a) channels are etched in the surface between the locations planned for each element. Then the surface of the slice, including the channels, is oxidized to form a continuous layer of silicon oxide Fig. 4-b. 
Polycrystalline silicon is deposited on top of the oxide in an epitaxial reactor (Fig. 4-c). Finally the slice is inverted and the original silicon is lapped down so only the regions between the channels are left (Fig. 4-d). Each of these is a region of single-crystal silicon isolated by the layer of silicon oxide and supported on the substrate of polycrystalline silicon. A third system of isolation, used for special applications, is called beam-lead isolation. The circuit elements are formed in a wafer of silicon in the regular manner. The interconnecting metallization is made thicker than usual. Then the silicon between each element is completely removed by etching from the back side. The etchant does not attack the metallization, so each element is completely separate and is supported from the top by the metallic connections. A thermosetting plastic can be applied to fill the spaces between the elements for added mechanical support.
Forming an IC Transistor
The techniques for making bipolar transistors for integrated circuits are similar to those for discrete planar transistors. A typical arrangement using diode isolation is shown in Fig. 5-a. After the isolation process, boron is diffused in to form the p-type base region. Then phosphorus is diffused in to form the high-concentration n+-type emitter region. At the same time, another n+ region is diffused into the n-type collector region so a low-resistance contact to the collector region can be made. There is one significant difference from the discrete planar transistor. The collector contact is made at the top surface, alongside the base and emitter contacts. This is a problem because collector current must flow laterally along the narrow n-type collector region to reach the contact. There is additional series-collector resistance compared with the discrete transistor, in which the collector contact is made to the bottom surface. Fig.
5 - An IC transistor, showing basic structure (a), preferred structure (b) and the device's equivalent circuit (c). Fig. 6-a - A collector-base IC diode used in general-purpose circuits. Fig. 6-b - The faster-switching emitter-base diode. To minimize this series resistance, a low-resistance n+-type region is selectively diffused into the substrate slice before the epitaxial growth of the n-type layer. This gives the structure in Fig. 5-b. Collector current can now flow straight down into the low-resistance n+ region and then sideways along it to the vicinity of the contact, resulting in a lower series-collector resistance (RCS). This arrangement is called D.U.F. (Diffusion Under the Epitaxial Film). The equivalent circuit of the transistor, including the isolation junction, is shown in Fig. 5-c. The isolation junction has capacitance C1 in parallel and series resistance R1 due to the resistance of the substrate between the active transistor region and the substrate contact. At high frequencies and fast switching speeds, the effect of the isolation diode capacitance must be carefully evaluated, as it may be high enough to allow some stray coupling to the substrate and other elements of the circuit. Because of its construction, the MOS (Metal Oxide Semiconductor) transistor is self-isolating. Both source and drain are isolated by their own pn junctions. The gate is isolated by the thin layer of silicon oxide. The channel formed under the gate is also isolated by a pn junction which forms with it. This means that MOS transistors can be fabricated in a smaller area than bipolar transistors, allowing a higher element density. The MOS transistor can be used as a resistor between source and drain. Its value is dependent on the gate potential and the transconductance of the structure.
Resistors with values compatible with switching circuits can be obtained by designing the MOS structure to have a low transconductance (wide source-to-drain spacing) and connecting the gate to the drain, so that the structure is biased on. Such resistors can be made in a much smaller area than that required for diffused resistors, allowing a further increase in element density. One disadvantage is that the MOS circuit has a considerably slower switching speed than the bipolar circuit.
Integrated Circuit Diodes
Integrated circuit diodes are prepared by forming pn junctions at the same time as one of the transistor junctions. A diode in which the cathode is the original n-type region and the p-type anode is formed during the transistor base diffusion is shown in Fig. 6-a. This diode has the same reverse-voltage capability as the transistor collector junction, and is widely used for general-purpose circuit applications. Where fast switching speeds are required, emitter-base diodes are used (Fig. 6-b). The diode anode is formed at the same time as the transistor base, and the cathode with the emitter. This gives a low-voltage diode with fast response time. To avoid unwanted effects caused by transistor action, this type of diode is arranged so the anode contact shorts the p-type anode region to the n-type region in which the diode is formed. In the August article of this series, we'll look at how other elements are formed in IC's. The upcoming section describes how resistor values are determined by p-type material dimensions and concentration. Also covered are junction and MOS-type capacitors, IC testing and assembly processes. Posted September 13, 2018
How the heart works
The heart is the most crucial and hardest working organ of the human body. Placed behind the chest bone in the rib cage, the average human heart measures up to the size of a closed fist. The basic purpose of this organ is to pump blood to the entire body. This muscular organ pumps blood through the blood vessels of the circulatory system. Blood is responsible for providing the body with oxygen and nutrients, along with the task of removing metabolic wastes. It supports the existence of all other organs, making it fundamental to living. Understanding the heart's structure is a prerequisite to understanding how it functions. It has four chambers (2 atria and 2 ventricles) along with valves. The atria are the upper chambers that receive blood. The two lower ventricles are the pumping chambers. Numerous blood vessels called arteries and veins are connected to the heart. There are four valves that control the flow of blood from the atria to the ventricles and from the ventricles to the two main arteries. The four valves are the tricuspid, pulmonary, mitral and aortic. It's important to understand the position and task of the valves. The tricuspid valve is placed between the right atrium and the right ventricle. The mitral valve is positioned between the left atrium and the left ventricle. Basically, blood flows out of the right ventricle to the lungs through the pulmonary valve, and blood flows out of the left ventricle to your body through the aortic valve. The basic purpose of the heart is to pump oxygen-rich blood to different organs, receive oxygen-poor blood, and again pump out oxygen-rich blood. This cycle is continuous till the last breath of a person. Let's have a look at this cycle in detail.
- The main veins that bring impure (oxygen-poor) blood from all parts of the body are the superior vena cava (SVC) and inferior vena cava (IVC).
- The impure blood enters the right atrium (RA), the upper right chamber.
- From the RA, the blood flows through the tricuspid valve (TV) to the right ventricle (RV), the lower right chamber of the heart.
- The right ventricle then pumps the impure (oxygen-poor) blood through the pulmonary valve into the pulmonary artery.
- The blood then flows to the left and right pulmonary arteries and finally to the lungs.
- The process of blood purification happens in the lungs. In the process of breathing, oxygen is added and carbon dioxide is removed from the blood. The blood becomes oxygen-rich here in the lungs.
- The pure blood goes back to the heart, into the left atrium, through the pulmonary veins.
- This blood further flows through the mitral valve into the left ventricle, which is the lower left chamber of the heart.
- Then the left ventricle pumps the pure blood through the aortic valve into the aorta. The aorta is the main artery that carries oxygen-rich blood to all parts of the body.
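The steps above can be summarized as an ordered sequence; a minimal sketch, with stop names abbreviated as in the text:

```python
# One full circuit of blood through the heart, in order, paired with the
# oxygen state at each stop (as described in the steps above).
CIRCUIT = [
    ("superior/inferior vena cava", "oxygen-poor"),
    ("right atrium (RA)", "oxygen-poor"),
    ("tricuspid valve -> right ventricle (RV)", "oxygen-poor"),
    ("pulmonary valve -> pulmonary arteries -> lungs", "oxygenated here"),
    ("pulmonary veins -> left atrium (LA)", "oxygen-rich"),
    ("mitral valve -> left ventricle (LV)", "oxygen-rich"),
    ("aortic valve -> aorta -> body", "oxygen-rich"),
]

for stop, state in CIRCUIT:
    print(f"{stop}: {state}")
```

Reading the list top to bottom makes the key point visible at a glance: the right side of the heart handles only oxygen-poor blood, and the left side only oxygen-rich blood.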
The British government established the Imperial War Graves Commission in 1917 to care for the overseas graves of the Empire's war dead. The new organization developed out of the British Army's Graves Registration Commission, established in 1915, and in 1960 was renamed the Commonwealth War Graves Commission (CWGC).
Burying the Fallen
The British Empire chose to bury its battlefield dead from the First World War near the sites where they had fallen, and not to repatriate remains to their home countries, as many grieving families and politicians had demanded. While thousands of bodies had been buried in makeshift graves during the fighting, military units, assisted first by the Red Cross and later by official grave registrars, had made efforts to note temporary sites for future reburials. After battles, special grave detachments attempted to collect the unburied dead for proper burial, and to disinter remains from temporary graves for proper reburial elsewhere. After the Armistice, this process began in earnest, with the vastly expanded Imperial War Graves Commission moving remains into newly established imperial military cemeteries. The process involved tens of thousands of burials and took many years. It still continues on a smaller scale, as agricultural or construction work across old battlefields regularly uncovers additional human remains.
The War Graves Commission
The Commission imposed a sense of social equality in its cemeteries and made no rank distinctions in the physical construction of grave markers. Each simple white headstone carries the name, rank, and unit symbol of the deceased, and a religious symbol if the soldier's religion was known. The unknown dead carry an inscription chosen by British author Rudyard Kipling, who lost a son during the war: "A Soldier of the Great War - Known Unto God." No other personalized adornments were allowed, other than the opportunity for next of kin to pay for a short motto to appear at the bottom of the headstone.
Today, the CWGC cares for the graves of 1.7 million members of Commonwealth forces who died in the two world wars and in subsequent conflicts around the globe. It maintains 2,500 cemeteries in over 170 countries. The Commission consists of six member countries – the United Kingdom, Canada, Australia, New Zealand, India, and South Africa. Member states jointly fund the Commission’s operations, but the United Kingdom pays more than 75 percent of the costs.
The Han dynasty is often regarded as one of the most successful of all the Chinese dynasties. The practices and traditions of this dynasty helped set the tone for the imperial rule that governed China for over 2,000 years. Like all the other dynasties, religion played a great role in shaping the rule of those in power. Just like the arts and the technologies developed during that era, the influences of the Han dynasty religions spread long after the dynasty had ended. Several forms of religion dominated China during the Han period. Ancestor worship had been widespread in China long before the beginning of the Han dynasty, and it remained in practice during the Han dynasty. The emperor worshipped his ancestors through costly burials, and families all throughout China made ritual sacrifices not only to the deities and spirits but also to their ancestors. The emperor was also expected to revere Heaven and Earth, the Great Unity, and the deities and spirits of the seasons. It was a tradition for the emperor to climb Mount Tai to give offerings to Heaven and Earth. Taoism, however, is considered to be the main Han dynasty religion, and it was founded as an organized religion during the Han dynasty. The Chinese people held Taoist ceremonies for worship and religious purposes. Taoism can be characterized by a belief in complementary opposites, such as "there would be no love without hate." Buddhism also became a major religion in China during the Han dynasty after its arrival around the 1st century CE. It was believed to have been brought by travelers who took the Silk Road from North India. Confucianism, on the other hand, was more a philosophy than a Han dynasty religion, but it also ruled China for almost 2,000 years. It was during the Han dynasty that China first embraced Confucianism. Despite not being a religion, it became one of the most important ideological beliefs of that era.
There were still many primitive religions and beliefs observed by some minorities during that time. What is undeniable, though, is that each of these helped shape the Han dynasty into the successful dynasty that it was.
Early signs of branching evolutionary trees, or phylogenetic trees, appear in paleontological charts. One such chart was illustrated in Edward Hitchcock's book Elementary Geology, which showed the geological relationships between plants and animals. Going further back, the idea of a tree of life grew out of ancient notions of a ladder-like progression from lower to higher forms of life, such as the Great Chain of Being. In the 1850s, Charles Darwin produced one of the first drawings of an evolutionary tree, in his seminal book On the Origin of Species (1859), where he showed the importance of evolutionary trees. In the years since, evolutionary biologists have studied the forms of life using tree diagrams to depict evolution, because such diagrams are very effective at explaining how speciation happens through the adaptive and random splitting of lineages. For many decades, biologists have used evolutionary trees as a tool for studying the history of life.

Phylogeny is the formal study of organisms and their evolutionary history with respect to each other. Phylogenetic trees are most commonly used to depict the relationships that exist between species. In particular, they clarify whether certain traits are homologous (found in the common ancestor as a result of divergent evolution) or homoplastic (sometimes referred to as analogous: a character not found in a common ancestor but whose function developed independently in two or more organisms through convergent evolution). Evolutionary trees are diagrams that show various biological species and their evolutionary relationships; they consist of branches that flow from lower forms of life to higher forms of life.
Evolutionary trees differ from taxonomy. Taxonomy is an ordered division of organisms into categories based on a set of characteristics used to assess similarities and differences, whereas evolutionary trees involve biological classification and use morphology to show relationships. Phylogeny depicts evolutionary history through the relationships found by comparing polymeric molecules such as RNA, DNA, or protein across organisms. The evolutionary pathway is analyzed through the sequence similarity of these polymeric molecules, based on the assumption that greater sequence similarity reflects less evolutionary divergence. The evolutionary tree is constructed by aligning the sequences; the length of each branch is proportional to the number of amino acid differences between the sequences.

Phylogenetic systematics informs the construction of phylogenetic trees based on shared characters. Comparing nucleic acids or other molecules to infer relationships is a valuable tool for tracing an organism's evolutionary history. The ability of molecular trees to encompass both short and long periods of time hinges on the ability of genes to evolve at different rates, even in the same evolutionary lineage. For example, the DNA that codes for rRNA changes relatively slowly, so comparisons of DNA sequences in these genes are useful for investigating relationships between taxa that diverged a long time ago. Interestingly, 99% of the genes in humans and mice are detectably orthologous, and 50% of our genes are orthologous with those of yeast. The hemoglobin B genes in humans and in mice are orthologous: they serve similar functions, but their sequences have diverged since the time that humans and mice last shared a common ancestor. Evolutionary pathways relating the members of a family of proteins may likewise be deduced by examining sequence similarity.
This approach is based on the notion that sequences that are more similar to one another have had less evolutionary time to diverge than sequences that are less similar. Evolutionary trees are used today in DNA hybridization studies, which determine the percentage difference of genetic material between two similar species. If there is a high resemblance of DNA between the two species, then the species are closely related; if only a small percentage is identical, then they are distantly related.

Construction of an Evolutionary Tree

Each point at which a line in an evolutionary tree branches off is known as a node. A node is a common ancestor of the species that come off that branch. Relationships between species in an evolutionary tree include monophyletic, paraphyletic, and polyphyletic groups. A monophyletic group is a branch of species that contains a common ancestor and all of its descendants. Paraphyletic groups consist of a common ancestor but not all of its descendants. Polyphyletic groups consist of organisms that do not have a (recent) common ancestor and are usually assembled to study similar characters among relatively unrelated organisms. Nodes are calculated using computational phylogenetic programs that compute genetic distances from multiple sequence alignments. There are limitations, however, primarily the inability to account for the actual evolutionary history: while a sequence alignment shows comparatively how related two species are, it gives no indication of how they evolved. On such trees, the three domains originate from the same ancestor and then branch into the two distinct groups Eukarya and Prokarya; however, Archaea branches off the Eukarya lineage, even though archaea are single-celled.

Evidence for phylogeny construction: 1. The fossil record; the problem is that the record is incomplete and only hard structures were preserved. 2.
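The distance-based construction described above can be sketched in a few lines of Python. Everything here is illustrative: the short aligned sequences, the species names, and the choice of simple UPGMA average-linkage clustering are assumptions for demonstration, not data or methods from any particular study.

```python
# Illustrative sketch: build a tiny tree from (already aligned) sequences.
# Sequences and species names are invented for demonstration.

def p_distance(a, b):
    """Fraction of aligned positions at which two sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to the same length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

def upgma(names, dist):
    """Minimal UPGMA: repeatedly merge the two closest clusters,
    averaging their distances to every other cluster (weighted by size)."""
    clusters = {n: 1 for n in names}              # cluster label -> leaf count
    d = {frozenset(k): v for k, v in dist.items()}
    while len(clusters) > 1:
        pair = min((k for k in d if all(c in clusters for c in k)),
                   key=lambda k: d[k])
        a, b = tuple(pair)
        merged = f"({a},{b})"
        size = clusters[a] + clusters[b]
        for c in clusters:
            if c not in pair:
                d[frozenset((merged, c))] = (
                    d[frozenset((a, c))] * clusters[a]
                    + d[frozenset((b, c))] * clusters[b]
                ) / size
        del clusters[a], clusters[b]
        clusters[merged] = size
    return next(iter(clusters))                   # Newick-like string

seqs = {"human": "ACGTACGTAC", "mouse": "ACGTACGAAC", "yeast": "ACGAACCTTC"}
names = list(seqs)
dist = {(p, q): p_distance(seqs[p], seqs[q])
        for i, p in enumerate(names) for q in names[i + 1:]}
tree = upgma(names, dist)
print(tree)   # groups human with mouse first, since their distance is smallest
```

A real analysis would correct raw p-distances with a substitution model (e.g. Jukes-Cantor) and would more likely use neighbor joining, but the node-merging idea sketched here is the same one the text describes.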
By studying recent species: looking at shared characters, both homologous and analogous, gives evidence of the evolutionary past.

Types of Evolutionary Trees

There are many different types of evolutionary trees, and each one represents something different. One type is the rooted phylogenetic tree, a directed tree that contains a special node corresponding to the most recent common ancestor of all the entities at the leaves of the tree. The use of an uncontroversial outgroup is one of the most common techniques for rooting trees; in other words, rooted trees are typically built using sequence or trait data from a particular outgroup. Another type is the unrooted tree. Unrooted trees illustrate the relatedness of the leaf nodes without making any assumptions about ancestry. An unrooted tree can always be created from a rooted one by omitting the root; in the other direction, inferring a root for an unrooted tree requires introducing assumptions about the relative rates of evolution on each branch (for example, the molecular clock hypothesis) or including an outgroup in the input data. Finally, both rooted and unrooted phylogenetic trees can be bifurcating or multifurcating, and can be shown as labeled or unlabeled. A rooted multifurcating tree may have more than two children at some nodes, while an unrooted multifurcating tree may have more than three neighbors at some internal nodes. A rooted bifurcating tree has exactly two descendants arising from each interior node and thus forms a binary tree, while an unrooted bifurcating tree is an unrooted binary tree.
An unrooted binary tree is a free tree with exactly three neighbors at each internal node. As for labeled and unlabeled trees: a labeled tree has unique values assigned to its leaves, while an unlabeled tree, also known as a tree shape, is defined by its topology only. Overall, the number of possible trees for a given number of leaf nodes depends on the specific type of tree, but there are always fewer bifurcating trees than multifurcating trees, fewer unlabeled than labeled trees, and fewer unrooted than rooted trees. By convention, the letter "n" denotes the number of leaf nodes.

Sequence Alignment in Evolutionary Trees

Evolutionary trees can be made by determining the sequences of similar genes in different organisms. Sequences that are more similar to each other are considered to have had less time to diverge, whereas less similar sequences have had more evolutionary time to diverge. The evolutionary tree is created by aligning sequences and making each branch length proportional to the amino acid differences between the sequences. Furthermore, by assigning a constant mutation rate to a sequence and performing a sequence alignment, it is possible to estimate the approximate time at which the sequence of interest diverged into monophyletic groups.

DNA can be amplified and sequenced thanks to the development of PCR methods. Mitochondrial DNA from a Neanderthal fossil was found to contain 379 bases of usable sequence. When compared with Homo sapiens, only 22 to 36 substitutions were found in the sequence, as opposed to 55 differences between Homo sapiens and chimpanzees over the same region. Further analysis suggests that Homo sapiens and Neanderthals shared a common ancestor about 600,000 years ago. This reveals that Neanderthals were not intermediate between chimpanzees and humans, but rather an evolutionary dead end that became extinct.
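The constant-mutation-rate calculation described above is simple enough to sketch. Note the rate used below is an assumed, illustrative value, chosen only so that the figures quoted in the text (roughly 30 substitutions over 379 mitochondrial sites, a split about 600,000 years ago) come out consistently; real studies calibrate the rate against dated fossils.

```python
# Sketch of a molecular-clock divergence estimate.
# The substitution rate below is an assumed value, not a published one.

def divergence_time(substitutions, sites, rate_per_site_per_year):
    """Years since two lineages split, assuming a constant clock.
    Differences accumulate along BOTH branches, hence the factor of 2."""
    p = substitutions / sites          # proportion of differing sites
    return p / (2 * rate_per_site_per_year)

# ~30 substitutions over the 379 aligned mitochondrial sites mentioned
# in the text, at an assumed rate of 6.6e-8 substitutions/site/year:
years = divergence_time(30, 379, 6.6e-8)
print(f"{years:,.0f} years")           # on the order of 600,000 years
```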
Sequence alignments can be performed on a variety of sequences. To construct an evolutionary tree for proteins, for example, the sequences are aligned and then compared by similarity to construct a tree. In the globin tree above, it is then possible to see which protein diverged first. Another common approach uses rRNA (ribosomal RNA) to compare organisms; rRNA has a slower mutation rate and is therefore a better source for constructing deep evolutionary trees. This is best illustrated by Dr. Carl Woese's study conducted in the late 1970s. Because ribosomes are critical to the functioning of living things, they are not easily changed through the process of evolution: significant changes could prevent a ribosome from doing its job, so its gene sequence is highly conserved. Taking advantage of this, Dr. Woese compared the minute differences in ribosomal sequences among a great array of bacteria and showed that they were not all related. Extreme bacteria such as methanogens could not be connected to eukaryotes or prokaryotes because they fell within their own category, the archaea.

Example of a Phylogeny Tree for the Domain Eukarya

Constructing the phylogeny tree requires systematists to search for synapomorphies (shared derived characters) and symplesiomorphies (shared ancestral characters). Reading the phylogeny tree: the numbers in the diagram indicate the synapomorphies, or shared derived characters, unique to a group or groups. The diagram shows that there are three domains: Bacteria, Archaea, and Eukarya. The domains Eukarya and Archaea (1) have introns, histones, and an RNA polymerase similar to eukaryotic RNA polymerase. Furthermore, the domain Archaea possesses (2) unique lipid content in its membranes and a unique cell wall composition.
Domain Eukarya: The synapomorphies for Eukarya are (5) a nucleus, membrane-bound organelles, sterols in the membrane, a cytoskeleton, linear DNA with genomes consisting of several molecules, and flagella with a 9+2 microtubular ultrastructure. Based on DNA sequence data, there are four supergroups in this domain: Excavata, Chromalveolata, Unikonta, and Archaeplastida (red algae, green algae, plants).

Supergroup Excavata: There are three phyla in this supergroup: Parabasalia, Euglenophyta, and Kinetoplastida. Euglenophyta lacks a cell wall but has (8) a flexible pellicle within the cell membrane; it has chlorophyll a and b as in plants, which it obtained by secondary endosymbiosis (green lineage). Parabasalia possesses (9) a reduced or lost mitochondrion. Kinetoplastida has (10) a single large mitochondrion (kinetoplast), which edits mRNAs.

Rhizaria's Phylum Foraminifera possesses (11) multichambered shells made of organic material and CaCO3. Synapomorphies for Stramenopila and Rhizaria are (12) chloroplasts acquired by secondary endosymbiosis (red lineage). Stramenopila has (13) two unequal flagella, the longer one tinseled. There are three major phyla of Stramenopila: Phylum Bacillariophyta (diatoms) has (14) cell walls of hydrated silica in an organic matrix, made up of two halves, a "box and lid"; Phylum Phaeophyta (brown algae) has (16) multicellular seaweeds; Phylum Oomycetes (water molds, downy mildews) has (15) a loss of chloroplasts.

Alveolata has (17) a membrane-bound sac under the plasma membrane. There are three phyla: Phylum Dinoflagellata possesses (19) grooved plates of cellulose-like material; Phylum Ciliophora has (20) two types of functionally different nuclei, a macronucleus (controls metabolism) and a micronucleus (functions in sexual reproduction); Phylum Apicomplexa has (18) an apical structure for penetrating host cells.

Supergroup Unikonta has (6) a triple pyrimidine-biosynthesis fusion gene and one flagellum. Amoebozoa has (21) broad pseudopodia.
Its Phylum Gymnamoeba (22) feeds and moves by lobed pseudopodia. Opisthokonta (Fungi and Animalia) possess (23) a posterior flagellum. Fungi have (24) cell walls made of chitin, absorptive heterotrophy, and are multicellular. There are four phyla of Fungi. Aside from Phylum Chytridiomycota (water molds), all other fungal phyla have (25) a loss of the flagellum and a time separation between plasmogamy and karyogamy. Phylum Zygomycota has (26) a heterokaryotic state of reproduction limited to the zygosporangium. Both Basidiomycota and Ascomycota possess (27) conidia, an extensive (n+n) state in size and duration, septate hyphae, and macroscopic fruiting bodies. Phylum Basidiomycota has (28) a long-lived dikaryotic mycelium in the dikaryotic state, meiospores produced in a special cell called the basidium, and predominantly sexual reproduction (asexual spores rare). Phylum Ascomycota has (29) meiospores produced in a special cell called the ascus.

Kingdom Animalia is (30) multicellular and possesses an extracellular matrix with collagen and proteoglycans, and special types of junctions between cells (cell adhesion proteins). Phylum Porifera (sponges) has (31) spicules and an internal aquiferous system. Subkingdom Eumetazoa has (32) body symmetry, primary germ layers (true endoderm and ectoderm), true tissues and organs, epithelial tissue, and nervous tissue. Radiata has (33) primary radial symmetry. Phylum Cnidaria has (34) a mesoglea and cnidocytes (with nematocysts). Bilateria has (35) bilateral symmetry, a body cavity (coelom), mesoderm (triploblastic), and muscle. The two major phylogenetic branches are Protostomia and Deuterostomia. Protostomia has (36) schizocoelous development, and the blastopore becomes the mouth. Deuterostomia has (37) enterocoelous development, indeterminate cell fate, radial cleavage, and the blastopore becomes the anus. Phylum Echinodermata has (38) a water vascular system, tube feet, and radial symmetry in adults (bilateral larvae). Hemichordata has (39) pharyngeal slits at some stage of life.
Phylum Chordata has (40) a muscular post-anal tail, a dorsal hollow nerve cord, and a notochord.
Activity 1: Engaging Students' Background Experience Prior To, During, and After Reading (adapted from a guideline provided in "Learning to Learn from Text: A Framework for Improving Classroom Practice," by Robert J. Tierney and P. David Pearson, from Dishner, Readence, and Bean, READING IN THE CONTENT AREAS: IMPROVING CLASSROOM INSTRUCTION, Kendall Hunt, 1992). Post several large sheets of newsprint on the blackboard or wall at the front of the room. Begin by asking students what they think of when they hear the words "immune system." As students respond, write what they say on the board in columns that reflect categories that you have determined in advance. Some of the categories might include functions, processes, and diseases that affect the immune system. Once students have finished responding, look at the columns together and designate category titles for them. Ask students to read a chapter on the immune system to learn more about it. You can use a text such as HUMAN BIOLOGY AND HEALTH (Prentice Hall, 1993) or THE BODY BOOK (Workman Publishing, 1992). When students have finished reading, return to the set of categories you have written on the board. Ask students to add new terms they have acquired from their reading to the lists compiled before reading. Keep the newsprint posted on the wall for the duration of this unit. As new information is learned through the course of study, add it to the chart. Revisit the chart periodically to summarize the new information.

Activity 2: Using Visual Aids to Understand Descriptions of Immune System Processes. Prepare two handouts, each containing drawings of an immune system process: one showing how antibodies fight disease (from THE HUMAN BODY: YOUR BODY AND HOW IT WORKS, pp. 64-65), the other depicting phagocytes swallowing germs (from THE BODY BOOK, p. 224). Omit the explanatory text that accompanies these illustrations.
Prepare two additional handouts, each of these containing the explanatory text for one of the two processes. Label these handouts. Prepare a set of handouts for each group of four or five students. Divide the class into groups and distribute the handouts. Ask groups to read the descriptions of each process and to match texts with illustrations. Reconvene as a whole class and have groups report back. If there is disagreement about which text belongs with which illustration, ask groups to give evidence that supports their decisions and to point out the elements of the illustration which conform to the description provided by the text.

Activity 3: Using Visual Aids to Understand the Sequence Which Takes Place When a White Blood Cell Encounters Bacteria. Distribute a paragraph describing what happens when a white blood cell encounters bacteria (e.g., p. 194 in HUMAN BIOLOGY AND HEALTH). Ask students to read silently. Divide the class into groups or pairs. Distribute in random order pictures of the process described in the text and ask students to put them in the correct sequence. Reconvene as a class. As groups report back, have them discuss how elements of the text provide support for the sequence upon which they have decided.

Activity 4: Completing a Features Analysis to Compare and Contrast the Different Types of Immune System Cells (charts adapted from Jerry L. Johns and Susan Davis Lenski, IMPROVING READING: A HANDBOOK OF STRATEGIES, Kendall/Hunt Publishing Company, 1997). Prepare Features Analysis Charts with these headings: Kinds and Features. Under the heading "Kinds," different types of immune cells should be listed (e.g., T-cells, B-cells, phagocytes, mast cells). Under the heading "Features," sub-categories of kinds of features should be added (e.g., functions, physiological properties, place of origin, etc.). Divide the class into groups and distribute charts. Have students draw from information in texts on the immune system (see above) to complete charts.
Reconvene as a class. Have groups compare their charts and discuss which parts of the text they drew upon to complete them. As students discuss their charts, create a class chart. Use the chart to write a class comparison/contrast essay examining the features of two types of immune system cells and modeling rhetorical/stylistic elements students will need to practice to prepare for the GED exam. Have students copy the essay into their notebooks to use as a model for writing an at-home composition. Divide the class into groups again. Ask students to use the sections of their texts that describe viruses and bacteria in order to do another Features Analysis Chart, this time comparing and contrasting types of infectious agents. Repeat Step Three.

Activity 5: Completing an Outline on Different Types of Vaccines. Prepare an outline of the section on vaccines from the text you are using (pp. 227-231 in THE BODY BOOK). Provide some of the information in the section and omit some. Here's an example:

Properties of Different Types of Vaccines (all contain low levels of antigens)
- Typhoid Fever and Whooping Cough
- Does not contain bacteria

Have students work in groups to complete the outline, and to include as many relevant descriptors as they can. Ask groups to report back and share outlines. Have students choose one disease to research and write about. Have them write about symptoms produced by the disease, as well as how and when each vaccine was developed. Have students use their Features Analysis Chart on infectious diseases to write a short essay comparing and contrasting two infectious agents.

Research on AIDS and Ebola: Divide the class in half. Have half the class read THE HOT ZONE, and the other half THE GEOGRAPHY OF AIDS. Prepare guiding questions for different sections of the books.
Questions should help students pay attention to the properties of the diseases described in these books (Ebola and AIDS, respectively), including:

- how they are transmitted and how they affect the body.
- how each disease spreads geographically and what has been done to prevent the spreading of the viruses.

As students read these texts, have them use the guiding questions to engage in small-group discussion and in summary and essay writing. When students have finished reading, divide them into groups. Have each group write a report and make a presentation on either the epidemiology or the transmission/replication process of the disease or syndrome.

Comparing Present and Past Plagues: Have students research and write group reports comparing the spread of AIDS with the spread of the bubonic plague. Ask students to include their ideas on how technology and culture affected the geographical scope of the spread as well as the treatment of plague victims.
Single-cell organisms evolve around underwater volcanic vents, and many of them contain photoreceptor proteins. It is 3,550 million years ago; if we compared the Earth's history to a human lifetime, the Earth would be just 25 years old. This is a short summary of the 150 million years around this "birthday." The first known oxygen-producing bacteria had appeared just before this date, a trifling 50 million years earlier in fact, but by now, this "Day 38" in the life of the Earth's history, we had arrived at the single-cell level. Volcanic vents under the Earth's oceans allow mineral-rich gas to dissolve in the sea water and provide sustenance for these single cells, an arrangement that prospers to this day. Within some of these cells, proteins reacted to photons of light, and this was going to be the start of something big, much, much later, in complex life: not just reacting to light photons, but eventually full-bodied, colour-perceptive sight in a small frequency range, what we call the "visible" light spectrum.

If we now tack on another 50 million years, not much has changed, but above the ocean surface large areas of land are visible, and Australia is part of a large continent at the south pole. Much later it breaks away, leaving Antarctica to freeze in the cold currents that eventually encircle it. By the passing of another 50 million years, Australia is still firmly attached, but Cyanobacteria (blue-green algae) have begun mass-producing oxygen. Some, living in stromatolites, a sort of rock cabbage, still live in the salty water of Shark Bay in Western Australia, still consuming hydrogen and pumping out the waste product oxygen that was to prove so poisonous to the only life the planet had so far produced. We will cover all of these in much more detail soon.

If you would like to help with this grand project, this project of great imagination, this almost impossible project of writing "the entire history of the universe and the earth" in chronological order, why not consider joining us?
We could use your help, and your comments and guidance may be of great importance.
What else to consider before entering a "simple" conversation lesson?

6. Provide feedback at the end of the task: This can be done in open class straight away, or first in pair-work or group-work and then in open class. Traditionally we consider two types of feedback.

– content feedback: emphasizes good ideas, solutions to the problem, or surprising statements, and shows the students that they were listened to during the task.

– language feedback: points out (correct or incorrect) grammar and lexical structures used by the students and aims at motivating them (in the first case) or making them notice their mistakes (in the latter).

Classical ways of giving language feedback are:

– the teacher writes three sentences he/she heard from students, two incorrect examples and one correct one; students find the incorrect ones and correct them (in groups, pairs, or in open class);

– the teacher elicits how to express an idea, collecting good examples from students (How can you refuse an invitation politely?);

– the teacher points out typical mistakes and clarifies them, e.g. in an auction game (probably in the next lesson, since it needs preparation) or in an error-soccer game.

7. Repeat the task: Giving students the same task again helps them unload the initial pressure of thinking about content, finding the right form, and pronouncing their ideas, and it also gives them the opportunity to do the same activity better than the first time, since they know what mistakes to avoid and what lexical items to use. In task-based teaching, the teacher gives the task at the beginning of the lesson (as a fluency exercise) and checks what students know about the target language; then, after language teaching, he/she has the students repeat the task, and this time the teacher monitors for the accurate use of the target language.
In a classical conversation lesson, the students can be given the same task (possibly with new partners) after the feedback session (if time allows), and the teacher is supposed to monitor this second time for the language points discussed previously. An excellent way of finishing a conversation lesson is to insert a quick game (a board or card game) which recycles the same language points, but in a fun way.

8. Recycle your materials: Everything you prepare today can and will be useful in a future lesson. After some time, every teacher has his/her ready-to-use ideas for every level, as well as jolly-joker topics per level for substitution lessons. So from the beginning, follow these guidelines:

– prepare your materials on the PC if possible, and save the files using clear names that help you to identify the file later on. Names like "conv.q.elem" or "traffic_int" are not really useful, since you don't know what topic you dealt with in the first case and what type of activity you did in the second. Use names like "int_traffic_vocab" or "int_traffic_conv_q", so you can see that in the first one you dealt with lexis linked to traffic and in the second you prepared questions for a discussion about traffic, both at intermediate level. This way, you can always find these materials on your PC and reuse them.

– laminate the best exercises: if an exercise has worked very well three or four times, you might want to make the prompts (pictures, maps, cards, question slides, etc.) more lasting and laminate them. This way, even if many students hold them in their hands in many lessons, they will stand the test of time. Consider how long it takes to cut up the conversation cards every time you need them: you save all this preparation time in the future.
– file your materials: have (shoe) boxes ready for every level to keep your materials in, and have folders for the best exercises you want to use in the future. This way you have them ready for any emergency or for normal lesson preparation, which again saves you a lot of time.

9. Evaluate your lesson: Ask your students/yourself at the end of the lesson what they have learnt. Every conversation lesson (as every lesson) needs target language, an aim explaining why we teach what we teach. Students need to be able to express something extra orally before they leave the classroom. If this is the case, then you did a conversation lesson. If the answer is merely that they spoke a lot, then you've only had a lovely chat with them; next time, at least offer them some coffee.

The good news is that you will find plenty of materials ready to print (even for free) online. You just have to type into your search engine the topic, level, and some keywords for the type of activity you want to do with your class, and you will find something: www.onestopenglish.com and www.busyteacher.org are only two of the many websites that offer help to teachers. Save them among your favorites and start creating your own materials. And above all, have fun with your conversation classes.
3 Mass

Mass is the quantity of matter which an object contains. It is a scalar quantity. Weight is a force: it is the gravitational pull of the earth on an object. The SI unit of weight is thus the newton. The standard unit of mass is the kilogram. The kilogram is the mass of a particular cylinder of platinum-iridium alloy, kept at the International Bureau of Weights and Measures at Sèvres, near Paris. Submultiples are:

1 microgram = 1 µg = 10−6 g = 10−9 kg (mass of a very small dust particle)
1 milligram = 1 mg = 10−3 g = 10−6 kg (mass of a grain of salt)
1 gram = 1 g = 10−3 kg (mass of a paper clip)

The mass of an object remains the same everywhere, but the weight of an object varies from place to place across the surface of the earth. The weight of an object increases slightly as we go from the equator to the poles, and decreases as we go further away from the surface of the earth. Chemical balances, lever balances, and some direct-reading balances are used to measure the mass of an object. A spring balance is used to measure the weight of an object.
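The distinction between constant mass and location-dependent weight follows from W = mg and can be illustrated in a few lines. The g values below are approximate textbook figures chosen for illustration:

```python
# Weight W = m * g: the mass m stays the same, g varies with location.
# The g values are approximate figures in m/s^2.

def weight_newtons(mass_kg, g):
    """Gravitational force on a mass, in newtons."""
    return mass_kg * g

m = 70.0                               # kg: identical everywhere
print(weight_newtons(m, 9.780))        # at the equator
print(weight_newtons(m, 9.832))        # at the poles: slightly heavier
print(weight_newtons(m, 8.68))         # ~400 km above the surface: lighter still
```

The same spring balance would therefore give three different readings for the same object at these three locations, while a lever balance (which compares masses) would not.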
Chicano Movement. Riding the wave of the American Civil Rights movement, the Chicano Movement was not far behind. While it is well known that African American citizens were discriminated against heavily until Martin Luther King Jr. led a protest against it, it is much less common knowledge that Mexican Americans and other Americans of Latin descent experienced a similar problem.

Adrian Reyes, Consuelo Lopez, MAS 141, 5 May 2014: Art in the Chicano Movement. The Chicano movement began in the 1960s around many social problems that minorities wanted to raise awareness of and fix. The Chicano movement is also called "El Movimiento." The movement focused on political and civil rights that people felt were not being addressed. While the students who organized and carried out the protests were primarily concerned with the quality of their education, they were also motivated by the high minority death toll in the Vietnam War and the ongoing civil rights campaigns of the Chicano Movement. The Chicano Student Movement was part of the Mexican American civil rights movement.

Chicano - a political term made popular in the sixties with the Chicano Civil Rights Movement, which followed the example of the Black Civil Rights Movement. The people of the Movement adopted the word Chicano for themselves just as African Americans had adopted Black.

The Chicano Movement, also known as El Movimiento, was one of the many movements in the United States that set out to achieve equality for Mexican-Americans. The Chicano Movement began in the 1940s as a continuation of the Mexican American Civil Rights Movement, but built up strength around the 1960s after Mexican-American youth began to label themselves Chicanos. The Chicano Civil Rights Movement is an extension of the Mexican American Civil Rights Movement, which began in the 1940s with the stated goal of achieving Mexican American empowerment. Farm workers' rights were a central issue in the mission of the Chicano Movement; the government made laws about the legal importation of migrant farm workers.

Civil rights activism in Washington's Mexican-American community has its roots in two locales, the rural agricultural communities of the Yakima Valley and the Seattle urban area. This dualistic geography is reflected in the movement's activities, which by the late 1960s united the farm workers' struggle in the eastern portion of the state with campaigns targeting community and educational issues.

Mexican Americans have fought for rights, dignity, and cultural freedom since 1846. In the 20th century the fight took new forms. Founded in 1929 and modeled after the NAACP, LULAC and later the Mexican American Legal Defense Fund and the American GI Forum operated as classic civil rights organizations, using persuasion and legal action to defend Mexican Americans.

Essay: The History of Chicano Music. Both my father and my uncle were in their prime during the 1960s and 70s, during the Chicano Movement. My father had me growing up listening to dedications on Art Laboe's Killer Oldies every Sunday night. My uncle has traveled throughout California with bands of his own since the 1970s.
The civil rights movement comprised efforts of grassroots activists and national leaders to obtain for African Americans the basic rights guaranteed to American citizens in the Constitution. The key players in the success of the civil rights movement were the soldiers returning from the war, Martin Luther King Jr., Malcolm X, the Student Nonviolent Coordinating Committee (SNCC), and the anti.
CMST& 102 Introduction to Mass Media • 5 Cr. Examines the operation and impact of American media. Students analyze media influence on society and the relationships among media, audience, and government. Current events and issues are discussed. After completing this class, students should be able to:
- Analyze the impact of media messages on American culture, values, and the political process.
- Describe the historical and economic forces that shaped and continue to shape mass media.
- Explain the significance of the First Amendment and its relevance to current affairs.
- Compare and contrast the American commercial media system with non-commercial media in the United States and other countries.
- Analyze how content is shaped by the nature of particular media.
- Apply media effectively to communicate with a particular audience.
This sequence of nine true-color, narrow-angle images shows the varying appearance of Jupiter as it rotated through more than a complete 360-degree turn. The smallest features seen in this sequence are no bigger than about 380 kilometers (about 236 miles). Rotating more than twice as fast as Earth, Jupiter completes one rotation in about 10 hours. These images were taken on Oct. 22 and 23, 2000. From image to image (proceeding left to right across each row and then down to the next row), cloud features on Jupiter move from left to right before disappearing over the edge onto the nightside of the planet. The most obvious Jovian feature is the Great Red Spot, which can be seen moving onto the dayside in the third frame (below and to the left of the center of the planet). In the fourth frame, taken about 1 hour and 40 minutes later, the Great Red Spot has been carried by the planet's rotation to the east and does not appear again until the final frame, which was taken one complete rotation after the third frame. Unlike weather systems on Earth, which change markedly from day to day, large cloud systems in Jupiter's colder, thicker atmosphere are long-lived, so the two frames taken one rotation apart have a very similar appearance. However, when this sequence of images is eventually animated, strong winds blowing eastward at some latitudes and westward at other latitudes will be readily apparent. The results of such differential motions can be seen even in the still frames shown here. For example, the clouds of the Great Red Spot rotate counterclockwise. The strong westward winds northeast of the Great Red Spot are deflected around the spot and form a wake of turbulent clouds downstream (visible in the fourth image), just as a rock in a rapidly flowing river deflects the fluid around it. The equatorial zone on Jupiter is currently bright white, indicating the presence of clouds much like cirrus clouds on Earth, but made of ammonia instead of water ice. 
This is very different from Jupiter's appearance 20 years ago, when the equatorial zone had more of a brownish cast, similar to the region just to its north. At the northern edge of the equatorial zone, local regions colored a dark grayish-blue are places where the ammonia clouds have cleared, allowing a view to deeper levels in Jupiter's atmosphere. Interrupting these relatively clear regions is a series of bright, arrow-shaped equatorial plumes. The most obvious one is visible just above and to the right of center in the third and ninth frames. These plumes resemble the "anvil" clouds that accompany common summer thunderstorms on Earth, although the Jovian plumes are much bigger, and their somewhat regular spacing around the planet suggests an association with a planetary-scale wave motion. The southwest-northeast tilt of these plumes suggests that the winds in this region act to help maintain the eastward winds at this latitude. In the dark belt north of the equatorial zone, a turbulent region with a white filamentary cloud is visible in the sixth frame, indicating rapidly changing wind direction. Several white ovals are visible at higher southern latitudes (toward the bottom of the fourth, fifth, and sixth frames, for example). These ovals, like the Great Red Spot, rotate counterclockwise and are similar in some respects to high-pressure systems. When these images were taken, Cassini was about 3.3 degrees above Jupiter's equatorial plane, and the Sun-Jupiter-spacecraft angle was about

JPL manages the Cassini mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology in Pasadena. Credit: NASA/JPL/University of Arizona. (PIA02825A)
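The frame-to-frame motion described in the caption can be checked with a quick back-of-the-envelope calculation. The ~10-hour rotation period and the 1 hour 40 minute gap between the third and fourth frames come from the caption; the rest is simple arithmetic:

```python
# Jupiter rotates once in about 10 hours (per the caption).
# How far does a cloud feature rotate between two frames taken
# about 1 hour 40 minutes apart?
rotation_period_min = 10 * 60  # ~600 minutes per rotation
frame_gap_min = 100            # 1 h 40 min between frames

degrees_rotated = 360 * frame_gap_min / rotation_period_min
print(f"{degrees_rotated:.0f} degrees of longitude")  # 60 degrees
```

Roughly a sixth of a full turn between those two frames, which is why the Great Red Spot is carried well to the east and out of view until one full rotation later.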
Differentiate between a DBMS and an IRS by focusing on their functionalities. A Database Management System (DBMS) is a software system that uses a standard way of classifying, retrieving, and running queries on data. The DBMS's function is to manage any incoming data, organize it, and provide ways for the data to be modified or extracted by users or other programs. Some examples of DBMSs are PostgreSQL, Microsoft Access, SQL Server, FileMaker, Oracle, Clipper, and FoxPro. Since so many database management systems are available, it is important to ensure that they can communicate with each other.

Differentiate between a database management system and an information retrieval system by focusing on their functionalities. Answer: A database management system (DBMS) is the main software tool of the database management approach because it controls the creation, maintenance, and use of the databases of an organization and its end users. A DBMS performs several functions to ensure data integrity and consistency of data in the database. There are ten functions of a database management system: data dictionary management, data storage management, data transformation and presentation, security management, multiuser access control, backup and recovery management, data integrity management, database access languages and application programming interfaces, database communication interfaces, and transaction management. These functions differentiate the DBMS from an information retrieval system. The DBMS has the ability to store, update, and retrieve data. This is its main function, because a database is only useful once records have been stored in it; a record must first be retrieved before the database administrator can change it, at which point the record has been updated. The DBMS also protects the structure of the data.
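The store/retrieve/update cycle described above can be sketched with Python's built-in sqlite3 module. The table and column names here are illustrative, not taken from any particular system:

```python
import sqlite3

# In-memory database; a real DBMS would persist this to disk.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Store: create a table and insert a record.
cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO student (id, name) VALUES (?, ?)", (1, "Alice"))

# Retrieve: the record must be read before it can be changed.
cur.execute("SELECT name FROM student WHERE id = ?", (1,))
print(cur.fetchone()[0])  # Alice

# Update: modify the stored record.
cur.execute("UPDATE student SET name = ? WHERE id = ?", ("Alicia", 1))
conn.commit()

cur.execute("SELECT name FROM student WHERE id = ?", (1,))
print(cur.fetchone()[0])  # Alicia
```

An information retrieval system, by contrast, would match a free-text search statement against stored documents rather than executing structured queries like these.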
b) DBMS (Crucial Concept): The DBMS is responsible for all database activities (storage, retrieval, indexing, etc.) and also for keeping a detailed description of the data being held. A DBMS is a program that helps users communicate with the operating system through an interface in order to access data from a database conveniently and as quickly as possible. It allows users to store, retrieve, and update information quickly and productively. A DBMS must be able to recover the database in case of system error and needs an organized system for handling security issues.

c) Metadata - Data that Describes Data. Metadata is data about the data being held in a database.

The OODBMS is the product of merging object-oriented programming principles with database management principles. Object-oriented programming concepts such as encapsulation, polymorphism, and inheritance are enforced, as well as database management concepts such as the ACID properties (Atomicity, Consistency, Isolation, and Durability), which pave the way for system reliability. An OODBMS also supports an ad hoc query language and secondary storage management systems, which allow it to manage very large amounts of data. The object-oriented database manifesto lists the following features as compulsory for a system to support before it can be called an OODBMS: composite objects, object identity, encapsulation, types and classes, class or type hierarchies, overriding, overloading and late binding, computational completeness, extensibility, persistence, secondary storage management, concurrency, recovery, and an ad hoc query facility. Given this description, an OODBMS should be able to store objects that are nearly indistinguishable from the kind of objects supported by the underlying programming language, with as few limitations as possible. Persistent objects should belong to a class and can have one or more atomic types or other objects as attributes.
A database is a collection of related data with an implicit meaning. The collection of data, usually referred to as the database, contains information directly relevant to an enterprise. The main goal of a DBMS is to provide a way to store and retrieve database information that is both convenient and efficient. By data, we mean known facts that can be recorded and that have an implicit meaning. For example, consider the names, telephone numbers, and addresses of the people we know.

Tables have key fields, which can be used to identify unique records, and keys relate tables to each other. The rows of a relation are also called tuples, and there is one tuple component for each attribute - or column - in that relation. A relation or table name, along with that relation's attributes, makes up the relational schema. Relational database models are server-centric.

Handling concurrent updates is crucial to make sure the updates are made in the right way and the end result is accurate. A DBMS can analyze user queries and represent them in a form that is compatible with the database. It can also perform the recovery process so that information remains secure and safe. In an information retrieval system, by contrast, the search statement is matched against the stored database.

HIGHLIGHT THE DIFFERENCE BETWEEN DATA AND INFORMATION (Figure 1). There is a difference between data and information in terms of meaning.

2. Security management. The DBMS creates a security system that enforces user security and data privacy in the database. Security rules determine which users are able to access the database, which data items each user may access, and which data operations (read, add, delete, or modify) the user may perform.

3. Reading and writing of the data is performed via system buffers.
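The idea that key fields identify unique records and that keys relate tables to each other can be sketched with sqlite3. The table names and data here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Each table has a key field identifying unique records (rows = tuples).
conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT)")

# A foreign key relates the employee table to the department table.
conn.execute("""
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT,
        dept_id INTEGER REFERENCES department(dept_id)
    )
""")
conn.execute("INSERT INTO department VALUES (10, 'Research')")
conn.execute("INSERT INTO employee VALUES (1, 'Alice', 10)")

# A join follows the key relationship between the two relations.
row = conn.execute("""
    SELECT e.name, d.name FROM employee e
    JOIN department d ON e.dept_id = d.dept_id
""").fetchone()
print(row)  # ('Alice', 'Research')
```

The table name plus its attributes (`employee(emp_id, name, dept_id)`) is exactly what the passage calls the relational schema.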
Among the functions of a DBMS is query processing (SQL) and optimization: query optimization determines the optimum strategy for executing a query. Next is security control: a DBMS includes mechanisms to prevent unauthorized access to the database. Data integrity in a DBMS includes the facility for enforcing integrity constraints whenever a change is made to the data, to ensure that the database stays consistent and free of errors. Lastly, there is recovery: the DBMS must take steps to ensure that if the database fails, it is returned to a consistent state so that data is not lost.
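The recovery function described above can be illustrated with a transaction rollback, again using Python's sqlite3. The account table and the simulated failure are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 50)")
conn.commit()  # last known consistent state

# A transfer must be atomic: if any step fails partway through,
# the DBMS rolls the database back to its last consistent state.
try:
    conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
    raise RuntimeError("simulated system failure")  # crash between the two updates
    conn.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
    conn.commit()
except RuntimeError:
    conn.rollback()  # recovery: undo the partial update

balances = [r[0] for r in conn.execute("SELECT balance FROM account ORDER BY id")]
print(balances)  # [100, 50] -- the consistent state is preserved
```

The rollback restores the last committed state, which is what the passage means by returning the database to a consistent state after a failure, with no data lost and no half-applied transfer.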
A science abstract illustration is a computer-generated picture or image of an object that carries scientific information. It is created by software packages that let users view the object as if it were a real-world object. The software produces a virtual image of, for example, a plant, a skull, a sculpture, or a diamond ring. A science example can be useful for educational purposes, such as designing tools or teaching students about the scientific method and the development of theories.

A science example is made in two steps. The first step involves creating the image of the object; this image serves as the basis for the initial scientific idea. The second step involves creating a simple description of the scientific image. The text description includes all of the important facts, data, and information about the object shown in the graphic.

Scientists characterize the information contained in images in two ways. The first is through language that describes the size, shape, texture, and color of the object, along with its other aspects; this type of language is called description. In the second method, scientists describe the scientific information itself. For instance, they might describe the texture of an object using language like "a metallic look to its surface."

A scientific example is also helpful for students and teachers who want to learn concepts. To do so, they use the computer-generated example to help explain the scientific information and the items contained in the picture. There are several ways to produce a science abstract example. If you would like to create a digital image for a student's project, here are a few tips. Your first step should be to choose what topic will be addressed in your project.
For example, if you are teaching your class about marine life, you will need to produce an image of a sea turtle or a shark. These tips can help you create the scientific picture for your project.

One sure way to create a good scientific picture is to think about what information viewers will be able to learn from the scientific data. For example, if you are teaching about how a sea turtle uses its flippers, you should focus on how the movement of the flippers is related to the water flow around the turtle's body. Once you have determined what information viewers can learn, you can choose the area of the photograph on which you want to concentrate. If the project concerns the physical appearance of the turtle, you may be able to find a more general area to focus on.

The next tip for producing a scientific image is to include supporting material. For example, if you are going to show how the movement of the flippers is related to the water flow around the turtle's body, you may want to include a large photo of the flippers together with the image you plan to use as the written text description. This will make it easier for students to understand the relevant details.

When creating a project using a science abstract example, you will probably want to give students the option to jump backward or forward to different areas of the project. You can allow students to look at several graphics at a time. This helps them make comparisons between the pictures and develop their own ideas based on them. You will also want to create a science abstract example that has many pictures; you will need the ability to highlight different images and change them as you modify the areas of the experiment.
This helps students remember which pictures are related to the project and which are not. You can also include visible text to help them remember which images they should pay attention to and which they should not. As you can see, a science example is not hard to make. You just need to follow a few basic steps, and keep a few key tips in mind, to get the best results and create good scientific images.
Scholars watch a video of people reflecting on their experience with an earthquake. As they watch the video today, they reflect on the following questions: 1. What is the narrator's perspective regarding earthquakes? 2. How does that influence the way in which they describe the event? 3. What evidence from the video supports your response? After the video I give them 2 minutes to jot down their thinking. I imagine they will have a bit of a tough time, so I may do a bit of the following think-aloud to support them: "The narrators seem to think that earthquakes are surprising. This influences the way that they describe the event because they describe the effects of the surprise. For example, one narrator describes her cat being sprawled on the floor when the earthquake happened." Here is an example of a scaffold to help answer the questions above. Then, they have 1 minute to share with a friend, and I pull 2 friends from my cup and take 1 volunteer. We do a cloze re-reading of pages 29-31 of Earthquake Terror. When we do cloze readings, we all have access to the same text and I read aloud, pausing on certain words. Scholars read the words out loud as I pause upon them. This enhances engagement and gives all students access to the text. I emphasize that although we are reading the same passage that we did earlier this week, we are reading to do something different. Today, we answer and think about the following questions as we read: 1. Who is the narrator? 2. What important events are described? 3. How does the narrator's perspective influence the way in which events are described? We discuss the answer to the first question (the narrator is a third person who is telling the story from Jonathan's perspective). I model recording the answers to the second and third questions on the graphic organizer. Scholars copy my response onto their graphic organizer so that they have a model when it is their turn to work. Here are scholars taking notes on the graphic organizer.
Scholars break into heterogeneous, pre-assigned (by teacher) partnerships. They move to any place in the room, re-read pages 32-35 of Earthquake Terror, and record the important event and the way in which the author describes the event. Some students may need an explanation for why we are re-reading, and you can always share that each time we read, we gain something different from the text. Also, if we've already read something, it is easier for us to think deeply about that text the second or third time we read. During Partner Reading, scholars re-read and record the important event and the way the author describes the event for the following sections: I pre-assign pages because I want to make sure that they pick out the most important events. This is a type of scaffold for all scholars. Also, determining how the narrator's perspective influences the way in which events are described is a multi-step skill, and scholars need mastery of many strategies to do this successfully. Therefore, I provide more support. As scholars are reading to one another, my ELL co-teacher supports 1-2 groups who need the most intense support (scholars who recently moved to the USA). I circulate, ask questions, ensure that engagement is high, and do quick on-the-spot checks for understanding. Scholars have 20 minutes to do this. After 20 minutes they return to their seats and answer the following question in tables: How does the narrator's perspective influence the way in which events are described? Scholars practice PCR (Prose Constructed Response) responses in notebooks, and 1 person from the group takes notes on their dry erase board. If time allows, scholars will do a gallery walk (get up, walk around, and agree or disagree with other scholars' responses). During this time in my first class, scholars rotate through 2 stations. *In my second class, the ELL teacher pulls the Socratic Seminar group and teaches pre-skills that help them to be successful with author's perspective.
I pull the 5 scholars who are currently reading on a 2nd grade reading level and support them with their Checklist work. I then spend 20 minutes targeting a phonics skill and practicing fluency with books that are on their level. In my first class, I start the time by reviewing our Checklist items for the week and explicitly state what should be completed by the end of the day. This holds scholars accountable to their work thereby making them more productive. Then, the ELL teacher and I share the materials that our groups will need to be successful (i.e. a pencil and your book baggies). Then, I give scholars 20 seconds to get to the place in the room where they will be for the first rotation. The first scholars who are there with all materials they need receive additions on their paychecks or positive PAWS. During the rotations for this lesson, my small group objective today is to identify the narrator and important events in books that are on each group's highest instructional level. Scholars read a portion of the same book (different for each group depending on reading level, but the same text is read in each group). Then we discuss narrator's perspective and important events. For my higher groups, we will actually compose a PCR response. After the first rotation, I do a rhythmic clap to get everyone's attention. Scholars place hands on head and eyes on me so I know they are listening. Then they point to where they go next. I give them 20 seconds to get there. Again, scholars who are at the next station in under 20 seconds with everything they need receive a positive PAW or a paycheck addition. We practice rotations at the beginning of the year so scholars know if they are back at my table, they walk on the right side of the room, if they are with the ELL teacher, they walk on the left side of the room and if they are at their desks, they walk in the middle of the room. This way we avoid any collisions. 
At the end of our rotation time I give scholars 20 seconds to get back to their desks and take out materials needed for the closing part of our lesson. Timing transitions helps to make us more productive and communicates the importance of our learning time.
Frogs and toads (Order: Anura) live in freshwater habitats around the world. They’re also becoming increasingly popular as pets. Owners must provide their frogs with the correct nutrients to survive and thrive. Feeding a proper diet requires understanding their life cycle and ecology (how they interact with other organisms and their environment). In this article, we’ll answer some critical questions about the anuran diet, such as:
- What do frogs eat?
- What do tadpoles eat?
- How often do frogs require food?
- Which foods are most nutritious for frogs?
- How are frogs adapted to catch and eat their prey?
…and much more!

General Frog Diet

Adult frogs are predators, meaning that they eat other animals. The vast majority of adult frogs gain little or no nutrition from plant material. Most species feed on insects and other invertebrates. Adult frogs can be aquatic, semi-aquatic, or terrestrial. Terrestrial and semi-aquatic frogs feed primarily on a diet of worms, insects, and arachnids. Different foods are available in the aquatic environment, so aquatic frogs’ diets differ from those of terrestrial species. Aquatic prey items include worms, aquatic insect larvae, small fish, and fish fry. Frogs are – for the most part – generalist predators. They will eat any animal that fits into their mouth! Large anurans can (and will) take on small mammals, other amphibians, and even snakes! Some frogs, such as the African clawed frog, have somewhat unusual preferences, and some species have unique adaptations that allow them to catch certain prey. We’ll discuss “Frog Feeding Behaviour and Adaptations” in detail later on. Though they aren’t fussy eaters, frogs and toads do have a couple of specific dietary requirements. 1) Anurans require live food to survive. They rely on their eyesight to distinguish predator from prey. Many species will only eat prey items that move. Frog owners should be prepared to keep live insects in their homes.
If this thought makes you uncomfortable, then a frog may not be the pet for you! 2) Anurans require appropriately sized foods. Frogs and toads are unable to chew their food or tear off chunks. Items that are too large for your frog can cause health complications or even death. This information is especially important when feeding your frog live foods, as the tables are easily turned! Some insects – such as diving beetles and their larvae – will happily make a meal out of small frogs and tadpoles. Tadpole diets can also differ significantly from those of adults. We will discuss the specific diets of tadpoles in the next section. What Do Tadpoles Eat? Tadpoles – also known as “larvae” – usually have a different diet to adult frogs. Wild tadpoles tend to be more specialized than adult frogs in terms of diet. There are many general categories that species can fall into. Some are herbivores, feeding primarily on algae. Herbivorous tadpoles may filter algae from the water column or graze underwater surfaces. Others are omnivores, feeding on small animals as well as algae. Many tadpoles are said to be detritivores, meaning that they feed on decomposing organic matter. Tadpoles can also be voracious predators, feeding on small animals and their eggs. Larvae of some species can even be cannibalistic! Fun Fact: Tadpole intestines are shaped like a long, winding coil. As they mature into frogs, their gut changes and becomes adapted for digesting large, meaty prey. In some species, the gut can shorten by as much as 75%! In the egg, tadpoles rely on nutrients from yolk. These nutrients are enough to sustain young tadpoles for a few days after hatching. Once they’re ready to feed, most captive tadpoles will eat specially formulated pellets. Each tadpole will require around one pellet per day for the first four weeks or so. Tadpoles will start to undergo metamorphosis after around one month. Once their legs begin to emerge, they will require less food. 
Two or three pellets per week are usually sufficient. Eventually, your tadpoles will lose their tails and become fully-fledged frogs! This transformation means that it’s time to ditch the pellets, as your frogs make their switch to whole, adult foods. Learn all about the life cycle of a frog here. In the wild, frogs can eat hundreds – even thousands – of prey per day! It would be impossible to replicate this diverse diet in captivity. Keepers must instead focus on quality over quantity. Many “feeder” species are appropriate for adult amphibians in captivity. Each provides a slightly different balance of nutrients. It’s important to understand the nutritional needs of your frog and which feeder species will satisfy them. In the next few sections, we’ll help you to understand frog nutrition. We’ll run through all of the major nutrient requirements. We’ll also evaluate which feeder insects provide the highest amounts of specific nutrients. Macronutrients are the nutrients required in the largest quantities by animals. These are proteins, fats (lipids), and carbohydrates. They provide energy and help to maintain the functioning of organ systems throughout the body. Protein is an essential nutrient for all animals. It’s primarily used by the body to build and maintain muscle. It also provides energy and supports organ function. As carnivores, frogs tend to have naturally protein-rich diets. Insects usually contain 30-60% protein by weight. The diet of an insectivorous frog should also consist of around 30-60% protein. Most keepers don’t have to worry about their amphibians falling short on protein. Just make sure to choose appropriate foods for your species. Amphibians – like humans – require fats (lipids) and fatty acids in their diet. Fats and fatty acids provide the building blocks for important chemicals (such as hormones) and are an integral source of energy. They also aid in vitamin absorption and provide cushioning for the internal organs. 
Insects consist of around 10-30% fat by weight. Unlike birds and mammals, insects contain primarily unsaturated fats. These are particularly important for cholesterol regulation and maintaining cell structure. An excess of lipids in your frog’s diet can lead to obesity. Obesity is a common problem among new frog owners. Expert Tip: Some feeder species – particularly cockroaches – can lack certain essential fatty acids. A diverse diet is vital to keeping your frog healthy. Frogs rely less on carbohydrates for energy (relative to humans) and more on protein and fat. One type of carbohydrate does have an essential role in the amphibian digestive system. The main carbohydrate found in most invertebrates is chitin (the “crunchy stuff” that makes up their exoskeleton). Some frogs and toads may digest chitin for energy. More importantly, it serves as “fiber” – aiding digestion by moving other material through the gut. Too much fiber – of any kind – can be problematic for amphibians. A diet too rich in fiber can lead to intestinal blockage. Vitamins and Minerals Animals require small amounts of vitamins and minerals (in addition to larger quantities of macronutrients). These nutrients are just as crucial for long-term health, but only need to make up a small part of the diet. Some vitamins and minerals are obtained from insect “ash.” Ash is a term used to describe all parts of the insect that aren’t protein, fat, or fiber. Ash provides some of the necessary nutrients (aside from macronutrients) required by your frog. It’s still not enough to sustain captive frogs entirely. All owners will need to supplement their frogs’ diet to keep them healthy. You can do this in one of two ways: 1) Gut loading – Feeding feeder insects nutrient-rich food or supplement immediately before feeding them to your frog. The contents of their gut will be digested by the frog when eaten, providing a nutrient boost.
2) Dusting – Coating feeder insects with nutritional supplements before feeding them to your frog. Place insects in a container with a small amount of supplement powder and give it a quick shake to coat. Calcium and Phosphorus Calcium and phosphorus are the primary nutrients lacking from captive amphibian diets. Both are essential minerals for amphibian and reptile health. Healthy calcium and phosphorus levels ensure proper nervous system functioning and bone growth. It’s vital to manage your frog’s ratio of calcium to phosphorus, as excess phosphorus can interfere with calcium uptake. If phosphorus builds up to dangerous levels, calcium levels will, in turn, drop. Low calcium is a major cause of metabolic bone disease (MBD). MBD is a condition in which bones become weak and brittle due to a lack of calcium. In amphibians, MBD is most often a dietary issue, resulting from a lack of calcium or a low Ca:P ratio. To learn more about metabolic bone disease, read our article on MBD in bearded dragons. The ideal calcium to phosphorus ratio (Ca:P) is 1.5:1. When feeding frogs an insect-based diet, it can be challenging to achieve a healthy Ca:P. Most insects have a low Ca:P (meaning more phosphorus than calcium). Supplementation – in the form of calcium powder – is almost always necessary for captive frogs. As we mentioned earlier, there are several common “feeder” species. Each contains different amounts of essential macronutrients, along with key vitamins and minerals. Some feeder species can make up a large part of your frog’s diet. Others should only be an occasional treat. Of course, the size of your frog is also an important factor in feeder insect selection. Whatever you decide to feed your frog, a diverse diet is key to maintaining good health. Among the most common feeder species are:
- Earthworms and nightcrawlers
- Crickets
- Springtails and fruit flies
- Caterpillars
- Roaches
- Beetle larvae
Earthworms and nightcrawlers are good sources of calcium for frogs. They have a high Ca:P ratio.
You may collect worms from the wild or purchase them from reptile stores. Avoid using worms raised for fishing bait. These are often artificially scented or dyed using chemicals that are harmful to amphibians.

Crickets are a prevalent and versatile source of food for amphibians and reptiles. There are many different varieties – including calcium-rich options and even “micro” crickets for tiny frogs. Most crickets contain around 18% protein and have a Ca:P of around 1:9.

Springtails and fruit flies (or flightless varieties) are manageable options for smaller frogs. Fruit flies are a fairly nutrient-poor food without gut loading. Springtails are potentially higher in nutrients, though data on the subject is minimal.

Caterpillars, such as silkworms and hornworms, are also excellent feeder species. They have high Ca:P ratios. Be mindful that they also have lower protein contents than most other feeder options.

There are many different roach species available on the feeder market. Death’s head, dubia, and discoid roaches are among the most common. Many species are too large for smaller amphibians. Discoid and dubia roaches, in particular, are incredibly nutritious. They possess high protein contents (around 20%) and a comparatively high Ca:P ratio of around 1:3.

Beetle larvae (or grubs) include mealworms, waxworms, butterworms, and superworms. Grubs are usually better offered as an occasional treat than as a consistent food source. Many species are high in fat and phosphorus. This table of feeder nutrition facts is a valuable resource for frog owners looking to learn more about feeder nutrition.

Keepers may offer mice as an occasional treat for some of the largest frog species. Rodents are high in fat and calories, so it’s vital to avoid overfeeding.

Feeding Schedules: How Much and How Often?

Feeding schedules and amounts vary based on the size and behavior of different species. Be sure to research the ideal feeding schedule for your chosen species before bringing one home.
Generally speaking, most frogs receive a limited amount of food every other day or a few times per week. More active species, such as tree frogs, may require constant access to food. They will feed as they please, so make sure that there are always a few insects present in their environment. Sedentary species, like Pacman and Pixie frogs, prefer larger meals less often. These species are particularly prone to obesity.

There is no easy answer to the question of how much to feed your frog. Ideal amounts can vary based on age, size, and species. Again, it’s crucial to research ideal feeding protocols for your species and adjust these where necessary. If your frog begins to gain weight rapidly, it’s likely a sign that they should cut back on the grubs!

Do Frogs Eat During Winter?

Short answer: Yes. In the wild, some frog and toad species undergo a period of dormancy over winter. This behavior is known as brumation. The frog will usually wait out the colder months in an underground hollow or buried in an insulating substrate. Unlike hibernating animals, frogs may emerge periodically from brumation to feed. They do this when the ambient temperature reaches a comfortable level. Frogs will then return to their dormant state once temperatures drop.

How to Feed Pet Frogs

Feeding techniques vary by species. For many terrestrial species, live prey items should be dusted or gut-loaded and placed in the terrarium. Frogs are natural predators and will catch and eat the feeder insects over time. It can be helpful to use a “feeding station” to keep track of small prey items, such as springtails or fruit flies. A small piece of banana will attract feeder insects, making it easier to see how many the frog has consumed. Aquatic frogs – particularly African dwarf frogs – may need to be hand-fed. Use a turkey baster or long tongs to hand-feed these species. Hand feeding helps to ensure that food items manage to reach the frogs.
Food competition with aquarium fish can be an issue with these slow-feeding species. You can find more information about African dwarf frogs in our comprehensive care guide.

Frog Feeding Behaviour and Adaptations

Many frogs have evolved specific adaptations to capture insect prey. Many frogs possess a long, sticky tongue attached at the front of their mouth. Their unique tongue helps them catch fast-moving insects. Others are “vacuum feeders,” sucking aquatic prey into their gaping mouths by creating a small vacuum in the water. Frogs’ eyes are uniquely adapted to help them find – and consume – prey in a couple of peculiar ways. First, the frog uses its large eyes to locate prey and assess its size. Many frogs – such as tree frogs – possess excellent night vision to find prey in the dark. Most frogs’ eyes are highly motion-sensitive. Still, they will only attempt to eat prey of appropriate size. Frogs may perceive moving objects that are too large to be eaten as a threat. This can trigger defensive behavior, such as the secretion of toxins. Once a frog has captured a prey item, its eyes reveal another purpose. Frogs possess specialized muscles to lower their eyes to the roof of their mouth. This aids in the swallowing of large prey by helping to push items down the throat. And in case you’re wondering, yes, frogs do have teeth! They just don’t use them to eat in the same way most other animals do.

How Do Frogs Drink?

Like all living organisms, frogs need water to survive. Most frogs don’t “drink” as we do. For the most part, frogs absorb water through their permeable skin. Frogs possess a “pelvic patch” – located on their belly and thighs. The pelvic patch is specifically adapted to absorb water. Some terrestrial frogs have adapted to use their patch to absorb water from moist soils. Frogs inhabiting drier climates also possess a waxy coating on their skin to reduce water loss.

Why Do Frogs Eat Their Skin?

Frogs shed their skin – or molt – regularly as they grow.
Their skin contains a myriad of proteins and other nutrients. Frogs eat their skins to conserve these nutrients.

More Frog Husbandry Information

Frog nutrition involves much more than just the food frogs eat. Also, like all animals, frogs do poop. And they produce HUGE poops. Curious? As ectotherms, their metabolism – and ability to digest food – can vary with temperature. Frogs may also struggle to eat if they are under stress. It’s vital to have a proper understanding of correct husbandry practices before bringing home any amphibian or reptile. Read on to learn how to put this knowledge to use – for example, with a White’s tree frog care sheet – or go straight to seeing how a White’s tree frog’s habitat should be created.
The COVID-19 pandemic has caused a major economic impact all over the globe and has disrupted “normal” life. Countries across the world are battling to find solutions to this disease, including strengthening the immune system and developing vaccines to combat the pandemic. In this context, Hyperbaric Oxygen Therapy (HBOT) may hold promise for the treatment of severe cases of COVID-19. HBOT involves delivering oxygen to the body’s tissues at pressures higher than atmospheric pressure, with the hope of reducing inflammation and reviving cells, thereby improving the immune system.

The COVID-19 pandemic has thrown life out of gear in almost the entire world. Scientists and researchers across the globe are in a race against time to develop a cure for this disease, which has affected millions and resulted in the hospitalization and deaths of thousands of people, especially those above the age of 70 and those with comorbidities such as diabetes, asthma, and cardiovascular disease. A number of anti-viral medications have been tried to stop viral replication, alongside lifestyle changes such as wearing a mask and maintaining social distancing to prevent community spread. Recently, a number of different types of vaccines (1-3) have been approved for emergency use authorization by governments in various countries; these will hopefully help develop and provide long-term immunity against COVID-19. The idea behind these is to strengthen the immune system to help the body fight infections. Hyperbaric Oxygen Therapy (HBOT) can also be looked at as a potential treatment for severe cases of COVID-19, especially those that require hospitalization. HBOT involves delivering 100% oxygen to the body tissues at high pressures (higher than the atmospheric pressure). This hyperoxic condition delivers greater amounts of oxygen to the body’s cells, thereby improving their revival and survival.
HBOT was first reported almost four centuries ago; however, it has not been implemented as a definitive treatment due to a lack of scientific evidence. Recent preliminary data from clinical trials, though, suggest significant improvements in morbidity and mortality when severe cases of COVID-19 are treated with 100% oxygen at high atmospheric pressures. A small single-centre trial carried out in the USA on 20 COVID-19 patients and 60 matched controls using HBOT gave encouraging results with respect to in-patient mortality and ventilator requirement (4). Another randomised controlled trial has been planned to investigate the effects of normobaric oxygen therapy (NBOT) versus hyperbaric oxygen therapy (HBOT) for severe cases of hypoxic COVID-19 patients (5). The advantage of HBOT is that it is a non-invasive technique that is cost-effective compared to other treatment regimens. However, it must be administered by trained personnel and should not be attempted at home under normobaric conditions using the pure oxygen cylinders available in the market. While HBOT promises to be a low-risk intervention for the treatment of severe cases of COVID-19, a large number of randomised controlled clinical trials with a significant number of patients and a strong positive outcome will be required before the therapy can be approved beyond reasonable doubt.

- Prasad U., 2021. Types of COVID-19 Vaccines in Vogue: Could There be Something Amiss? Scientific European, January 2021. DOI: https://doi.org/10.29198/scieu/210101
- Prasad U., 2020. COVID-19 mRNA Vaccine: A Milestone in Science and a Game Changer in Medicine. Scientific European, December 2020. Available online at https://www.scientificeuropean.co.uk/covid-19-mrna-vaccine-a-milestone-in-science-and-a-game-changer-in-medicine/ Accessed on 24 January 2021.
- Prasad U., 2021. DNA Vaccine Against SARS-COV-2: A Brief Update. Scientific European. Posted 15 January 2021.
Available online at https://www.scientificeuropean.co.uk/dna-vaccine-against-sars-cov-2-a-brief-update/ Accessed on 24 January 2021.
- Gorenstein SA, Castellano ML, et al., 2020. Hyperbaric oxygen therapy for COVID-19 patients with respiratory distress: treated cases versus propensity-matched controls. Undersea Hyperb Med. 2020 Third-Quarter;47(3):405-413. PMID: 32931666. Available online at https://pubmed.ncbi.nlm.nih.gov/32931666/ Accessed on 24 January 2021.
- Boet S., Katznelson R., et al., 2021. Protocol for a multicentre randomized controlled trial of normobaric versus hyperbaric oxygen therapy for hypoxemic COVID-19 patients. Preprint, medRxiv. Posted July 16, 2020. DOI: https://doi.org/10.1101/2020.07.15.20154609
Norovirus Frequently Asked Questions (FAQ)

Norovirus is a highly contagious virus that causes vomiting and diarrhea. It is sometimes referred to as the “stomach flu,” but it is not related to the influenza virus. The most common symptoms of norovirus are:
- Vomiting
- Diarrhea
- Stomach pain

Typically, symptoms start to develop 12-48 hours after exposure to norovirus. Norovirus is spread by ingesting virus from the vomit or fecal matter of an infected individual. This usually happens by:
- Eating food or drinking liquids that are contaminated with norovirus.
- Touching surfaces or objects contaminated with norovirus and then putting your hands in your mouth.
- Having close contact with an infected person, e.g., caring for or sharing food/utensils with an infected person.

Norovirus quickly spreads in enclosed places like daycares, nursing homes, schools, and cruise ships. It is possible to have norovirus in your stool before you even have symptoms. Additionally, you can continue to shed the virus in your feces for two weeks or more after you feel better. There is no specific treatment for norovirus, but it is recommended to drink plenty of fluids to prevent dehydration. Most people recover from norovirus within 1 to 3 days. Symptoms of dehydration include:
- Dry mouth and throat
- Decreased urination
- Dizziness when standing

To prevent norovirus:
- Properly wash hands with soap and warm water for at least 20 seconds after visiting the bathroom, changing diapers, and always before eating or preparing food.
- Hand sanitizers should not be used as a substitute for proper hand washing.
- Cook oysters and other shellfish thoroughly at temperatures above 140°F/60°C.
- When you are sick, do not prepare food or provide care for others until at least two days after you recover.
- Clean and disinfect surfaces contaminated with vomit or diarrhea with a bleach-based cleaner.
- Quickly machine wash and dry laundry soiled with vomit or diarrhea.
California Department of Public Health: Centers for Disease Control and Prevention: For questions contact the Epidemiology Department at (562) 570-4302
Today there are still many questions about the Bubonic Plague. Experts have yet to agree on the real culprit behind the human plague outbreaks, but whether people get the disease from fleas, infected animals, or from one another, once the disease becomes airborne, it can generate a public health crisis. A French biologist by the name of Alexandre Yersin discovered the germ at the end of the 19th century. Today, scientists understand that the Bubonic Plague is spread by a bacterium called Yersinia pestis (History.com Staff, 2010). With the numerous outbreaks reported in history, could anything really have been done to prevent the Bubonic Plague?

The Bubonic Plague started in Europe around 1350. Many people believe the Plague started from infected animals like rats and small rodents. Even though this may be true, people could have prevented the severity of the outbreaks. One of the most common prevention methods was staying away from all human contact. Once away from human contact, people could wash in extremely hot water, change into clean clothes, and burn the clothes they traveled in. Keeping a minimum distance of 25 feet from any other human being would also help avoid catching the pneumonic form, which spreads through breathing and sneezing (Snell, 2014). Another way many people believe the Bubonic Plague spread is through the lack of sanitation and cleanliness. Addressing this could also have been an easy tactic to aid in preventing the Bubonic Plague. Many people during the early 1300s would wear the same clothes for many days without changing. Using plenty of mint or pennyroyal would discourage fleas, as would bathing in hot water as frequently as possible. Many people could also have built fires and stood as close to them as they could (Snell, 2014).

The other most common way the Plague was spread was by air. Once the Plague became airborne there was no stopping the disease. The Plague was extremely dangerous.
Being within two yards of each other was all that was needed to transmit the infection among humans or animals (Filip, 2014). The only thing that would have helped prevent the Plague in this situation would have been to stay home. If you did have to go outside, you could have burnt your clothes afterward. You could also have stayed where you were until six months after the most recent nearby outbreak.

In conclusion, however the Bubonic Plague started, there wouldn’t always have been a way to prevent the disease. Once the disease becomes airborne there is no stopping the infection. But there are also many prevention techniques that could have been used to help reduce the number of Plague cases. For example, burning your clothing and bathing in very hot water after being out and around people would help reduce the chances of you and your family contracting the deadly Bubonic Plague. In today’s world the Plague is still around, so have we really done everything to prevent another outbreak of the deadly Bubonic Plague?
If someone were to read this article to you, the sentence “John had to bale out of his convertible because a bail of hay was about to fall on it” would sound perfectly fine. Even if you were to read that sentence yourself, it’s possible you could miss the two spelling mistakes in it. This is because bail and bale are homophones—they sound the same, and you can usually figure out their meanings from the context in which they are used. But when it comes to writing them, people are very often confused about which spelling should be used for which meaning. In the sentence where John was about to learn that hay fever isn’t the worst thing hay can do to you, we’ve used bail and bale incorrectly. Or did we? Here are the most common uses of the words: Bale is a large bound stack of material, such as hay or leather; Bail is the security deposit that’s paid if someone who’s been temporarily released from jail pending a trial doesn’t appear in court. While bale and bail don’t share many of their meanings, they do overlap in one sense—the phrasal verb “bail out.” If you wanted to say that someone had to jump out of an airplane using a parachute, run from a dangerous situation, or help someone in need, you could write “bale out” instead of “bail out” if you were using British English. Bale and How to Use It The word bale, as we know and use it today, is what you get when you take a large quantity of material, such as hay, and bind it together: One day Samuel strained his back lifting a bale of hay, and it hurt his feelings more than his back, for he could not imagine a life in which Sam Hamilton was not privileged to lift a bale of hay. —John Steinbeck, East of Eden The verb we use for the process of binding material to create a bale is also bale: Typically, knives need sharpening every 500 to 1,000 bales, depending on the material baled, and it is important to sharpen them correctly if they are to stay the course. 
—The Irish Independent

In the past, bale was also a noun that meant great evil, woe, or sorrow. The traces of that meaning can still be found today in the adjective baleful: For the enemy is not Troll, nor is it Dwarf, but it is the baleful, the malign, the cowardly, the vessels of hatred, those who do a bad thing and call it good . . . —Terry Pratchett, Thud!

In British English, bale can be used instead of bail in the phrasal verbs “bail on” or “bail out”: Wünsche’s plane was hit and he baled out, surviving with burns and leg injuries and returning to active service the following year. —The Guardian

Bail and How to Use It

The most common use of the word bail as a noun has to do with the temporary release from custody of a person who is accused of a crime, in which money can be deposited as a guarantee that the person will appear at trial: A Kenyan court granted bail on Thursday to a British business executive charged with the murder of a Kenyan woman, his lawyer said. —Reuters

Bail can also be used as a verb in the same sense: Alleged Brit hacker Lauri Love, who is accused of compromising US government servers and faces extradition to America, has been bailed by a UK court. —The Register

While bail has many other meanings, both as a verb and a noun, the most interesting is the one that overlaps with bale in British English. It is derived from the archaic use of the noun bail for a bucket that’s used to remove water from a ship. This is where we get the verb bail, which refers to removing water from a ship with a bucket, or removing water in general. In that sense, the phrasal verb bail on is used when we want to say that we let someone down or skipped something: After one designer bailed on an assignment, I gave her a 3.
—Fortune And we also use the verb bail out for jumping out of an airplane with a parachute in a dangerous situation, or generally for removing ourselves from tough situations, or helping others in tough situations: Brazil’s federal government plans to bail out the state of Rio de Janeiro with 2.9 billion reais ($849 million) as it struggles with a fiscal crisis less than two months before the Olympic Games begin. —The Wall Street Journal
What is a shunt?

A shunt is simply a device that diverts CSF around the obstructed pathways. This stops it accumulating and returns it to the bloodstream. It consists of a system of tubes with a valve to control the rate of drainage and prevent back-flow. It is inserted surgically so that the upper end is in a ventricle of the brain and the lower end leads into the abdomen (ventriculoperitoneal or VP shunt). The device is completely enclosed so that all of it is inside the body. The fluid which is drained into the abdomen passes from there into the bloodstream. Other drainage sites such as the outer lining of the lungs or the heart can also be used, although this is rarely done now. In most cases the shunts are intended to stay in place for life, although alterations or changing the shunt (called revisions) might become necessary from time to time.

Are there any complications with this kind of treatment?

Complications are usually caused either by blockage of the system or by infection. They are only occasionally due to mechanical failure of the valve. The tube (or catheter) may become too short as the individual grows, and an operation to lengthen it might be necessary. However, more recently, most VP shunts for babies and young children are fitted with an extra coil of tubing to allow for growth. Symptoms usually develop gradually. In some cases a blockage might be seen through a gradual deterioration in overall performance. Occasionally, symptoms are quite sudden and severe and may include headaches and vomiting. Various tests can be carried out to confirm the diagnosis. Medical advice should be sought if a shunt blockage is suspected. If symptoms worsen rapidly, specialist attention (preferably at your neurosurgical unit) should be sought. Shunt infections usually occur soon after the operation to insert the shunt. Symptoms vary depending on the route of drainage.
In ventriculoperitoneal shunts the symptoms will often resemble those of blockage. This is because when the shunt becomes infected, the lower catheter is very often sealed off by swollen tissue. There may be accompanying fever and abdominal pain or discomfort. Various tests can be carried out for shunt infections, and medical advice should always be sought if an infection is suspected. Symptoms of over-drainage are similar to those of blockage. A severe headache, which is reduced when lying down, is a common symptom.

What symptoms should be looked for?

Whenever there is a possibility that hydrocephalus is causing problems, it is important to seek the correct help immediately. Possible signs of ACUTE shunt blockage or infection may include:
- Headaches and vomiting
- Photophobia (sensitivity to light) and other visual disturbances
- Fits (seizures)

Possible signs of CHRONIC shunt blockage may include:
- General malaise
- Visuo-perceptual problems
- Behavioural changes
- Decline in academic performance
- Being just 'not right' from the carer's point of view

How are shunt problems treated?

Shunt blockages that are causing illness usually require an operation to replace or adjust the affected part of the shunt. Shunt infections are usually treated by removal of the whole shunt and a course of antibiotics before insertion of a new system. Modern approaches to antibiotic therapy mean that such treatment can in most cases be expected to succeed. Repeated shunt revisions (alterations or replacements) can be associated with increased difficulties in thinking and acting. When a shunt isn’t working properly there will be a change in the pressure around the brain that can cause additional damage to the brain’s tissues. Increased damage may result in increased thinking (cognitive) and behavioural difficulties. See Hydrocephalus and the brain for more information about the possible effects.
Shunt alert card A ‘shunt alert card’ emphasises that, if the cardholder is showing signs similar to those which occur when there is shunt blockage or infection (see above), urgent assessment of shunt function should be carried out in a specialist neurosurgical unit, in order to eliminate shunt failure as a cause. SBH Scotland has Shunt Alert Cards. These should be carried at all times by people with hydrocephalus treated by a shunt (this will include some people with spina bifida). The cards are available from SBH Scotland, Helpline & information, Telephone 03455 211 300 or see the Contact us section.
NCERT solutions for class 8 Rational numbers
The Lunar Orbiter 2 spacecraft was designed primarily to photograph smooth areas of the lunar surface for selection and verification of safe landing sites for the Surveyor and Apollo missions. It was also equipped to collect selenodetic, radiation intensity, and micrometeoroid impact data. The spacecraft was placed in a cislunar trajectory and injected into an elliptical near-equatorial lunar orbit for data acquisition after 92.5 hours flight time. The initial orbit was 196 by 1,850 kilometres (122 by 1,150 mi) at an inclination of 11.8 degrees. The perilune was lowered to 49.7 kilometres (30.9 mi) five days later after 33 orbits. The spacecraft acquired photographic data from November 18 to 25, 1966, and readout occurred through December 7, 1966. A failure of the amplifier on the final day of readout, December 7, resulted in the loss of six photographs. On December 8, 1966 the inclination was altered to 17.5 degrees to provide new data on lunar gravity. A total of 609 high-resolution and 208 medium-resolution frames were returned, most of excellent quality with resolutions down to 1 metre (3 ft 3 in). These included a spectacular oblique picture of Copernicus crater, which was dubbed by the news media as one of the great pictures of the century. Accurate data were acquired from all other experiments throughout the mission. Three micrometeorite impacts were recorded. The spacecraft was used for tracking purposes until it impacted the lunar surface on command at 3.0 degrees N latitude, 119.1 degrees E longitude (selenographic coordinates) on October 11, 1967.
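As a quick plausibility check on the quoted orbit (this calculation is not from the source), Kepler's third law gives the period implied by the 196 by 1,850 km altitudes. The lunar constants used below are standard published values.

```python
import math

# Orbital period from Kepler's third law: T = 2*pi*sqrt(a^3 / GM).
GM_MOON = 4902.8   # km^3/s^2, lunar gravitational parameter (standard value)
R_MOON = 1737.4    # km, mean lunar radius (standard value)

def period_hours(peri_alt_km, apo_alt_km):
    # Semi-major axis: lunar radius plus the mean of the two altitudes.
    a = R_MOON + (peri_alt_km + apo_alt_km) / 2.0
    return 2 * math.pi * math.sqrt(a**3 / GM_MOON) / 3600.0

print(f"Initial orbit period: {period_hours(196, 1850):.2f} h")
```

For the initial 196 x 1,850 km orbit this works out to roughly three and a half hours per revolution; lowering the perilune to 49.7 km shortens the period slightly, since the semi-major axis shrinks.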
This worksheet was made to accompany the Super Simple version of the traditional cumulative song There’s A Hole In The Bottom Of The Sea in ESL / EFL classes with young learners. The worksheet focuses on the following vocabulary: Instructions for the students: cut out the dominoes and paste them in the correct order! Get some more free printables for this song and create great ESL / EFL classes with young learners! Download
Space flight produces an extreme environment with unique stressors, but little is known about how our body responds to these stresses. While there are many intractable limitations for in-flight space research, some can be overcome by utilizing gene knockout-disease model mice. Here, we report how deletion of Nrf2, a master regulator of stress defense pathways, affects the health of mice transported for a stay in the International Space Station (ISS). After 31 days in the ISS, all flight mice returned safely to Earth. Transcriptome and metabolome analyses revealed that the stresses of space travel evoked ageing-like changes of plasma metabolites and activated the Nrf2 signaling pathway. In particular, Nrf2 was found to be important for maintaining homeostasis of white adipose tissues. This study opens approaches for future space research utilizing murine gene knockout-disease models, and provides insights into mitigating space-induced stresses that limit the further exploration of space by humans.

During space flight, astronauts experience harsh environments, including microgravity and high-dose cosmic radiation, which affect the homeostasis of physiological systems in our body1. To investigate how space flight affects the health of animals, space mouse experiments have been conducted exploiting the International Space Station (ISS) and other opportunities2,3,4,5,6,7,8,9,10. However, it has been difficult to attain live return of mice from space or to even realize “space travel” of mice. For instance, the Italian Mice Drawer System housed six male mice individually, but more than half of the animals died during habitation in space8,9,10. Many of the other preceding space mouse projects did not attempt live return of the mice from space. To achieve live return of space mice, the Japan Aerospace Exploration Agency (JAXA) recently established a fully-equipped mouse experimental system for space flight11.
It comprises mouse habitat cage units (HCU), transportation cage units (TCU), and a centrifuge-equipped biological experiment facility (CBEF). These apparatuses in combination can house mice individually and realize artificial gravity experiments in space. In the first and second missions utilizing this equipment, referred to as Mouse Habitat Unit-1 and -2 (MHU-1 and MHU-2, conducted in 2016 and 2017, respectively), 12 wild-type (WT) male mice were successfully launched each time, and all the mice returned safely to Earth after approximately 1-month stays in the ISS11,12. One salient finding in these missions is that ageing phenotypes, such as reduction of bone density and muscle mass, were markedly accelerated during space flight; notably, these phenotypes could be prevented by housing in space with artificial gravity. It has been shown that various environmental stresses, including oxidative and toxic chemical (often electrophilic) stresses, activate Nrf2 and downstream signaling pathways13. Nrf2 is the master transcription factor mitigating oxidative stress. Nrf2 induction is well known to prevent various diseases, including cancer, diabetes, and inflammation13. As cosmic radiation induces oxidative stress1 and mechanical stresses can also induce Nrf2 activity14, we hypothesized that space stresses may activate the Nrf2 signaling pathway, allowing, in turn, for Nrf2 to play important roles in regulating adaptive responses to these space stresses. Although one preliminary analysis exploiting astronauts’ hair samples during space flights showed that expression of NRF2 was decreased in hair roots during space flight15, the experimental design harbored limitations. Therefore, we decided to pursue further studies that would provide direct lines of evidence on this point.
In order to test the hypothesis that Nrf2 contributes to the adaptive maintenance of animal homeostasis in response to the stresses of space flight, it was critical to establish space travel of Nrf2 gene-knockout model mice16. To do this, however, we needed to overcome various challenges, both technical and regulatory, as space travel of gene-modified disease-model mouse lines coupled with a need for live-return had never been conducted. To this end, we decided to utilize the MHU technologies and proposed in 2016 an experiment for JAXA to send Nrf2-knockout (Nrf2-KO) mice to space. In this regard, metabolite concentrations of body fluids are considered as quantitative traits that can describe a real-time snapshot of physiological state of animals17,18. However, little is understood about the response of plasma metabolites to the space stress. Due to the constraints of space experiments, there had been no attempt that focused on changes of metabolites in onboard and post-flight rodent blood samples. Recently, metabolome technologies based on mass-spectrometry have achieved the sensitivity required for analyses of very small amounts of blood19. Moreover, metabolite profiling by nuclear magnetic resonance (NMR) has also become a precise and reproducible method for biomarker discovery17,18. To elucidate roles that Nrf2 plays in regulating adaptive responses to space stresses, in this study we conducted the one month-long space travel of six Nrf2-KO mice and six WT mice. After a 31-day stay in the ISS in 2018, all of the flight mice returned safely to Earth. Using metabolomic as well as transcriptomic, histological, morphometric, and behavioral methodologies, we found that Nrf2 signaling was indeed activated by space stresses in various tissues. Space stress and Nrf2-deficiency brought about changes in gene expression and plasma metabolite profiles independently for the most part, but cooperatively in certain situations. 
In particular, Nrf2 is important for the weight gain of space mice and the maintenance of white adipose tissue homeostasis in response to the stresses of month-long space travel.

Outline of MHU-3 mouse project utilizing Nrf2-KO mice

To study the contributions of Nrf2 to the protection of mice against the stresses of travel to and from space, and to the maintenance of homeostasis during their space stay, we conducted the MHU-3 project. Male Nrf2-KO mice16 and WT mice were bred and selected for space travel. For this purpose, 60 WT and 60 Nrf2-KO mice at 8 weeks old were delivered to the Kennedy Space Center (KSC) 3 weeks prior to launch. These mice were acclimatized to individual housing cages. After acclimation, we selected 12 mice for flight to the ISS based upon their body weight, levels of food consumption and water intake, and the phenotypic absence of a hepatic shunt (see “Methods”). A SpaceX Falcon 9 rocket (mission SpX14), carrying the mice in the TCU within the Dragon capsule, was launched on April 2, 2018 (GMT) from KSC (Fig. 1a). After arrival at the ISS, the mice were relocated to the HCU by the crew. We used an HCU that accommodates one mouse per cage11,12. Before the mice were returned to Earth, they were transferred back into the TCU and loaded into the Dragon capsule, which subsequently splashed down in the Pacific Ocean offshore from Southern California on May 5. The TCU was unloaded from the capsule and transported to Long Beach Port on May 7. All mice were alive upon return and were then euthanized and dissected at a laboratory to collect tissues after a general health assessment and a series of behavioral tests. A ground control (GC) experiment that precisely simulated the space experiment was conducted at JAXA Tsukuba in Japan from September 17 to October 20, 2018. Six WT and six Nrf2-KO mice were individually housed in the same units as in the flight experiment (FL). Both the TCU and HCU were placed in an air-conditioned room with a 12-h light/dark cycle.
Fan-generated airflow (0.2 m/s) inside the HCU maintained the same conditions as in the FL setting. During onboard habitation, the health condition of each mouse was monitored daily by veterinarians on the ground via downlinked videos (Yumoto et al., in preparation). Representative images of all 12 mice in the HCU and their onboard movies are shown in Fig. 1b and Supplementary Movie 1, respectively. During the flight mission, temperature and humidity were well controlled, and concentrations of carbon dioxide and ammonia were maintained at low levels11. The absorbed dose rate of radiation was 0.29 mGy/day during the flight. A new device was developed to collect peripheral blood from the tail in space (Fig. 1c). We obtained approximately 40 μL of blood from each mouse with minimal hemolysis (Fig. 1d). Blood was collected from tail veins in the same way as in space both before launch and after return to Earth, as well as in the GC experiment. Initial inspection of the mice returned to Earth showed loss of balance and enfeebled muscle power (Supplementary Movie 2). The Nrf2-KO flight mouse No. 4 (FL04) developed severe intestinal hemorrhage during the return flight for an unknown reason. Data from FL04 were deemed outliers for many measurements; therefore, data for FL04 were omitted from most of the analyses.

Space stresses activate the Nrf2 signaling pathway

To examine whether space stresses activated Nrf2 and downstream pathways, we conducted a wide-ranging RNA-sequencing analysis. For this purpose, we dissected the space mice soon after their return to Earth and prepared RNA samples from many tissues, including temporal bone (TpB), mandible bone (MdB), spleen (Spl), liver (Liv), epididymal white adipose tissue (eWAT), inter-scapular brown adipose tissue (iBAT), thymus (Thy), kidney (Kid), and brain cerebrum (Cbr) (Fig. 2a). We selected 26 genes encoding enzymes for detoxication and antioxidative responses that are well-known Nrf2 target genes20.
The expression of these typical Nrf2 target genes was upregulated widely in various tissues of the FL_WT mice compared with GC_WT mouse tissues (Fig. 2b). In contrast, expression of these 26 selected genes was markedly lower in tissues of the GC and FL Nrf2-KO mice (Fig. 2b), indicating that the upregulation of these genes was attributable to the functional presence of Nrf2 signaling. By comparing FL_WT with GC_WT, we examined all the gene expression changes induced by space flight in the WT mice. The results of gene set enrichment analysis (GSEA) suggest that a number of pathways are affected by space flight (Supplementary Table 1). The heatmap analyses revealed that space flight-induced changes of gene expression in various tissues could be classified into an Nrf2-dependent group and an Nrf2-independent group (Supplementary Fig. 1a–e). Of note, the Nrf2-dependent space-induced genes include typical Nrf2 target genes. Closer inspection of the extent of gene expression reduction revealed that the response was weaker in FL_KO mice than in GC_KO mice. We surmise that activation of other stress response pathway(s) by the strong space stresses might alter the magnitude of repression (or loss of constitutive expression) elicited by the Nrf2 gene knockout in the space flight mouse tissues. However, it should be noted that the Nrf2 pathway contribution is the strongest among the regulatory pathways for the expression of these genes, as reductions of expression were substantial in both GC and FL Nrf2-KO mice compared with both GC and FL WT mice. We extended these heatmaps to direct measurements and observed significant induction of transcriptional expression of Nqo1 and Hmox1 in thymus from FL_WT mice (Fig. 2c). In very good agreement with the heatmaps, in some tissues of FL_KO mice the magnitudes of reduced expression were dampened, but the space flight-induced increase of Nrf2-target gene transcripts was still largely abolished in the knockout mice.
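The heatmap partition just described — space-responsive genes split by whether their flight induction survives loss of Nrf2 — can be sketched as a simple rule over group-wise expression values. The gene names, numbers, and thresholds below are purely illustrative and are not the study's data or pipeline.

```python
# Hypothetical mean log2 expression per group for a few genes.
# Values are illustrative only; they are not data from the study.
expr = {
    #  gene     GC_WT  FL_WT  GC_KO  FL_KO
    "Nqo1":   ( 8.0,  10.0,   4.0,   4.5),
    "Hmox1":  ( 7.0,   9.5,   5.0,   5.4),
    "GeneX":  ( 6.0,   8.0,   6.1,   8.2),  # induced regardless of genotype
}

def classify(gc_wt, fl_wt, gc_ko, fl_ko, thresh=1.0):
    """Partition space-responsive genes by Nrf2 dependence.

    A gene is treated as 'space-responsive' if flight induces it in WT mice;
    the induction is 'Nrf2-dependent' if it is largely lost in KO mice.
    Thresholds are arbitrary placeholders.
    """
    flight_effect_wt = fl_wt - gc_wt   # log2 fold-change, FL vs. GC, in WT
    flight_effect_ko = fl_ko - gc_ko   # the same contrast in Nrf2-KO
    if flight_effect_wt < thresh:
        return "not space-responsive"
    if flight_effect_ko < 0.5 * flight_effect_wt:
        return "Nrf2-dependent"
    return "Nrf2-independent"

labels = {gene: classify(*values) for gene, values in expr.items()}
```

With these made-up values, the Nrf2 target genes fall into the dependent group and the genotype-insensitive gene into the independent group, mirroring the two-group structure of the heatmaps.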
These results unequivocally demonstrate that Nrf2 activity is indeed induced during space flight and enhances the expression of cytoprotective genes, strongly arguing that our body exploits the Nrf2 signaling pathway to counteract space stresses (Fig. 2d).

Space flight and Nrf2-deficiency influenced gene expression

To further clarify the contributions of Nrf2 to gene expression patterns of the space mouse, we performed principal component analyses (PCA) of the transcriptomic results, utilizing all identified transcripts. The PCA revealed that space travel and Nrf2-deficiency both influenced gene expression profiles, resulting in four distinct patterns. In thymus and eWAT, PC1 separated FL vs. GC, while PC2 separated Nrf2-KO vs. WT (Fig. 3a, b). By contrast, in liver and spleen, PC1 separated Nrf2-KO vs. WT, while PC2 separated FL vs. GC (Fig. 3c, d). These results delineate the first and second patterns, in which both space stresses and Nrf2-deficiency strongly and independently elicited specific changes in gene-expression profiles. A third pattern was found in iBAT, in which PC1 separated FL vs. GC, while PC2 did not separate the groups at all (Fig. 3e), indicating that space travel influenced gene expression in this tissue while Nrf2-deficiency did not. We found a fourth pattern in Cbr. Somewhat to our surprise, there was no clear separation of gene expression patterns by PCA in Cbr (Fig. 3f), indicating that neither space stresses nor Nrf2-deficiency affected gene expression in this tissue when analyzed en bloc. We surmise that this pattern reflects the cell heterogeneity of Cbr. Collectively, these four PCA patterns demonstrated that, in most cases, space stresses influenced gene expression differently from Nrf2-deficiency.
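The group separations described above come from standard PCA on a samples-by-transcripts matrix. A minimal sketch via singular value decomposition is shown below; the toy expression matrix is fabricated for illustration (two samples per condition, three "genes"), and numpy is assumed to be available.

```python
import numpy as np

# Toy samples-by-genes matrix: rows 0-1 mimic one condition (e.g., FL),
# rows 2-3 another (e.g., GC). Values are fabricated for illustration.
X = np.array([
    [10.0,  0.0, 1.0],
    [ 9.5,  0.5, 1.2],
    [ 0.5,  9.8, 0.9],
    [ 0.0, 10.0, 1.1],
])

Xc = X - X.mean(axis=0)                       # center each gene
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                                # sample coordinates in PC space
pc1 = scores[:, 0]                            # first principal component

# With this matrix, PC1 separates the two groups (the sign of a PC
# is arbitrary, so we check relative signs rather than absolute values).
same_group = pc1[0] * pc1[1] > 0
diff_group = pc1[0] * pc1[2] < 0
```

In a real analysis the matrix would have thousands of transcript columns, but the mechanics — centering, SVD, projecting samples onto the leading components — are the same.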
No acceleration of bone/muscle degeneration in Nrf2-KO mice in space

Recent MHU-1 and MHU-2 reports revealed that ageing phenotypes, such as reduction of bone density and muscle mass, were accelerated during month-long space flights11,21. Our results nicely confirmed these observations. Micro-computed-tomography images clearly showed decreased bone density in FL_WT and FL_KO mouse bones compared with GC_WT and GC_KO mouse bones (Supplementary Fig. 2a, b). Against our expectation, Nrf2-gene knockout did not accelerate the decrease of bone mineral density (BMD) during space travel. Muscle mass of soleus (Supplementary Fig. 2c) and gastrocnemius muscles (Supplementary Fig. 2d) also showed substantial decreases during space travel, and again loss of Nrf2 neither accelerated nor decelerated these declines. These results indicate that Nrf2-deficiency did not substantially influence the progression of these ageing phenotypes in space. We envisage that the space stress-originated phenotypes of bone and muscle in the mice are very strong, whereas the baseline expression levels and impact of Nrf2 are low, so that any possible Nrf2 contribution was not discernible in this context.

Space flight induces ageing-like changes

The observation that there was no apparent acceleration of bone and muscle changes during space flight of Nrf2-KO mice compared to FL_WT mice led us to examine changes in plasma metabolites associated with ageing. Herein, we addressed whether ageing-related changes of metabolites were accelerated by the deficiency of Nrf2-regulated cytoprotective systems. To this end, we collected blood plasma from the mouse inferior vena cava soon after the return of the mice and conducted NMR-based metabolome analyses (Fig. 4a). Whereas PCA of the metabolome results did not show separations as strong as those of the transcriptome, closer inspection indicated that PC1 separated FL_KO vs. FL_WT (Fig.
4b), suggesting that both space flight and Nrf2-deficiency contributed to changes in plasma metabolites. We then searched for plasma metabolites that changed only with space flight, or with both space flight and Nrf2-deficiency. Of all 40 metabolites examined by the NMR-based metabolome analyses (Fig. 4c–e, Supplementary Figs. 3 and 4), we identified three metabolites of interest: glycerol, glycine, and succinate (Fig. 4c–e). Plasma glycerol levels were increased by flight in both WT and Nrf2-KO mice compared to the respective GC mice, with little influence of Nrf2-deficiency (Fig. 4c). Plasma levels of glycine (Fig. 4d) and succinate (Fig. 4e) in FL_WT mice were much lower than in GC_WT mice. Glycine and succinate levels in GC_KO mice were also much lower than those in GC_WT mice, and the levels did not decrease further with space travel. We additionally found that plasma levels of glutamine, carnitine, and formate were changed significantly by space flight (Supplementary Fig. 3). These results imply that Nrf2-deficiency itself evoked changes similar to those provoked by space stresses. An intriguing hypothesis here was that these changes in metabolites recapitulate the changes that accompany human ageing. To address this hypothesis, we exploited human metabolome data accumulated in the Tohoku Medical Megabank (ToMMo) project22. We examined these metabolites in plasma from young (20–40 years old) and old (60–80 years old) participants in the population-based prospective cohort studies of ToMMo. We found that the plasma glycerol level was increased in the 60–80 age group (Fig. 4f), in very good accord with the space mouse results. Plasma levels of glycine (Fig. 4g) and succinate (Fig. 4h) were lower in the 60–80 age group than in the 20–40 age group. These changes again showed very good agreement with those in the flight mice.
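Group-wise comparisons like those above (e.g., FL_WT vs. GC_WT metabolite levels) are typically screened with a two-sample test. As a minimal sketch, a Welch t-statistic on made-up concentrations is shown below; the numbers are not the study's data, and the paper's actual statistical pipeline is not reproduced here.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    se = sqrt(variance(a) / na + variance(b) / nb)
    return (mean(a) - mean(b)) / se

# Hypothetical plasma glycine levels (arbitrary units), illustration only.
gc_wt = [102.0, 98.0, 105.0, 100.0]   # ground control group
fl_wt = [80.0, 76.0, 83.0, 79.0]      # flight group

t = welch_t(fl_wt, gc_wt)   # strongly negative: FL levels lower than GC
```

A significance call would additionally require the Welch–Satterthwaite degrees of freedom and a t-distribution lookup, which are omitted from this sketch.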
Importantly, the latter two metabolites were also decreased in Nrf2-KO mice, supporting the notion that Nrf2 is important in decelerating the ageing of animals. In contrast, while plasma levels of glutamine, carnitine, and formate were changed significantly by space flight (Supplementary Fig. 3), in the human ageing analysis these levels either changed only moderately (glutamine and carnitine) or changed in the opposite direction (Supplementary Fig. 5). These results thus suggest that space stress induces ageing-like changes within a subset of plasma metabolites, and that Nrf2-deficiency also provokes similar ageing-like changes of plasma metabolites (Fig. 4i). In addition, we examined whether space flight induced ageing-like changes in gene expression. GSEA using the gene set of aged mice (Enrichr) revealed that space-induced changes of gene expression were enriched in ageing changes in liver, TpB, BAT, and WAT (Supplementary Fig. 6). These results demonstrate that space stress induced ageing-like changes in metabolites and gene expression.

Lack of body-weight gain in Nrf2-KO mice during space flight

Upon return to Earth, we examined the overall health status of FL and GC mice, conducted behavioral examinations, and, after necropsy, performed histological examinations of multiple tissues. One of the most obvious phenotypes we found in these examinations was the lack of body-weight gain specifically in Nrf2-KO mice during the space flight (Fig. 5a). Since we launched 11-week-old mice, the mice were still gaining weight; in fact, FL_WT mice as well as both GC_WT and GC_KO mice gained body weight to almost the same extent. We then examined weights of organs and tissues of these mice, including eWAT, iBAT, liver, spleen, lung, thymus, heart, testis, and kidney (Fig. 5b–d and Supplementary Fig. 7). We found that eWAT weight, as a percentage of body weight, was significantly increased by space flight, but this increase was largely canceled in the FL_KO mice (Fig. 5b).
Importantly, this decrease of eWAT weight did not occur in GC_KO mice. Similarly, weights of iBAT were significantly increased in FL_WT mice; however, in contrast to the situation in eWAT, the increase was not canceled in the FL_KO mice (Fig. 5c). In stark contrast to these two adipose tissues, liver weight as a proportion of body weight did not change much with space flight, Nrf2-deficiency, or both (Fig. 5d). We designed an apparatus to monitor food intake and water consumption of mice while in space (Fig. 5e). Importantly, there were no significant differences in food intake or water consumption between WT and Nrf2-KO mice, either in flight or on the ground (Fig. 5f, g). These results demonstrate that, during space flight, Nrf2-deficiency results in reduced body weight without changing food intake or water consumption. Available lines of evidence suggest that this may be linked to the diminished weight gain of abdominal adipose tissues. Therefore, to obtain further insight into the effects of space flight on lipid and glucose status in the animal body, we analyzed the small amounts of blood plasma collected from the mouse tail during flight (18 days after launch, L + 18) and 2 days after return to Earth (R + 2). To the best of our knowledge, this is the first analysis of blood obtained from mice in space. We conducted mass-spectrometry metabolome analyses of the plasma (Fig. 5h). While there was no difference in total cholesterol ester (CE) levels between WT and Nrf2-KO mice during flight (L + 18) (Fig. 5i), there was an elevation of total CE levels in FL_WT mice, but not Nrf2-KO mice, after return to Earth (R + 2), where the KO levels mirrored those of GC mice (Fig. 5j). While we do not have a solid explanation for the increase of total CE level only in the WT mice after return to Earth (R + 2), it is plausible that the decrease of eWAT in FL_KO mice observed after return to Earth (R + 2) (Fig.
5b) might be associated with the unchanged total CE level in the plasma of these mice. Similarly, total triglyceride (TG) levels did not differ much between FL_WT and GC_WT mice, or between FL_KO and GC_KO mice, during flight (L + 18) (Fig. 5k). However, in contrast to the total CE level, the total TG level was significantly decreased in FL_WT mice upon return to Earth (R + 2) compared with GC_WT mice at R + 2 (Fig. 5l). This tendency was similar in GC and FL Nrf2-KO mice after return to Earth (R + 2) (Fig. 5l). Considering the elevated glycerol level in plasma (Fig. 4c), lipolysis from TG to glycerol was likely accelerated in flight mice of both genotypes. In addition, our NMR metabolome analysis revealed no significant change in plasma glucose levels (Supplementary Fig. 5). These results thus demonstrate that space travel affects lipid metabolism in mice, and that the changes in certain aspects of lipid metabolism were either reversed or exacerbated by loss of Nrf2 function.

Nrf2 is critical for maintenance of WAT homeostasis in space

We then extended our mouse analyses to the histology of abdominal WAT. We observed that the lipid droplet size of eWAT in GC Nrf2-KO mice was significantly larger than that of GC_WT mice (Fig. 6a), despite the weights of eWAT being comparable between these two GC groups (Fig. 5b). Surprisingly, space flight gave rise to a marked increase of lipid droplet size in WT mice and, to some extent, in Nrf2-KO mice (Fig. 6a). We measured the sizes of lipid droplets of all mice and found that these changes during space flight were quite reproducible (Fig. 6b). We also measured the distribution of lipid droplet sizes and confirmed that space flight provoked increases of lipid droplet size in both WT and Nrf2-KO mice (Fig. 6c, solid lines) compared with the two groups of GC mice (dotted lines).
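The distribution shift just described (solid vs. dotted lines in Fig. 6c) is the kind of comparison that can be summarized by group medians over measured droplet diameters. The diameters below are fabricated purely for illustration and are not morphometric data from the study.

```python
from statistics import median

# Hypothetical lipid droplet diameters (um) per group, illustration only.
gc_diam = [48, 52, 55, 50, 47, 53]   # ground control
fl_diam = [68, 75, 71, 80, 66, 73]   # flight

# A positive shift indicates droplet enlargement in the flight group.
shift_um = median(fl_diam) - median(gc_diam)
```

A full distributional comparison would also examine the spread and tails (e.g., via empirical distribution functions), which the medians alone do not capture.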
We calculated the adipose cell number of eWAT utilizing these data, and found that the number of adipose cells in Nrf2-KO mouse eWAT was significantly lower than that in WT mouse eWAT (Fig. 6d), indicating that the eWAT weight of Nrf2-KO mice on the ground is maintained by a compensatory increase in droplet size. Intriguingly, the lipid droplet size of FL_WT mouse eWAT became much larger following space flight than that of the GC mice. However, the lipid droplet size in eWAT of FL_KO mice was almost comparable with that of FL_WT mice (Fig. 6b), indicating that the eWAT of the Nrf2-KO mouse did not become larger during space flight (Fig. 6e). Taken together, we propose that Nrf2 plays important roles in the maintenance of abdominal adipose tissue homeostasis, but that space stresses markedly affect this homeostasis regardless of the presence of Nrf2. In an associated observation, the lipid droplet size and thickness of subcutaneous fat were found to be increased by space flight, although there was no significant difference between WT and Nrf2-KO mice (Supplementary Fig. 8). Furthermore, the number of hepatic Oil Red O-positive lipid droplets was increased by space flight (Supplementary Fig. 9). Tissue weights and lipid droplet sizes of iBAT also increased through space flight in both WT and Nrf2-KO mice (Fig. 5c and Supplementary Fig. 10). These wide-ranging observations demonstrate that adipose tissues throughout the body became larger in response to space stresses, supporting the contention that Nrf2 is critical for the maintenance of homeostasis of abdominal WAT.

Space flight and Nrf2-KO induce metabolic impairment in eWAT

To elucidate how space flight and Nrf2-deficiency induce perturbations in eWAT, we examined the transcriptome. Analyses of eWAT-derived RNA revealed that mRNAs coding for genes involved in the respiratory chain (Fig. 7a) and fatty acid β-oxidation (Fig.
7b) were markedly reduced in eWAT from the flight mice of both genotypes compared with expression levels in eWAT from GC_WT mice. In addition, changes similar to, but milder than, those in the flight mice were observed in the GC_KO mice (Fig. 7a, b), indicating that space stresses induce metabolic impairment in eWAT that is qualitatively similar to, but much more severe than, that observed in Nrf2-deficient mice on Earth. These results suggest that these reductions of mitochondrial activity might result in the enlargement of adipocytes in FL_WT and GC_KO mice. Further transcriptome analysis revealed that expression levels of diabetes-related chemokine genes23 were increased in eWAT from flight mice of both genotypes compared to GC mice (Fig. 7c). Consistent with this observation, expression levels of marker genes for macrophages (Cd68, Lgals3, and Adgre1) and angiogenesis (Kdr and Pecam1) were increased in eWAT from flight mice of both genotypes compared to GC mice (Fig. 7d). Since angiogenesis sustains inflammation by delivering oxygen and nutrients to inflammatory cells, and inflammation in turn can cause insulin resistance24, these results suggest that space stresses might induce adipose tissue growth by means of inflammation and angiogenesis. In order to gain insight into how Nrf2-deficiency on Earth leads to the decrease in adipocyte number, we searched for changes in gene expression in eWAT from GC_KO mice. We found that expression levels of many PPARγ-target genes25 were downregulated in eWAT from GC_KO mice compared to GC_WT mice (Fig. 7e). Since PPARγ is important for adipocyte differentiation25, this downregulation of PPARγ-target genes in GC_KO mice might affect adipocyte differentiation, provoke a reduction of adipocyte numbers and, in turn, give rise to compensatory enlargement of the adipocytes, thereby sustaining the adipose mass of eWAT on Earth (Fig. 7f).
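The trade-off described above — fewer adipocytes compensated by larger droplets at constant pad mass — follows from a simple mass balance: treating each adipocyte as a lipid-filled sphere, cell count ≈ pad mass / (droplet volume × lipid density). The sketch below uses hypothetical inputs and an assumed lipid density; it does not reproduce the study's actual morphometric method.

```python
from math import pi

def estimated_cell_count(pad_mass_mg, mean_diameter_um, lipid_density_g_cm3=0.9):
    """Approximate adipocyte count from fat-pad mass and mean droplet size.

    Assumes spherical, lipid-filled cells of uniform size -- a deliberate
    simplification of real adipose morphometry.
    """
    radius_cm = (mean_diameter_um / 2) * 1e-4               # um -> cm
    volume_cm3 = (4 / 3) * pi * radius_cm ** 3              # sphere volume
    cell_mass_mg = volume_cm3 * lipid_density_g_cm3 * 1000  # g -> mg
    return pad_mass_mg / cell_mass_mg

# Hypothetical: the same pad mass with larger cells implies fewer cells.
n_small = estimated_cell_count(400, 60)   # e.g., smaller droplets
n_large = estimated_cell_count(400, 80)   # e.g., enlarged droplets
```

Because count scales with the inverse cube of diameter at fixed mass, a modest increase in droplet size implies a large drop in inferred cell number, which is why droplet enlargement can sustain pad weight despite fewer adipocytes.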
The transcriptome analysis also revealed that expression of a number of PPARγ-target genes was changed in the eWAT of flight mice, with similar profiles irrespective of Nrf2 genotype (Fig. 7e). The profile of flight mice showed remarkable differences from that of Nrf2-KO mice on Earth. These results thus indicate that space stresses elicit much stronger influences than the Nrf2 knockout does on PPARγ-target gene expression, along with respiratory chain, fatty acid β-oxidation, chemokine, and macrophage-/angiogenesis-related gene expression. We surmise that these changes in gene expression profiles are, at least in part, responsible for the marked enlargement of the adipose tissues during space travel.

Discussion

Nrf2 is the key regulator of the adaptive response against various environmental and endogenous stresses13. Since oxidative stress activates the Nrf2 signaling pathway through stress-sensing mediated by the Nrf2 chaperone Keap1 (Kelch-like ECH-associated protein 1)26, cosmic radiation and/or microgravity during space travel may well activate Nrf2 through the generation of oxidative stress. Similarly, there is a possibility that mechanical stress caused by microgravity contributes to Nrf2 activation during space travel. However, there has been limited direct examination as to whether these space stresses activate the Nrf2 signaling pathway and to what extent activation of the pathway may be protective against these stresses. Therefore, we conducted a space travel experiment utilizing Nrf2-knockout mice. To the best of our knowledge, this is the first space travel of gene-knockout disease-model mice in which the mice were returned safely to Earth. Comprehensive RNA-sequencing analyses of various organs/tissues from the returned mice revealed that space stresses indeed induced the Nrf2 signaling pathway in many tissues.
Our metabolome analysis further revealed that the stresses of space travel induced ageing-related changes in some plasma metabolites. Another salient finding from the space travel of Nrf2-knockout mice is that Nrf2 is important for maintaining the homeostasis of white adipose tissues. Collectively, these results unequivocally demonstrate the importance of the Nrf2 signaling pathway in responses to the stresses of space travel. Many technological advances contributed to the success of this space mission, including the mouse transport and habitat systems. Of direct application to biomedical inquiries, we developed a new blood collection procedure that is minimally invasive and easy for astronauts to use. This apparatus is important because astronaut training time for specialized operations such as blood collection is limited. Capitalizing on the new device, we successfully collected small amounts (40 μL) of blood and plasma from mouse tails during the flight. Subsequent mass spectrometry-based metabolome analysis successfully detected plasma metabolites in these small volumes of plasma. This newly established procedure of blood collection from mice during space travel, coupled with highly sensitive metabolome analysis, provides a powerful approach to elucidate physiological and pathological changes of intermediary metabolism in gene-modified or other model mice during space travel. Indeed, the systematic metabolome analysis in this study revealed that space flight induced increases in glycerol and decreases in TG, implying enhanced lipolysis in mice during the flight. It has been shown that lipolysis is stimulated by stress-induced catecholamines, including adrenaline and noradrenaline27, and an increase in circulating levels of catecholamines has been observed commonly in astronauts28. Therefore, we surmise that the enhanced lipolysis is a consequence of elevated catecholamine hormones.
Alternatively, the enhanced lipolysis in mice might be a compensatory response to the increase in adipose mass occurring during space travel. In this regard, it is interesting to note that the increase of plasma glycerol shows a significant association with human ageing, as observed in the ToMMo cohort study. Verification of this relationship in prospective cohort studies now becomes quite important and intriguing, since the association between increased glycerol and ageing had not been recognized heretofore. We also observed that decreased plasma glycine levels showed a marked association with ageing. In contrast to the situation for glycerol, the association of plasma glycine levels with ageing has been reported previously: dietary glycine supplementation extended the lifespan of rats29, and addition of glycine to culture media restored the phenotype of aged cells back to that of young cells30. Taken together, our results and these reports indicate that space travel induces in mice an ageing phenotype associated with these plasma metabolites. Our finding that Nrf2 disruption impairs body-weight gain and WAT homeostasis in mice during space flight raises several important considerations. First, it is known that there is a regulatory single nucleotide polymorphism (rSNP) in the Nrf2 gene that downregulates the expression of Nrf2 and increases the risk of a few diseases in humans31 and mice32. Thus, a number of questions arise related to this rSNP. For instance, does the presence of this rSNP in the NRF2 gene influence the health of future longer-term space travelers? Do mice homozygous for the minor allele of the rSNP carry an exacerbated risk of body-weight reduction and/or perturbation of WAT homeostasis with long-term space flight or with ageing? Second, small-molecule inducers of Nrf2 signaling have been developed or are under development33,34. These NRF2 inducers can be taken as drugs or through foods and dietary supplements.
We surmise that these NRF2 inducers may serve to mitigate some of the stresses associated with space travel. We have obtained substantive amounts of informative data in this MHU-3 space-flight study. However, we could analyze only limited sets of data in a timely manner. In fact, we feel that many changes were induced in a space flight-specific and/or Nrf2-knockout mouse-specific manner that have not been recognized through our first-pass analyses. We therefore present our series of histological analyses as Supplementary Data (Supplementary Figs. 11–14). Further metabolomic analyses, blood analyses, kidney function analyses, behavioral analyses, and muscle analyses will be published separately. Collectively, this study has pioneered experiments utilizing gene-modified model mice defective in adaptive responses to the stresses associated with space travel. We believe that continuation of this series of space mouse studies will provide insightful information useful for overcoming possible mission-limiting, space flight-derived stresses evoked by human space exploration.

Methods

The MHU-3 project

The HCU rev.1 is an onboard habitation cage that accommodates one mouse per cage11,12. The HCU is equipped with a food bar, a watering system (two redundant water nozzles and a water balloon acting as a power-free pressure source), an odor filter, two fans (for redundancy) for air ventilation, waste-collecting equipment, and an LED/IR video camera with a wiper inside the cage to keep the observation window clean. The health of each mouse was determined based on the conditions of the eyes, ears, fur, and tail as observed in the transmitted videos. Paper sheets were mounted on the cage wall to quickly eliminate liquid, such as urine, from the cage. A photocatalytic thermal spray was applied to the sheets for deodorant and antibacterial effects. Air ventilation inside the cage was maintained by airflow (<0.2 m/s) generated by fans on the HCU.
Differences in the volume of air ventilation and airflow rates between HCUs under GC and microgravity conditions were negligible because the fans regulating ventilation were maintained at the same speed under both gravity conditions. The day/night cycle used 12-h intervals. Food and water were replenished once a week. Temperature, humidity, carbon dioxide, and ammonia were monitored precisely and recorded in logs. The TCU was used to transport mice aboard the SpaceX Dragon capsule during the launch and return phases, and was placed in a powered locker sized for a single ISS cargo transfer bag. The TCU contains 12 cylindrical cages for housing mice individually. The TCU is equipped with a cylindrical food bar, a watering system (a water nozzle and two water balloons), an odor filter, two fans (for redundancy), waste-collecting equipment, and LED lights for the day/night cycle. Paper sheets mounted in the waste collection area were treated with photocatalytic thermal spray for deodorant and antibacterial effects, similar to the HCU. A temperature/humidity logger was attached to the TCU air inlet to monitor the environment during launch and reentry operations. Nrf2-KO (Nfe2l2tm1Ymk)16 and WT male mice in the C57BL/6J background were bred at Charles River Laboratories Japan for MHU-3. All animal experiments were approved by the Institutional Animal Care and Use Committees of JAXA (protocol numbers 017-001 and 017-014), NASA (protocol number FLT-17-112), and Explora BioLabs (EB15-010C), and were conducted according to the related guidelines and applicable laws of Japan and the United States of America.

Pre-launch acclimation activities and animal selection

Three weeks prior to launch, 60 WT and 60 Nrf2-KO mice (8 weeks old) were delivered from Charles River Laboratories Japan to KSC.
These mice were acclimatized to the environment in individual housing cages (Small Mouse Isolator 10027, Lab Products) in an air-conditioned room (temperature: 23 ± 3 °C; humidity: 40–65%) with a 12:12-h light/dark cycle at the SSPF Science Annex at KSC. Three acclimation phases were set: Phase I, body-weight recovery; Phase II, water nozzle acclimation; and Phase III, flight food acclimation. After approximately three weeks of acclimation, the mice were ready for transport to the ISS and had reached the age of 12 weeks. Briefly, in Phase I, mice were fed CRF-1 and given autoclaved tap water ad libitum using ball-type water nozzles; the bedding material consisted of paper chips (ALPHA-DRI). In Phase II, the water nozzles were changed to flight nozzles. Finally, in Phase III, the food was changed to flight food. Fecal samples and body swabs were collected and sent to Charles River Laboratories in the USA for PCR-based SPF testing. As some Nrf2-KO mice harbor an intrahepatic shunt35, Nrf2-KO mice were challenged with 100 mg/kg ketamine and 5 mg/kg xylazine at the age of 9 weeks, and recovery times were monitored to eliminate mice harboring an intrahepatic shunt; Nrf2-KO mice with long sleeping durations were judged to be without shunts. Blood was taken from the tails of all 120 mice. During the acclimation phase, body weight and food/water consumption were measured and recorded. We selected 12 mice each for flight and backup. The selection of flight candidate mice was based on body weight, food consumption, water intake, and absence of a hepatic shunt. We also checked the health of each flight candidate mouse by evaluating the condition of its eyes, ears, teeth, fur, and tail. Five subgroups of flight candidates were necessary to support possible launch attempts in case of unexpected postponements.

Onboard operations by ISS crew

One day prior to launch, 12 mice were loaded into the TCU and made ready for launch.
Mice were transported to the ISS by SpX14. After the Dragon vehicle of SpX14 berthed with the ISS, mice were transferred to the HCU by a crew member. Mouse-husbandry tasks included exchanging food cartridges, supplying water, collecting waste, and replacing odor filters. Each cartridge contained block food (approximately 35 g), sufficient for one week of habitation. The cartridges had windows and a scale on their sides, so the remaining amount of food could be estimated via video during the weekly cartridge exchange operations, without returning the cartridges to the ground. After 31 days onboard the ISS, mice were transferred to the TCU for the return to Earth. Return phase and animal dissection After unberthing from the ISS, the Dragon vehicle splashed down in the Pacific Ocean off the coast of California. After a ship picked up the cargo, the returned TCU was transported to a port in Long Beach. JAXA received the TCU from NASA and transported it to Explora BioLabs in San Diego using an environmentally controlled van. After the health and body weights of the mice were assessed, open field, light/dark transition, and Y-maze tests were performed. Blood was collected from the tail after the behavioral tests. Isoflurane-anesthetized mice were then euthanized by exsanguination and dissected for the collection of tissue samples. A GC experiment that simulated the space experiment was conducted at JAXA Tsukuba in Japan from September 17 to October 20, 2018. Six WT and six Nrf2-KO mice without hepatic shunts were individually housed in the same way as for the flight experiment. Both the TCU and HCU were placed in an air-conditioned room (average temperature: 22.9 °C; average humidity: 48.4%) with a 12:12-h light/dark cycle. Fan-generated airflow (0.2 m/s) inside the HCU maintained the same conditions as in the flight experiment. The mice were fed CRF-1. The bedding material consisted of paper chips (ALPHA-DRI) in the acclimation cages, but was not used in the HCU.
Feeder cartridges and water bottles were replaced once a week, and cages in the HCU were not replenished. After the health and body weights of the mice were assessed, open field, light/dark transition, and Y-maze tests were performed. Blood sampling from the tail was conducted as for the flight experiment. All mice were anesthetized by isoflurane inhalation, euthanized under anesthesia, and then dissected to collect tissue samples. Blood collection procedure During the flight experiment, blood samples were collected from the distal end of the tail with tail clippers (KAI, PQ3357). The tail clippers were mounted on a guard plate adjusted to 1 mm from the tip of the clippers, and all tail clippers used in this experiment were inspected prior to launch to confirm each tool's ability to amputate less than 1 mm of the tail. Each mouse was transferred from the HCU to a restrainer (Sanplatec, 99966-31), which was pre-warmed by disposable heat pads (Kiribai Chemical; New Hand Warmer Mini) to increase the obtainable blood volume. The inner surface temperature of the restrainer was approximately 45 °C. After disinfection, the tail was clipped with a tail clipper and milked to collect a blood sample into capillary tubes (Drummond Scientific; 8-000-7520-H/5). To promote hemostasis, the tip of the mouse's tail was compressed for one minute with a hemostat (Ethicon, 15726). The blood samples collected in capillary tubes were centrifuged and immediately frozen in a −80 °C freezer without being snap-frozen. Prior to blood collection, a veterinarian checked the health status of all mice via videos and confirmed their adaptation to the space environment. Before and after the flight experiment, blood samples were taken by nicking the tail. Mice were transferred to the restrainers and then nicked using disposable scalpels (FEATHER Safety Razor, No. 10). After the nicking or the on-orbit procedure, blood was collected in capillary tubes and centrifuged.
The procedures for both nicking and clipping were in accordance with NIH guidelines. Micro-computed tomography (microCT) analysis The right femurs of mice were fixed in 70% ethanol and the distal regions were analyzed as described11. MicroCT scanning was performed using a ScanXmate-A100S Scanner (Comscantechno). Three-dimensional microstructural image data were reconstructed and BMD was calculated using TRI/3D-BON software (RATOC System Engineering) in accordance with the guidelines. Total RNA was isolated from temporal bone, mandibular bone, spleen, liver, white adipose tissue, brown adipose tissue, cerebrum, kidney, and thymus. RNA integrity was assessed using an Agilent 2200 TapeStation. For each tissue, 1.0 μg of total RNA was used for further steps. Total RNA samples were subjected to isolation of poly(A)-tailed RNA and library construction using the SureSelect Strand Specific RNA Sample Prep Kit (Agilent Technologies), except that total RNA from thymus was subjected to ribosomal RNA depletion using the Ribo-Zero rRNA Removal Kit (Illumina) followed by library construction using the same steps. The libraries were sequenced using a HiSeq2500 (Illumina) for 76 cycles of single reads, and more than 17 million reads were generated per sample. The raw reads were mapped to the mouse mm10 genome using STAR (version 2.6.1)36. Transcripts per million (TPM) values were obtained to measure gene expression using RSEM (version 1.3.1)37. The TPM values were normalized in the Subio Platform software (Subio). Genes with mean TPM less than five among all groups were excluded. The PCA was performed using the prcomp function in R software (version 3.5.1; www.r-project.org). The R-based heatmap.2 function in the gplots package was used for generating the heatmap. The differential gene expression analysis was performed using DESeq2 (version 1.22.2)38. GSEA39 (www.broad.mit.edu/gsea) was used to assess space flight-induced changes of gene expression.
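The gene-filtering rule described above (genes with mean TPM below five across all groups are excluded before PCA) can be sketched generically. This Python snippet is an illustration only, not the authors' actual R/Subio pipeline, and the gene names and TPM values in it are hypothetical.

```python
# Sketch of the filtering rule described in the text: keep only genes
# whose mean TPM across all samples is at least 5.
# All TPM values below are hypothetical, for illustration only.

def filter_genes(tpm_by_gene, threshold=5.0):
    """Keep genes whose mean TPM across all samples is >= threshold."""
    kept = {}
    for gene, tpms in tpm_by_gene.items():
        if sum(tpms) / len(tpms) >= threshold:
            kept[gene] = tpms
    return kept

# Hypothetical TPM values for three genes across four samples.
tpm = {
    "Nqo1":  [120.0, 95.0, 110.0, 130.0],  # mean 113.75: kept
    "Hmox1": [6.0, 4.5, 7.2, 5.1],         # mean 5.7: kept
    "GeneX": [1.0, 0.5, 2.0, 0.8],         # mean 1.075: excluded
}
filtered = filter_genes(tpm)
print(sorted(filtered))  # ['Hmox1', 'Nqo1']
```

Only the surviving gene-by-sample matrix would then be passed to PCA (prcomp in the authors' R workflow).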
The GSEA was performed using previously published ageing signatures in Enrichr40 (GSE20425 for liver, GDS3028 for temporal bone, GSE25325 for BAT, and GSE25905 for WAT) or the 50 hallmark gene sets of MSigDB v7.1 (www.gsea-msigdb.org/gsea/msigdb/index.jsp). Tissues were fixed in Mildform 10N (Wako Pure Chemical) and processed into paraffin-embedded tissue sections. The sections were stained with hematoxylin and eosin. Lipid droplet size was measured using a BZ-X800 (Keyence), and adipose cell number was calculated from the tissue weights and droplet sizes41,42. To visualize hepatic lipid content, livers were fixed with 4% paraformaldehyde and embedded in OCT (Tissue Tek). The frozen sections were stained with Oil Red O (Sigma Aldrich) and counterstained with hematoxylin. Plasma analysis by NMR spectroscopy Plasma metabolites of the blood samples obtained from the inferior vena cava were analyzed using NMR spectroscopy17,18. Plasma metabolites were first extracted with a standard methanol extraction procedure, using 50 µL of plasma per sample. The supernatant was transferred to a new tube and evaporated. Each dried sample was suspended in a 200-µL solution of 100-mM sodium phosphate buffer (pH 7.4) in 100% D2O containing 200-µM d6-DSS. NMR experiments were performed at 298 K on a Bruker Avance 600-MHz spectrometer equipped with a CryoProbe and a SampleJet sample changer. Standard 1D NOESY and CPMG (Carr-Purcell-Meiboom-Gill) spectra were obtained for each plasma sample. All data were processed using the Chenomx NMR Suite 8.3 processor module (Chenomx). Metabolites were identified and quantified using the target profiling approach implemented in the Chenomx Profiler module. The concentrations of metabolites were analyzed by PCA on SIMCA 13.0.0 (Umetrics). Plasma analysis by mass spectrometry Plasma from the tail blood samples was collected from the capillary tubes and stored at −80 °C until analysis.
Plasma (4 μL per analysis) was prepared using the protocol for the Absolute IDQ® p400 HR Kit (Kit400), which includes a detailed standard operating procedure (SOP) with documentation for sample preparation, instrument setup, system suitability testing, and data analysis. The Kit400 quantifies 408 metabolites, and includes calibration standards, internal standards, and quality control samples. The ultra-high-performance liquid chromatography (UHPLC) system consisted of an online degasser, auto sampler, dual pump, and column oven (UltiMateTM 3000 RSLC system, Thermo Fisher Scientific), coupled to a quadrupole Fourier transform mass spectrometer (FTMS, Q Exactive Orbitrap system). The operating conditions of the UHPLC-FTMS system and the details of the data analysis followed the protocol of the Kit400 analysis43. Statistics and reproducibility Data points represent biological replicates. Comparisons between groups were conducted using one-way ANOVA with the Tukey–Kramer test or the Wilcoxon–Mann–Whitney test. Data were considered statistically significant at p < 0.05. Further information on research design is available in the Nature Research Reporting Summary linked to this article. The data discussed in this publication have been deposited in NCBI’s Gene Expression Omnibus44 and are accessible through GEO Series accession number GSE152382 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE152382). All relevant data are available from the corresponding authors upon reasonable request. Supplementary Data 1 contains the gene set enrichment analysis (GSEA) of the gene expression changes observed during space flight in wild-type mice. Source data for graphs and charts presented in the main figures are provided in Supplementary Data 2. Demontis, G. C. et al. Human pathophysiological adaptations to the space environment. Front. Physiol. 8, 547 (2017). Pecaut, M. J. et al.
Genetic models in applied physiology: selected contribution: effects of spaceflight on immunity in the C57BL/6 mouse. I. Immune population distributions. J. Appl. Physiol. 94, 2085–2094 (2003). Baqai, F. P. et al. Effects of spaceflight on innate immune function and antioxidant gene expression. J. Appl. Physiol. 106, 1935–1942 (2009). Behnke, B. J. et al. Effects of spaceflight and ground recovery on mesenteric artery and vein constrictor properties in mice. FASEB J. 27, 399–409 (2013). Gridley, D. S. et al. Changes in mouse thymus and spleen after return from the STS-135 mission in space. PLoS ONE 8, e75097 (2013). Andreev-Andrievskiy, A. et al. Mice in Bion-M 1 space mission: training and selection. PLoS ONE 9, e104830 (2014). Ulanova, A. et al. Isoform composition and gene expression of thick and thin filament proteins in striated muscles of mice after 30-day space flight. Biomed. Res. Int. 2015, 104735 (2015). Cancedda, R. et al. The Mice Drawer System (MDS) experiment and the space endurance record-breaking mice. PLoS ONE 7, e32243 (2012). Tavella, S. et al. Bone turnover in wild type and pleiotrophin-transgenic mice housed for three months in the International Space Station (ISS). PLoS ONE 7, e33179 (2012). Sandonà, D. et al. Adaptation of mouse skeletal muscle to long-term microgravity in the MDS mission. PLoS ONE 7, e33232 (2012). Shiba, D. et al. Development of new experimental platform ‘MARS’-Multiple Artificial-gravity Research System-to elucidate the impacts of micro/partial gravity on mice. Sci. Rep. 7, 10837 (2017). Matsuda, C. et al. Dietary intervention of mice using an improved Multiple Artificial-gravity Research System (MARS) under artificial 1 g. NPJ Microgravity 5, 16 (2019). Yamamoto, M., Kensler, T. W. & Motohashi, H. The KEAP1-NRF2 System: a thiol-based sensor-effector apparatus for maintaining redox homeostasis. Physiol. Rev. 98, 1169–1203 (2018). Hosoya, T. et al.
Differential responses of the Nrf2-Keap1 system to laminar and oscillatory shear stresses in endothelial cells. J. Biol. Chem. 280, 27244–27250 (2005). Indo, H. P. et al. Changes in mitochondrial homeostasis and redox status in astronauts following long stays in space. Sci. Rep. 6, 39015 (2016). Itoh, K. et al. An Nrf2/small Maf heterodimer mediates the induction of phase II detoxifying enzyme genes through antioxidant response elements. Biochem. Biophys. Res. Commun. 236, 313–322 (1997). Koshiba, S. et al. The structural origin of metabolic quantitative diversity. Sci. Rep. 6, 31463 (2016). Koshiba, S. et al. Omics research project on prospective cohort studies from the Tohoku Medical Megabank Project. Genes Cells 23, 406–417 (2018). Saigusa, D. et al. Establishment of protocols for global metabolomics by LC-MS for biomarker discovery. PLoS ONE 11, e0160555 (2016). Kim, J. W. et al. Characterizing genomic alterations in cancer by complementary functional associations. Nat. Biotechnol. 34, 539–546 (2016). Tominari, T. et al. Hypergravity and microgravity exhibited reversal effects on the bone and muscle mass in mice. Sci. Rep. 9, 6614 (2019). Tadaka, S. et al. jMorp: Japanese multi omics reference panel. Nucleic Acids Res. 46, D551–D557 (2018). Huber, J. D. Diabetes, cognitive function, and the blood-brain barrier. Curr. Pharm. Des. 14, 1594–1600 (2008). Tahergorabi, Z. & Khazaei, M. The relationship between inflammatory markers, angiogenesis, and obesity. ARYA Atheroscler. 9, 247–253 (2013). Lehrke, M. & Lazar, M. A. The many faces of PPARγ. Cell 123, 993–999 (2005). Suzuki, T. et al. Molecular mechanism of cellular oxidative stress sensing by Keap1. Cell Rep. 28, 746–758.e744 (2019). Lönnqvist, F., Nyberg, B., Wahrenberg, H. & Arner, P. Catecholamine-induced lipolysis in adipose tissue of the elderly. J. Clin. Investig. 85, 1614–1621 (1990). Weil-Malherbe, H., Smith, E. R. & Bowles, G. R.
Excretion of catecholamines and catecholamine metabolites in Project Mercury pilots. J. Appl. Physiol. 24, 146–151 (1968). Miller, R. A. et al. Glycine supplementation extends lifespan of male and female mice. Aging Cell 18, e12953 (2019). Hashizume, O. et al. Epigenetic regulation of the nuclear-coded GCAT and SHMT2 genes confers human age-associated mitochondrial respiration defects. Sci. Rep. 5, 10434 (2015). Suzuki, T. et al. Regulatory nexus of synthesis and degradation deciphers cellular Nrf2 expression levels. Mol. Cell Biol. 33, 2402–2412 (2013). Cho, H. Y. et al. Linkage analysis of susceptibility to hyperoxia. Nrf2 is a candidate gene. Am. J. Respir. Cell Mol. Biol. 26, 42–51 (2002). Cuadrado, A. et al. Therapeutic targeting of the NRF2 and KEAP1 partnership in chronic diseases. Nat. Rev. Drug Discov. 18, 295–317 (2019). Yagishita, Y., Fahey, J. W., Dinkova-Kostova, A. T. & Kensler, T. W. Broccoli or sulforaphane: is it the source or dose that matters? Molecules https://doi.org/10.3390/molecules24193593 (2019). Skoko, J. J. et al. Loss of Nrf2 in mice evokes a congenital intrahepatic shunt that alters hepatic oxygen and protein expression gradients and toxicity. Toxicol. Sci. 141, 112–119 (2014). Dobin, A. et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics 29, 15–21 (2013). Li, B. & Dewey, C. N. RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome. BMC Bioinform. 12, 323 (2011). Love, M. I., Huber, W. & Anders, S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 15, 550 (2014). Subramanian, A. et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl Acad. Sci. USA 102, 15545–15550 (2005). Kuleshov, M. V. et al. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Res. 44, W90–W97 (2016). Parlee, S. D., Lentz, S. I., Mori, H. & MacDougald, O. A. 
Quantifying size and number of adipocytes in adipose tissue. Methods Enzymol. 537, 93–122 (2014). Wang, J. et al. Phytol increases adipocyte number and glucose tolerance through activation of PI3K/Akt signaling pathway in mice fed high-fat and high-fructose diet. Biochem. Biophys. Res. Commun. 489, 432–438 (2017). Thompson, J. W. et al. International Ring Trial of a high resolution targeted metabolomics and lipidomics platform for serum and plasma analysis. Anal. Chem. 91, 14407–14416 (2019). Edgar, R., Domrachev, M. & Lash, A. E. Gene Expression Omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res. 30, 207–210 (2002). We would like to thank Norishige Kanai (astronaut) for the onboard operation, and Toshiaki Kokubo and Noriko Kajiwara (JAXA visiting veterinarians) for monitoring mouse health. We also thank Naoko Ota-Murakami, Fumika Yamaguchi, Masumi Umehara, and the members of the mouse health check team, for performing daily onboard health checks, Ramona Bober, Autumn L. Cdebaca, Rebecca A. Smith for animal care and ground experiment supports, Hirochika Murase, Hiroaki Kodama, Yusuke Hagiwara, and members of hardware development team for MHU hardware preparation and operations, Kohei Hirakawa, Teruhiro Senkoji, Haruna Tanii, Motoki Tada, Yuki Watanabe, Kayoko Lino, Hiromi Sano, Yui Nakata, Hiromi Suzuki-Hashizume, Eiji Ohta, Osamu Funatsu, Hideaki Hotta, Hatsumi Ishida, Mariko Shimizu, and members of JEM operational team for the research coordination, Takahashi Ueda and Tomohiro Tamari for animal preparations, Hong Xin and Grishma Acharya for landing site operational supports, and Sayaka Umemura, Laura Lewis, Charles E. Hopper, Jennifer J. Scott Williams, Robert Kuczajda for international coordination. This work was selected as a space rodent research study for JAXA’s feasibility experiments using ISS/Kibo announced in 2015, and also supported in part by MEXT/JSPS KAKENHI (19H05649 to M.Y. 
and 17KK0183, 18H04963, 19K07340, to T. Suzuki), the Takeda Science Foundation (M.Y. and T. Suzuki), and the Smart Aging Research Center, Tohoku University (M.Y.). This work was also supported in part by the grants JP19km0105001, JP19km0105002 and SHARE. The authors declare no competing interests. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. About this article Cite this article Suzuki, T., Uruno, A., Yumoto, A. et al. Nrf2 contributes to the weight gain of mice during space travel. Commun Biol 3, 496 (2020). https://doi.org/10.1038/s42003-020-01227-2
4 Reading maps 4.1 Understanding the relationship between data and space A map on its own is meaningless. Try showing one to a person from a culture which does not include mapmaking as we know it. A map is neither a picture nor a story – unless we know how to ‘read’ it. You have already developed ‘reading skills’ which will help in reading maps. For example, noting the title and the sources are common to all the uses of evidence in the social sciences. Critical awareness is vital in recognising how mapmaking involves selection, distortion and generalisation just as with text, photographs and statistics. None of these issues is necessarily a problem so long as you know about them. Maps, then, are an abstraction involving science and technology, imagination and skill and, above all, decisions. In order to bring a map alive, therefore, we need to know about the codes and conventions that lie behind its production and, also, to understand both the obvious messages and the underlying, often hidden, meanings. Fundamental to producing a map is the relationship between data (the information you wish to display and convey) and space, both the geographic space being represented and the space available on the sheet of paper. Data is represented by points (to show location) and lines (to show connections), by symbols (to convey features) and by colour and/or shading (to represent areas). These are the codes which are part of the language of maps. The other side of that basic relationship between data and space is the classic problem of representing a spheroidal earth (or, globe) on a plane surface. The resulting ‘projections’ are often controversial. There are also conventions which are observed in mapmaking. Look at Map 6 which highlights the main conventions. Some of the features on the diagram will be self-evident and, as noted above, you already know about viewing titles, sources, dates and authors critically, so I shall not discuss those things any further. 
Other conventions, like ‘projections’, may need some explanation. Our diagram does not mention the use of colour and our illustrations in this text are in black, white and shades of grey. Colour can play a major role in conveying messages on a map; for example, the use of red to show the British Empire or of black for a country to which ‘we’ are hostile. How colour or shading is used is important in reading maps because it may carry an otherwise hidden message. In the following discussion of the conventions in Map 6, I shall follow another reading convention and work from the top down. ‘Orient’ is associated with the rising of the sun in the east and, in earlier times, many maps were drawn with east at the top. The convention of ‘north at the top’ is a northern hemisphere notion (compare with Map 6) which possibly derives from the importance of the North (Pole) Star to the (North) Western maritime nations. The importance of the orientation of a map lies in the need to be able to ‘read’ directions (e.g. if north is at the top of the map, then we know that something in the lower left-hand corner is to the south-west of something in the centre). Orientation can be manipulated to make places seem more or less important, usually by putting them at the top – as with North. On a more personal level, it can be important to know the orientation of a map of a street or estate of houses in order to know at what time of day the sun will shine into the living room! Map 7, for example, shows Australia at the top. The grid or graticule The grid is the ‘net’ of lines on a map which are used to establish a location. A rectangular grid can be used at, say, national level where the curvature of the earth's surface does not create significant distortion of distances when transferred to the flat page. The British and Irish Ordnance Surveys have their own grids which are printed on maps of England, Scotland, Wales and Ireland.
They assume north to be at the top of each vertical line but, if this were really the case, the lines would converge at the Poles. A graticule represents the meridians running north–south and the parallels of latitude running east–west. Whatever is placed at the centre of a map becomes the focus of attention, and the centre can be manipulated according to the purpose of the map. On world maps produced in Western Europe, it is common to find Europe on or near the centre. This was reinforced by the international agreement which fixed the previously movable Prime Meridian (or 0 degrees of longitude) to run through Greenwich in southern England. This probably reflected Britain's political power, though it is sometimes said to be a tribute to John Harrison, the English clockmaker who invented the accurate means of establishing longitude. However, maps produced in the USA, for example, often have the Americas in the centre. Maps should have a key (sometimes called a legend) which describes and explains the symbols used to depict features on the map. We may be accustomed to the symbols used by our Ordnance Survey, but maps are made by many different people and organisations for many different purposes and it is important to check how lines, dots, icons, shadings and colour are used. Ideally, each map will carry its own key. Scale is literally the distance on the map related to distance on the ground. For example, one centimetre on the map may represent one kilometre. Choice of scale is important because, as the area covered becomes larger, the scale is said to be smaller, and the smaller the scale the more detail is lost. For example, individual buildings become subsumed in a monocoloured patch representing a town – or even just a dot; twists and turns in rivers are represented as straight lines. A boundary measured on a small-scale map could seem to be much shorter than when measured at a larger scale because all the irregularities would be missing from the calculation.
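The arithmetic behind a verbal scale like 'one centimetre represents one kilometre' can be made explicit as a 'representative fraction' of 1:100,000. The following is a minimal sketch; the function name is my own invention for illustration, not part of the text.

```python
# Convert a verbal map scale ("M cm on the map represents K km on the
# ground") into a representative fraction 1:N, the conventional way
# of stating scale on a map.

def representative_fraction(map_cm, ground_km):
    """Return N such that the scale is 1:N (both lengths in cm)."""
    ground_cm = ground_km * 100_000  # 1 km = 100,000 cm
    return int(ground_cm / map_cm)

print(representative_fraction(1, 1))  # 100000, i.e. a 1:100,000 map
print(representative_fraction(2, 1))  # 50000, i.e. 1:50,000
```

Note the inverse wording the text describes: 1:50,000 is a *larger* scale than 1:100,000, even though the number N is smaller, which is why larger-scale maps show more detail of a smaller area.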
If you can find a map of part of the British or Irish coast and compare it with a small map of the whole of the British Isles, you will observe a similar phenomenon. If you hear a reference to, say, ‘300 miles of coastline’ you might pause to wonder how it was measured. Scale can be deliberately manipulated to ensure the inclusion or exclusion of certain things, with a technical excuse available to mask a deliberate ploy. As noted above, this is about representing the globe on a flat sheet of paper. The classic image is of peeling an orange by slitting the skin down the lines of the segments and then trying to press the pieces of orange peel on to a flat surface. The result is some slight bulges and a lot of gaps! The point is that, no matter how we represent ‘the world’ on a flat surface, we lose some major property of scale or shape. Mercator's projection successfully represented direction, was quite good for shape with smaller outlines, but was bad for area and distance. Of the many attempts to improve on Mercator, one of the most commonly used is the Peters projection, which elongates landmasses but is better than Mercator in terms of area. You can see the world represented using the two different projections in Map 8 (Mercator) and Map 9 (Peters). Because the Peters map is ‘equal area’, no place looks too big or too small, but Africa is made to look twice as long as it is wide. In fact, Africa is as wide as it is long. Compromise between the size distortion of Mercator and the shape distortion of Peters is reached by making ‘cuts’, which goes back to our notion of flattening out the orange peel. Cuts can be made in the oceans but, for this kind of projection to be used to represent the whole world, it is necessary to make a cut into the Asian continent southwards from the north coast.
Clearly, this is a problem if you are interested in that part of the world! Because no projection can truly represent the globe (the world) we need to be aware of the strengths and weaknesses which the various projections have and evaluate the resulting maps accordingly. If you have an atlas to look at, you will find several different projections used in representing different parts of the world. Your atlas might also have a section which explains something about the different methods of projection. Basically, we cannot have a map of the world which is accurate for size and shape, distance and direction. We have to decide what we want to show and select a projection that best meets our purposes. These purposes may be sincere or misleading – and there is great capacity within projections to distort and mislead. However, at a more local level within, say, Britain or Ireland, we can largely ignore the problem of the curvature of the earth's surface and use a simple grid to construct a map.
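Mercator's size distortion, mentioned above, can be quantified: the projection maps latitude φ to y = R·ln(tan(π/4 + φ/2)), and the local scale factor grows as sec φ away from the Equator. The sketch below illustrates this arithmetic; it is my own illustration, not drawn from the text.

```python
import math

# Mercator projection: y-coordinate (in units of Earth radius R) for a
# given latitude, and the local scale factor sec(phi), which shows how
# much the projection stretches features east-west at that latitude.

def mercator_y(lat_deg):
    """y/R for latitude lat_deg under the Mercator projection."""
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))

def scale_factor(lat_deg):
    """Local stretching factor sec(phi) at latitude lat_deg."""
    return 1.0 / math.cos(math.radians(lat_deg))

for lat in (0, 30, 60):
    print(f"lat {lat:2d}: y/R = {mercator_y(lat):.3f}, "
          f"scale = {scale_factor(lat):.2f}")
```

At 60 degrees latitude the scale factor is 2, so features appear twice their true east–west size; this is why high-latitude landmasses look so large on Mercator world maps.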
© Copyright by Roshaunda D. Cade ALL RIGHTS RESERVED Historian Jim Cullen defines minstrels as white men in blackface who pretend to be black and “pride themselves on the verisimilitude with which they re-create African-American life and customs. But these are not simple acts of imitation or homage – the routines they enact are wildly, even grotesquely, exaggerated” (57). Cullen’s definitions place males at the site of racial crossover (begging questions of gender identities and roles in minstrel performance), set up the paradox of blacks imitating whites imitating blacks, and suggest that audiences determine the authenticity of racial performances. Similarly, literary historian Laura Browder places blackface minstrelsy as a caricature of black life that “flattens the topography of the racial landscape to black and white” (49). In other words, blackface minstrelsy allows for only white and black to exist in the US, while ignoring other races. She further characterizes minstrelsy as a white performance of “inauthenticity,” which creates a hyper-whiteness. This whiteness, which emerges from the performers’ ability temporarily to “embody” cultural norms associated with blackness, depends “on denying blacks the same options for self-transformation” (49, 50). Browder’s stance on minstrelsy allows whites to put on blackness in order to further highlight their whiteness, and in many ways, their American-ness. Denying blacks this ability to change race as easily as clothing also denies them access to the body politic of the US. Sociologist Howard Sacks broadens Browder’s arguments by maintaining that minstrelsy is “a vehicle for reinstating tradition” (44). Minstrelsy develops into an inauthentic performance that establishes and upholds authentic whiteness and American-ness. 
Performance theater historians Harry Elam and David Krasner argue that minstrelsy, as a uniquely American performance, not only affords whites the opportunity to employ perceived characteristics of blackness, but also allows whites to master the baser aspects of human nature that are theoretically inscribed on black bodies. Acting as black people allows minstrel performers to disregard white cultural norms and taboos. Elam and Krasner add that the male-domination of minstrelsy dictates that ideas of black women become inscribed on white male bodies (23). This theory denies white women opportunities to overcome the baseness of humanity by forbidding them the ability temporarily to put on blackness, and this male-domination further removes black women from participating in mainstream American cultures. I include all of the above nuances in my definition of minstrelsy. I add that minstrelsy is a performance, based on racial stereotypes, that includes many of the traditions of the form but does not necessarily entail blacking up. [Eric Lott describes minstrel shows as formulaic and including comic dialogue and cross-dressed wench performances (5). F. James Davis describes minstrelsy as the “singing-dancing-comedy characterization portraying black males as childish, irresponsible, inefficient, lazy, ridiculous in speech, pleasure-seeking and happy” (51)] I use blackface or blackface minstrelsy to designate a minstrel performance that makes use of a blackening agent. The definitions of minstrelsy deal largely with acting and with audiences intentionally accepting the pretend as real. Definitions of passing also deal with subterfuge and audience interpretations but somehow carry malevolent connotations in America’s racially charged cultures. Literary critic Elaine K. Ginsberg asserts that passing is about the “boundaries established between identity categories and about the individual and cultural anxieties induced by boundary crossing. 
Passing is about specularity: the visible and the invisible, the seen and the unseen” (2) and is often motivated by the quest to attain the perceived awards of another race (3). Literary theorist Juda Bennett adds that the metaphoric term passing most likely stems from the pass given to slaves to facilitate their travel. “The ‘pass’ is a slip of paper that allows for free movement, but white skin is itself a ‘pass’ that allowed for some light-skinned slaves to escape their masters” (36). Both of these definitions concentrate on gaining access to socially restricted spaces and have social and individual ramifications. Theorist Randall Kennedy defines passing as a deception that allows a person to “adopt certain roles or identities from which he would be barred by prevailing social standards in the absence of his misleading conduct” (157). Kennedy cites the classic American racial passer as the “white Negro” and distinguishes the passer from a “mistaken” person who, “having been told that he is white, thinks of himself as white” (157). Kennedy’s argument points to the intentional duplicity involved in racial passing, a treachery that involves circumventing and undercutting American societal norms regarding race. Noting the artifice rather than the duplicity in racial crossover, film scholar Linda Williams distinguishes between posing and passing, arguing that “whites who pose as black intentionally exhibit all the artifice of their performance – exaggerated gestures, blackface make-up – [but] blacks who pass as white suppress the more obvious artifice of performance. Passing is performance whose success depends on not overacting” (176). Posing, then, reflects the theatrical roots of minstrelsy, but passing speaks to the subtleties involved in both passing and minstrelsy. Kenneth Price, a literary historian, adds that passing involves enacting a “narrative or an identity dependent on fabrication” (90). 
Unlike the previously cited critics, cultural theorist Gayle Wald underscores neither the artifice nor the duplicity of passing. Instead, Wald defines passing as a practice that stems from subjects’ desires to control their racial definition, “rather than be subject to the definitions of white supremacy” (6). Wald further characterizes passing as a transgression of the social boundary of race that seeks not “racial transcendence, but rather struggles for control over racial representation in a context of the radical unreliability of embodied appearances” (6). Wald emphasizes the unreliability of bodily markers of race, and challenges people to control their self-definition. This obfuscation of the line between black and white, along with the call to self-empowerment, confronts the predominance of white supremacy. These critics inform my definition of passing, which I view as a social transgression, rooted in duplicity, that brings performers closer to self-definition as they chip away at notions of white supremacy that govern US society. Regarding white supremacy, philosopher George Yancy maintains that America operates under a mainline white center that has the “power to create an elaborate social subterfuge, leading both whites and nonwhites to believe that the representations in terms of which they live their lives and understand the world and themselves are naturally given, unchangeable ways of being” (11). In other words, this white center induces people to believe that representations associated with race are innate rather than constructed, thus making race appear immutable and blackness seem forever destined to marginality far from the white core of mainline American cultures. David Roediger urges people to fight against the centrality of white supremacy (240). His pleas acknowledge, however, that white supremacy prompts people to cling to white privilege if they can claim it and aspire to it if they cannot. 
Following the vein of white supremacy, philosopher Lewis Gordon defines white normativity as the "schema in which whites serve as the exemplification of the human being and the presumption of what it means to be human" (181). Gordon postulates that for those outside the sphere of normativity, the only way to attain normalcy is to become what they are not – blacks can only become "normal" by self-identifying as white, and conversely, whites can only be "abnormal" by appropriating characteristics ascribed to blackness. In other words, racial passing serves as a vehicle for blacks to achieve normativity and participate in mainstream American cultures, and minstrelsy functions as a channel through which whites can abandon the norms of mainline US cultures. The subterfuges that characterize passing lie both in the act itself and in the white center that creates the perceived need to pass. This view equates US citizenship with normativity and white supremacy, and it is this bulwark that the authors under consideration pummel. I study passing and minstrelsy in the selected texts because the action in all of them occurs shortly after the 1850 Fugitive Slave Law comes to prominence, and because they all consider how slave women use motherhood and race in order to gain access to and enjoy the benefits of U.S. citizenship. While Stowe published Uncle Tom's Cabin more than 40 years before the publication of Twain's Pudd'nhead Wilson, the two texts treat the same time period, as do Clotel and The Bondwoman's Narrative. Stowe, Brown, and Crafts' works come on the heels of the 1850 Fugitive Slave Law, which required the extradition of escaped slaves from their homes in free states back into the throes of slavery. The law demonstrated that U.S. blacks were property – not human and certainly not U.S. citizens. The Fugitive Slave Law also fueled the abolitionist cause, as did Stowe's novel.
By setting his novel in the early 1850s, Twain deliberately makes his work contemporary with the other authors and with the issues of their time. All of these novels treat antebellum issues of race and citizenship. Additionally, all four texts treat both blackface minstrelsy and passing, not one or the other, in close proximity in the text. In all of the texts, passing and minstrelsy, at some point, take place in the same female body.
While building blocks lack the bells and whistles of many other toys, kids love them! And that's a good thing, because there are many benefits of building block toys for kids. Blocks offer children of all ages benefits in physical and cognitive development, language, social skills, and science and math skills. Here's a look at some of the specific benefits children gain when playing with blocks.

Fine Motor Skills and Hand-Eye Coordination
The very act of picking up blocks and stacking them one on top of another helps young children develop their fine motor skills and hand-eye coordination. As young kids begin stacking blocks, they also begin to learn about gravity and balance. They learn that if they don't stack their blocks just right, the blocks will fall and they will have to start stacking them all over again. When kids are given blocks of different colors, they can learn about color recognition. Building blocks are a wonderful way for your child to begin to recognize various shades of the same colors. Children may also begin to sort into subcategories. For example, they may group all of the green blocks together and then sort the green blocks into smaller piles according to shade or size. Blocks are a great way for kids to learn about and practice their sorting skills. They may sort by the color, size, or shape of the block. They also learn how to recognize and pick out blocks with special features such as letters or numbers. And sorting blocks that have letters or numbers on them can help kids develop their letter and number recognition skills. Blocks also give children the opportunity to practice counting skills. Kids have tons of fun counting the number of blocks they can stack on top of each other, how many blocks it takes to build a wall, or how many pieces are needed to make a roof for a building they have built.
Building with blocks helps children with their spatial awareness, as they quickly become proficient at judging the amount of space a certain block needs. In order for a child to build using blocks, they need to develop concentration and attention to detail. Whether making a tower of single blocks or actually building a bridge or building, a child needs to concentrate on what they are doing in order to be successful. Parents are often surprised to find that a child who can't sit through a 10-minute cartoon can spend 30 minutes or more trying to get one small part of a building project completed without any prompting. Building with blocks also helps kids develop problem-solving skills. Not everything they attempt to build works the way they think it should, so in order to complete a project successfully, they need to employ problem-solving skills (often through trial and error) to work through a glitch. When allowed the freedom to solve building problems on their own, it's really amazing to see the innovative solutions that kids come up with!

Encourages Following Directions
As children get older and want to build more complicated structures, they learn to follow either pictorial or written directions. This is an important skill that a child will be able to use throughout their life. Being able to look at a picture and replicate it is an important skill and one that can be quite difficult to master. Blocks provide a way for kids to practice and master this skill while having fun and not feeling a lot of pressure.

Development of Language Skills
Building with blocks helps children develop their language skills, as they may want to ask for help or explain what they're building and how they're building it. Kids actually enjoy explaining to anyone who is genuinely interested how they were able to create something, and they often will show others how with step-by-step directions. For many children, building is more fun if they are building with a parent, sibling, or friend.
By working with others, kids are developing important social skills such as sharing and teamwork. They're also learning to listen to the ideas of others, express their own ideas, and perhaps even compromise.

Applying Their Imagination to Real World Problems
Playing with blocks allows children to apply their growing imagination to real world problems. As they work to make their fantasy building project come to life, they're learning to think outside the box so they can build whatever they dream up in their minds.

Going Beyond Blocks to Use Other Materials
Children quickly learn to combine other objects and materials with their blocks for various building projects. For example, a child may use blocks to build a fort and then add a scrap of cloth tied to a small dowel for a flag. Or they may build a castle and then use blue clay or Play-Doh to make the water in a moat. Although building blocks are one of the simplest early childhood toys, they provide kids with many important benefits. With so much focus on technology in the world today, it's nice that a simple classic toy like building blocks can be so good for our kids!
Several scientific studies show that respiratory problems of urban citizens correlate strongly with the presence or absence of certain species of lichens in cities. The latest Global Burden of Disease (GBD) study shows that at least 15,000 deaths a year are attributable to air pollution. During the workshop we will:
- Show how to identify the most common lichen species in the city of Barcelona.
- Explain the relationship between the presence of each species and the air pollution level.
- Show how to use a citizen science platform to monitor the presence and abundance of lichen species in your city.
- Conduct a field trip to practice with the platform Natusfera before planning your future activities with students.
The event takes place at Centre Cultural Casa Orlandai, Barcelona. Natusfera is a citizen science platform created to record, organise and share naturalistic observations. Natusfera aims to foster the participation of nature enthusiasts and the knowledge and exploration of the natural world. The workshop is organised and facilitated by CREAF (Ecological and Forestry Application Research Centre), an ECSA member based in Barcelona (Spain). "Training the trainers" workshops aim to build capacity in biodiversity monitoring to facilitate youth participation in citizen science activities.
Your teeth are covered with plaque, a sticky film of bacteria. After you have a meal, snack or beverage that contains sugars or starches, the bacteria release acids derived from dietary sugars that attack tooth enamel. Repeated attacks can cause the enamel to break down and may eventually result in cavities.

Caries is the disease process that leads to cavities, the "holes" in the mouth that we are all too familiar with. This is an infectious process caused by a few different families of bacteria that infect the mouth very early in life, generally thought to be by age 2. Once the mouth of a child is infected with these caries-causing bacteria, every time he or she is exposed to a sugary substance, or anything that contains carbohydrates for that matter, the caries-causing bacteria begin to produce and deposit acid onto the tooth surfaces. When this continues over days, weeks and months on areas that are not regularly cleaned with a toothbrush and properly exposed to fluoride (2 minutes of foamed toothpaste is best during brushing), the hard enamel of the teeth breaks down, resulting in a dental "cavity" or a "soft hole" in the tooth. As the bacteria continue to grow and produce more acid, more and more of the tooth structure and minerals are lost until this process gets close to the dental pulp, or the nerve of the tooth. The nerve of the tooth usually responds with pain signals, as it is now exposed to hot, cold, pressure or sweets that previously could not easily reach and stimulate it.

We get cavities in teeth from a substance in the mouth called plaque, which sticks to teeth and contains acid-producing bacteria that can cause tooth decay and a cavity or hole in the tooth. When we eat sugary or starchy foods, acids are produced that attack and break down the enamel, or protective surface, of our teeth, which, over time, can also lead to decay of the inner part of a tooth.
An induction motor works on the principle of electromagnetic induction. When a three phase supply is given to the three phase stator winding, a rotating magnetic field of constant magnitude is produced, as discussed earlier. The speed of this rotating magnetic field is the synchronous speed,

Ns = 120f / P r.p.m.

where f = supply frequency and P = number of poles for which the stator winding is wound. This rotating field produces the effect of rotating poles around the rotor. Let the direction of rotation of this rotating magnetic field be clockwise, as shown in Fig. 1(a). At this instant the rotor is stationary while the stator flux (the R.M.F.) is rotating, so there clearly exists a relative motion between the R.M.F. and the rotor conductors. The R.M.F. gets cut by the rotor conductors as it sweeps over them. Whenever a conductor cuts flux, an e.m.f. gets induced in it. So an e.m.f. gets induced in the rotor conductors, called the rotor induced e.m.f. This is electromagnetic induction. As the rotor forms a closed circuit, the induced e.m.f. circulates a current through the rotor, called the rotor current, as shown in Fig. 1(b). Let the direction of this current be into the paper, denoted by a cross in Fig. 1(b). Any current carrying conductor produces its own flux, so the rotor produces its own flux, called the rotor flux. For the assumed direction of rotor current, the direction of the rotor flux is clockwise, as shown in Fig. 1(c). This direction can be easily determined using the right hand thumb rule. Now there are two fluxes, the R.M.F. and the rotor flux. The two fluxes interact with each other as shown in Fig. 1(d). On the left of the rotor conductor, the two fluxes cancel each other to produce a low flux density area. As flux lines act like stretched rubber bands, the high flux density area exerts a push on the rotor conductor towards the low flux density area. So the rotor conductor experiences a force from left to right in this case, as shown in Fig. 1(d), due to the interaction of the two fluxes.
As all the rotor conductors experience a force, the overall rotor experiences a torque and starts rotating. So the interaction of the two fluxes is essential for motoring action. As seen from Fig. 1(d), the direction of the force experienced is the same as that of the rotating magnetic field; hence the rotor starts rotating in the same direction as the rotating magnetic field.

Alternatively, this can be explained as follows: according to Lenz's law, the direction of the induced current in the rotor is such as to oppose the cause producing it. The cause of the rotor current is the induced e.m.f., which is induced because of the relative motion between the rotating magnetic field and the rotor conductors. Hence, to oppose the relative motion, i.e. to reduce the relative speed, the rotor experiences a torque in the same direction as the R.M.F. and tries to catch up with the speed of the rotating magnetic field. So, with

Ns = speed of the rotating magnetic field in r.p.m.,
N = speed of the rotor, i.e. the motor, in r.p.m.,
Ns – N = relative speed between the two, the rotating magnetic field and the rotor conductors,

the rotor always rotates in the same direction as the R.M.F.

Can N = Ns? When the rotor starts rotating, it tries to catch the speed of the rotating magnetic field. If it caught up with the speed of the rotating magnetic field, the relative motion between the rotor and the rotating magnetic field would vanish (Ns – N = 0). But the relative motion is the very cause of the induced e.m.f. in the rotor. So the induced e.m.f. would vanish, and hence there could be no rotor current and no rotor flux, which are essential to produce torque on the rotor. Eventually the motor would stop; but immediately a relative motion between the rotor and the rotating magnetic field would again exist, and it would start once more. Due to the inertia of the rotor, this does not happen in practice, and the motor continues to rotate with a speed slightly less than the synchronous speed of the rotating magnetic field in the steady state.
The induction motor never rotates at synchronous speed. The speed at which it rotates is hence called the subsynchronous speed, and the motor is sometimes called an asynchronous motor.

Thus N < Ns.

So it can be said that the rotor slips behind the rotating magnetic field produced by the stator. The difference between the two is called the slip speed of the motor:

Ns – N = slip speed of the motor in r.p.m.

This speed decides the magnitude of the induced e.m.f. and the rotor current, which in turn decide the torque produced. The torque produced is as per the requirement of overcoming the friction and iron losses of the motor, along with the torque demanded by the load on the rotor.
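The speed relations above are easy to check numerically. Below is a minimal Python sketch (the function names are mine, not from the text) computing the synchronous speed Ns = 120f/P and the fractional slip for a given rotor speed:

```python
def synchronous_speed(f, p):
    """Synchronous speed Ns in r.p.m. for supply frequency f in Hz
    and p stator poles: Ns = 120 * f / p."""
    return 120.0 * f / p

def slip(ns, n):
    """Fractional slip s = (Ns - N) / Ns for rotor speed N in r.p.m.
    A running induction motor always has s > 0, since N < Ns."""
    return (ns - n) / ns

# Example: a 4-pole motor on a 50 Hz supply.
ns = synchronous_speed(50, 4)   # 1500.0 r.p.m.
s = slip(ns, 1440)              # rotor at 1440 r.p.m. gives s = 0.04
print(ns, s)
```

For the 4-pole, 50 Hz case the slip speed Ns – N is 60 r.p.m., a slip of 4 percent, which is consistent with the steady-state behaviour described above: the rotor settles slightly below synchronous speed.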
Office Hours: By appointment.

Session Five: Learning and Motivation / Gagne's Theory of Instruction
Instructor notes written by J. David Perry, Ph.D., Indiana University.

Until they reach a certain age, all U.S. children must attend school, whether they want to be there or not. And even college students may be lacking in motivation. They may be in college only because of family expectations, or they may be in a particular course because it is required rather than because it interests them. Whatever the reason, enhancing student motivation has long been understood as an important part of the teaching-learning process. This unit deals with systematic, research-based efforts to understand the roots of motivation and to identify what teachers and students can do to enhance it.

Keller's ARCS model
Keller's ARCS model attempts to identify the necessary components of motivation in instructional settings. These are said to be Attention, Relevance, Confidence, and Satisfaction. Gaining attention is perhaps the easiest of the requirements to satisfy, at least for most learners. Suggestions include framing new information in such a way that it arouses curiosity, proposes a mystery to be resolved, or presents a challenging problem to be solved. In addition, varying the presentation style helps to maintain attention. Establishing relevance includes relating new material to the learners' own needs and interests, or showing them how they will be able to use the new skills. Relevance may also entail relating new learning to things that are already familiar to learners. In this way it parallels findings from cognitive research showing that new information is most comprehensible when it can be related to what the learner already knows.
Building confidence, according to Keller, can be accomplished by strategies such as clarifying instructional goals or letting learners set their own goals, helping students succeed at challenging tasks, and providing them with some control over their own learning. However, other researchers such as Bandura and Weiner have shown that confidence is a complex construct that may need to be further analyzed in order to be supported. Generating satisfaction can best be accomplished by giving learners a chance to use new skills in some meaningful activity. For example, workers who are trained to use a new software package will likely feel satisfaction if they are immediately given an opportunity to apply their new skills to a real work project. In the absence of such natural positive consequences, Keller suggests rewards such as verbal praise. He also notes the importance of establishing a sense of fairness by maintaining consistent standards and matching outcomes to expectations. Keller urges instructors to analyze the audience or student population to determine the level of intrinsic motivation to learn the new information or skills. Obviously, elaborate planning for extrinsic motivation is not needed when intrinsic motivation is high.

Bandura's self-efficacy theory
Bandura's theory holds that the ability to learn new skills and information is influenced by feelings of "self-efficacy." Self-efficacy is composed of at least two components: beliefs about whether one is capable of performing (or learning) some task, and beliefs about whether such performance will lead to desirable outcomes. For example, I might believe myself to be capable of learning the basics of automobile maintenance, but I might have no expectation of ever using such knowledge to maintain my own vehicle. Conversely, I might doubt my ability to learn automobile maintenance, even though I wanted very much to be able to change my own oil, etc.
In either case, my motivation to perform well in an auto maintenance class would likely be compromised. The theory further suggests that the two most powerful sources of self-efficacy are the learner's own previous experiences with similar tasks and observations of others' experiences. In addition, verbal persuasion and physiological states can contribute to self-efficacy judgments. Note that self-efficacy is unlike general qualities such as self-esteem, because self-efficacy can differ greatly from one task or domain to another. I may have very high self-efficacy about learning to play the piano and very low self-efficacy concerning learning calculus. It is also important to note that self-efficacy judgments are not necessarily related to an individual's actual ability to perform a task; rather, they are based on the person's beliefs about that ability.

Weiner's attribution theory
Attribution theory offers another window into motivation. According to the theory, our beliefs about the causes of our successes and failures influence our future motivation. We tend to attribute success and failure to factors that vary along three dimensions: internal-external, stable-unstable, and controllable-uncontrollable. Internal factors are those within the individual, while external factors come from others or the environment. So, if I did very well on a physics test, I might attribute my performance internally to the fact that I studied for eleven hours, or externally to the thought that it was a very easy test. Using the same example, I might attribute my good performance to a stable factor, such as my high aptitude for science, or to an unstable factor (I just got lucky). Similarly, I might attribute it to a controllable factor (the amount of effort I expended) or to an uncontrollable factor (the teacher made a mistake in grading my test). As you might expect, these attributions can have considerable influence on the motivation to perform.
When one attributes performance largely to internal and controllable factors, motivation tends to be higher. When one attributes performance largely to external, uncontrollable factors, motivation tends to be lower, since it appears that the outcomes are beyond the individual's control. The results for the stable-unstable dimension are less clear. For example, if I believe that my ability to learn in some domain is generally high, then stability is a positive factor; but if I believe my ability is low, then stability is a negative factor.

Instructor notes: Gagne's instructional design theory

Introduction to Gagne
Gagne's work has been particularly influential in training and the design of instructional materials. In fact, the idea that instruction can be systematically designed probably can be attributed to Gagne and a handful of others. It's interesting to speculate how his early work in Air Force training may have shaped his theory. I wonder if it might have evolved differently had he been working with college students, or 3rd graders? Gagne's theory is more properly classified as an instructional theory, rather than a learning theory. A learning theory, you will recall, consists of a set of constructs and propositions that account for how changes in human performance abilities come about. An instructional theory seeks to describe the conditions under which one can intentionally arrange for the learning of specific performance outcomes. Instructional theories are often based on one or more learning theories, but there is rarely a simple correspondence between the two. Gagne's instructional theory has three major elements. First, it is based on a taxonomy, or classification, of learning outcomes. Second, it proposes particular internal and external conditions necessary for achieving these learning outcomes. And third, it offers nine events of instruction, which serve as a template for developing and delivering a unit of instruction.
Gagne's taxonomy of learning outcomes
The notion of different "levels" of learning or knowing something is a very useful one in education. You have probably been in or observed a class where the teacher said she or he wanted to help students achieve high-level skills such as being able to analyze problems, evaluate cases, etc.; but when you looked at the test items for the class, they mostly had to do with memorizing terms and definitions. This is a "learning-levels" problem. For example, what does it mean to ask if someone "knows" a concept such as "analysis of variance" (ANOVA), the statistical procedure that some of us have encountered? Do we want to know if they can state its definition, recognize when it applies, or actually use it to answer real questions? Gagne and others thought it was important for teachers and instructional designers to think carefully about the nature of the skill or task they wanted to teach, then to make sure that the learner had the necessary prerequisites to acquire that skill. Gagne also stressed that practice and assessment should match the target skill. In other words, if we want someone to know when to use an ANOVA, and be able to use it to answer real questions, then it is of little use to test them only on their ability to write the formula. Of the five categories of learning outcomes Gagne proposes, the one that seems to have gotten the most attention is intellectual skills. It is important to understand that the five sub-categories of intellectual skills are believed to be hierarchical. That is, for a given skill at, say, the level of "defined concepts," there should be underlying discriminations and concrete concepts that must first be mastered. A common error in understanding Gagne's intellectual skill classifications is assuming too "high" a level for discriminations and concrete concepts. Remember that, according to Gagne's definition, a discrimination is a very low-level skill. It is simply the ability to recognize that one object or class of objects differs from another.
But discrimination does not include the ability to name the class of objects; if learners can do that, they have acquired a concept. Similarly, remember that a concrete concept is one that can be defined entirely by the physical, perceptual features (appearance, sound, smell, etc.) of the object or event. If it takes any abstract reasoning ability, then it is a defined concept. Here's an abbreviated definition of each of Gagne's outcome categories and sub-categories:

According to Gagne's theory, the way to determine the prerequisites for a given learning objective is to construct a learning hierarchy. A learning hierarchy (sometimes called a task analysis) is constructed by working backwards from the final learning objective. Suppose, for example, that the desired learning outcome is to be able to balance one's checkbook upon receiving the monthly bank statement. We would ask ourselves, what are the component skills of balancing a checkbook? They might include things such as identifying the relevant information on the bank statement, accurately entering deductions and deposits in the check register, and knowing to add back to one's ending balance any outstanding checks in order to reconcile the checkbook balance with that indicated on the bank statement. Assuming we decided that these were, in fact, the three component skills, we would then need to analyze each of these into more basic component skills. How many levels "deep" would we need to go in such a hierarchy? We could continue to work backwards until we reached such basic skills as reading, adding, and subtracting. However, the general rule is that one should continue the analysis until reaching the level of skills that we can reasonably expect the target learners to already possess. It is important to note that a learning hierarchy is not the same thing as a procedure, although there is some overlap between these concepts.
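The backwards-working analysis described above can be pictured as a small tree of skills. The following is a hypothetical Python sketch; the skill names paraphrase the checkbook example, and the data structure and function are mine rather than anything specified by Gagne:

```python
# A learning hierarchy as a simple tree: each target skill maps to the
# component skills that must be mastered first. Skills with no entry
# are assumed to be ones the learners already possess.
hierarchy = {
    "balance checkbook": [
        "identify relevant information on bank statement",
        "enter deductions and deposits in check register",
        "reconcile balance with outstanding checks",
    ],
    "reconcile balance with outstanding checks": [
        "add back outstanding checks to ending balance",
    ],
}

def prerequisites(skill, h):
    """Working backwards from a target skill, collect every
    lower-level component skill in the hierarchy, depth-first."""
    result = []
    for sub in h.get(skill, []):
        result.append(sub)
        result.extend(prerequisites(sub, h))
    return result

print(prerequisites("balance checkbook", hierarchy))
```

Asking for the prerequisites of the target skill walks the tree depth-first, mirroring the rule of analyzing each component skill until reaching skills the target learners can reasonably be expected to have already.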
To follow the example above, if I were going to describe the procedure for balancing a checkbook, the guiding question would be, "What is the sequence of steps that one needs to carry out in order to balance a checkbook?" But for a learning hierarchy, the question is, "What are the intellectual skills one needs to have mastered in order to balance a checkbook?" The learning hierarchy is a central idea in Gagne's learning/instructional design theory. According to the theory, one cannot adequately plan instruction without first identifying a measurable learning outcome and constructing a learning hierarchy for that outcome.

The conditions of learning
A central notion in Gagne's theory is that different kinds of learning outcomes have different internal and external conditions that support them. The external conditions are things that the teacher or instructional designer arranges during instruction. The internal conditions are skills and capabilities that the learner has already mastered (such as those that would be revealed by a learning hierarchy).

The events of instruction
Gagne's nine proposed "events of instruction" are a sequence of steps to guide the teacher or instructional designer. According to the theory, using this sequence should help to ensure that the learner masters the desired objective. The framework has been adapted for use in a variety of classroom settings, including college teaching. However, you can probably see that adapting the "events" to many classroom settings is problematic. Most teachers do not use the kind of language contained in this framework (e.g., terms such as "presenting the stimulus" or "eliciting performance"). In fact, the whole idea of framing a course as a series of skills that can be practiced and performed by students is an unfamiliar concept to some teachers. Think back to some of your own college courses. What skills did you acquire in history, philosophy, or biology courses?
Did you get a chance to practice these skills in class? How were you assessed on them?

Learning Activities for Session Five
NAPLAN (National Assessment Program – Literacy and Numeracy). ACARA (Australian Curriculum, Assessment and Reporting Authority). SCSEEC (Standing Council on School Education and Early Childhood). Acronyms that are sure to cause confusion for most, yet are commonly used by professionals in the education sector. Along with an entirely separate language of jargon, these terms are used without mercy to obfuscate and create a sense of mystique, as well as for the simple reason of using mental prototypes. It's not hard to find examples of educational jargon and acronyms; they are used constantly in public. What can be surprising is the way in which educators talk to each other. To give an example, it is not uncommon for a teacher to say "His arousal level was really high when he walked in, so I used selective attending when he was being disruptive". A plain English translation? "He walked in in a really grumpy mood, so I let him sit there and play on his phone." Reading through educational research is even worse, with an acronym soup making a mess of things. There has to be a reason for this particular language, though. And there is. It comes down to mental prototypes and shortcuts. To explain briefly, a mental prototype is how we link words to ideas. If I say the word table, everyone has a picture of a table in their head. If you say to a teacher 'arousal', then that links to a particular explanation in their head (around the levels of stress hormones in a student, what might have caused that, and how best to deal with a student with a high level of arousal). Acronyms and jargon also allow educators to share ideas without having to go into a great deal of detail. Once they start talking about pedagogy and curriculum, the jargon is a way of keeping track of complex concepts and ideas. This can lead to problems, though. There's an old adage: familiarity breeds contempt.
When you start to refer to things by an acronym, it becomes very easy to think of them as just that string of letters. The mental shortcut becomes the term itself, and much of the deeper meaning of the underlying concept is lost.

Secondly, mental prototypes are individual. The table you thought of earlier isn't the same as my table, or anyone else's, really. Sure, they'll have similar features (some number of legs, probably four, and a top surface), but there are serious differences. Is the table made from wood, or metal, or maybe glass? Is it square, rectangular, round? This is the problem with using jargon and acronyms: their use presupposes that everyone really is talking about the same thing.

The number of acronyms in use is also increasing on a daily basis (a school might use ASOT (Art and Science of Teaching) as its pedagogical basis and SWPBS (School Wide Positive Behaviour Support) for its behaviour management). By itself, this isn't a problem. But not everyone is aware of the latest trend, system, or national body, and not everyone is willing to admit they don't know. It is common to see a term used in a staff meeting followed by whispered conversations around the room as people try to work out what was just said.

The problem isn't only with educators. If they have difficulty keeping up with all the acronyms, what about parents and students? Some terms have become very familiar, like NAPLAN, but ask the students who sit the test what it means, and basically they will tell you it's a big scary test. Ask a parent about ACARA, or even worse SCSEEC, and they'll probably give you a blank look. Education is all about working with everyone involved, including parents and, most importantly, students. When these terms are used without explanation, communication becomes even harder.
This lack of communication is what destroys relationships between parents, students, and educators, and without those relationships, education simply cannot happen. Relationships and shared understandings make for good learning; excessive use of jargon makes those relationships harder to create and maintain.

All the acronyms and all the jargon do serve a purpose. They create a commonality amongst educators, a shared specific language that lets them pass around and manipulate complex ideas with ease. But they can also lead to shortcuts, to never really exploring the issue underneath the term. And when it comes to communicating with the other people involved in education, mainly the students themselves, these terms can simply mean that the student disengages. After all, good pedagogy is about developing meaningful and deep understandings via relational transactions between all stakeholders. Or, in plain English: good teaching is about working with the kids to make sure they understand what you're on about.
Objective of the drill: to improve reactive speed and train players to observe their surroundings in order to carry out tasks correctly.

Explanation of the drill: Two teams face each other on opposite sides of the net. By connecting commands to tasks, the players have to think before they act. As can be seen in the video, the coach can link numbers to a team and make them work with volleyballs.

Duration of the drill:
Incoherency – несвязность, бессвязность, непоследовательность

Nicolaus Copernicus was the first astronomer to formulate a scientifically based heliocentric cosmology that displaced the Earth from the center of the universe. His epochal book "On the Revolutions of the Celestial Spheres" is often regarded as the starting point of modern astronomy and the defining epiphany that began the Scientific Revolution. Although Greek, Indian and Muslim savants had published heliocentric hypotheses centuries before Copernicus, his publication of a scientific theory of heliocentrism, demonstrating that the motions of celestial objects can be explained without putting the Earth at rest in the center of the universe, stimulated further scientific investigations and became a landmark in the history of modern science that is known as the Copernican Revolution.

Among the great polymaths of the Renaissance, Copernicus was a mathematician, astronomer, physician, classical scholar, translator, Catholic cleric, jurist, governor, military leader, diplomat and economist.

Copernicus proposed that the planets have the Sun as the fixed point to which their motions are to be referred; that the Earth is a planet which, besides orbiting the Sun annually, also turns once daily on its own axis; and that very slow, long-term changes in the direction of this axis account for the precession of the equinoxes. This representation of the heavens is usually called heliocentric, or "Sun-centred".

According to a later horoscope, Nicolaus Copernicus was born on February 19, 1473, in Torun, a city in north-central Poland on the Vistula River south of the major Baltic seaport of Gdansk. His father, Nicolaus, was a well-to-do merchant, and his mother, Barbara Watzenrode, also came from a leading merchant family. Nicolaus was the youngest of four children.
In the Commentariolus, Copernicus postulated that, if the Sun is assumed to be at rest and the Earth is assumed to be in motion, then the remaining planets fall into an orderly relationship whereby their sidereal periods increase with distance from the Sun as follows: Mercury (88 days), Venus (225 days), Earth (1 year), Mars (1.9 years), Jupiter (12 years), and Saturn (30 years). This theory did resolve the disagreement about the ordering of the planets but, in turn, raised new problems. To accept the theory's premises, one had to abandon much of Aristotelian natural philosophy and develop a new explanation for why heavy bodies fall to a moving Earth. It was also necessary to explain how a transient body like the Earth, filled with meteorological phenomena, pestilence, and wars, could be part of a perfect and imperishable heaven. In addition, Copernicus was working with many observations that he had inherited from antiquity and whose trustworthiness he could not verify. In constructing a theory for the precession of the equinoxes, for example, he was trying to build a model based upon very small, long-term effects. And his theory for Mercury was left with serious incoherencies. It was not until Kepler that Copernicus's cluster of predictive mechanisms would be fully transformed into a new philosophy about the fundamental structure of the universe.

Exercise 10. Answer the questions:
1. Who is Nicolaus Copernicus?
2. Describe the theory of heliocentrism.
3. What scientific fields was Nicolaus Copernicus interested in?
4. When and where was Nicolaus Copernicus born?
5. Do you know anything about the scientist's parents?

Exercise 11.
Give the English equivalents to the following expressions: научно обоснованный, Вселенная, стартовая точка, эпохальный, гелиоцентрические гипотезы, расследование, Возрождение, переводчик, современная наука, физик, ежегодно, зажиточный, гороскоп, главный морской порт, самый младший, в движении, увеличиваться, разногласие, в свою очередь, новое объяснение, тяжёлые тела, движущаяся Земля, в добавление, античность, верифицировать, основанный на, оставшиеся планеты

Exercise 12. Give the Russian equivalents to the following expressions: Muslim savants, motions of celestial objects, putting the Earth at rest, the center of the universe, landmark, classical scholar, the fixed point, orbiting the Sun, long-term changes, precession, a leading merchant family, an orderly relationship, the theory's premises, transient body, imperishable heaven, cluster, predictive mechanisms.

Exercise 13. Read the following words and say what Russian words help to understand their meaning: theory, hypothesis, correlate, test, deduction, result, experiment, atom, nature, crystal, substance, regular, interpret, systematic, argument, structure.

Exercise 14. Give the initial forms of the following words and find them in the dictionary: starting, defining, centuries, scientific, investigations, mathematician, daily, representation, later, central, youngest, orderly, sidereal, disagreement, explanation, imperishable, predictive.

Lomonosov, Mikhail Vasilyevich

Vocabulary of the text
Substantial contribution – значительный вклад
Mosaic – мозаика
Good Beginnings 13: Parents Go to Class to Learn How to Nurture Children's School Success

Graciela Gomez says her 5-year-old son, Timothy, is compassionate, smart, and friendly, and she tries to nurture his self-esteem daily so he will be ready to succeed in kindergarten next month. Gomez and two dozen other parents at LAUSD's Queen Anne Place Ready for School Center in West LA are learning how, by improving their own parenting skills, setting realistic expectations, and reinforcing positive behavior, they can increase their preschoolers' chances for academic success. The class is part of this week's "Good Beginnings" talk about children's early learning, health, and safety.

"We emphasize that parents are their children's first and best teachers," said Deborah Johnson Hayes, an LAUSD early mental health coordinator. "If parents feel good about their ability to help their children learn, the children are likely to have more confidence about their own ability to succeed, so it all starts with the parents."

Tips for Building Early Self-Esteem

1. Help your child feel special and appreciated. Focus on strengths rather than weaknesses, and set aside special time each week alone with each of your children.
2. Help your child develop problem-solving and decision-making skills. For example, when your children have difficulty, ask them to think about a couple of ways of resolving the situation and use role playing to demonstrate possible outcomes.
3. Be empathetic, not critical. Instead of saying, "Why don't you listen to me?" or "Why don't you use your brain?", let children know you understand they're having difficulty and problem-solve together to find an alternate approach.
4. Provide opportunities for children to help. By allowing them to display their competence in simple ways like setting the table or putting away toys, we give children a chance to demonstrate their worth.
5. Be consistent in reinforcing positive behavior.
Provide regular praise for specific actions, such as "I really like the way you shared your toy with your baby sister," to encourage good behavior.
There are two approaches to calculating a country's GDP: the income approach and the expenditure approach. The income approach adds up all income earned within the country over a specific period of time: gross compensation of employees, gross profits earned by production units, and taxes (after deducting subsidies). The expenditure approach, by contrast, uses total government spending, consumption, investment and net exports to calculate GDP. The formula to compute GDP by expenditure is:

GDP = C + G + I + NX

where
C = consumer spending
G = sum of government spending
I = sum of business spending (investment)
NX = net exports (calculated as Exports - Imports)

The Gross National Product, or GNP, counts the final goods and services produced by factors of production that are owned by a particular country's residents over a period of time. Although GDP and GNP are very similar in nature, they differ on how production in foreign-owned companies is treated. Unlike GDP, a country's GNP does not take into account production by factors owned by foreign nationals within that country; it does, however, include the production of goods and services at overseas units owned by that country's residents.
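Since the expenditure formula is just a sum, it is easy to compute directly. A minimal sketch in Python, using made-up illustrative figures rather than real national accounts data:

```python
def gdp_expenditure(consumption, government, investment, exports, imports):
    """Compute GDP by the expenditure method: GDP = C + G + I + NX."""
    net_exports = exports - imports  # NX = Exports - Imports
    return consumption + government + investment + net_exports

# Hypothetical figures (billions of a local currency, purely illustrative).
gdp = gdp_expenditure(consumption=1200.0, government=450.0,
                      investment=300.0, exports=500.0, imports=550.0)
print(gdp)  # 1900.0; the trade deficit (NX = -50) pulls GDP down
```

Note that a trade deficit (imports exceeding exports) makes NX negative and so reduces measured GDP, as in the example above.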
In today's world, almost everything is powered by electricity. Power can be generated from renewable and non-renewable resources, but generating electricity from a renewable resource is more reliable and less harmful than from a non-renewable one. Harmful gases like carbon dioxide and methane are emitted into the atmosphere when electricity is generated from non-renewable resources. Generating electricity from the sun using solar panels is one of the most promising ways to fulfill the global energy need. Solar power is a renewable source of energy which is abundantly available in nature, so it can be put to good use for power generation.

Solar panels turn light (photons) from the sun into electricity, but their efficiency is affected by environmental factors like dust accumulation, temperature, and humidity. The panels are usually cleaned manually, and manual cleaning has several drawbacks: panels can be damaged, people can be injured, and movement across the panels is difficult. Using an automated robot to clean the solar panel increases efficiency compared to conventional cleaning methods, without any major drawbacks.

In this project, an Arduino UNO microcontroller board is used to program the robot. The following components are used to build the setup.

Arduino UNO: a microcontroller board based on the ATmega328P. It is an open-source electronics prototyping platform that allows the user to create interactive electronic projects. It is programmed using the Arduino IDE by connecting it to a computer through a cable.

DC motor: converts electrical energy into mechanical energy. DC motors are used to move the robot and to rotate the wiper.

Frame: designed to match the solar panel's dimensions; all the other components are attached to this frame.

Motor driver: a current amplifier used to convert a low-current control signal into a high-current signal capable of driving the motors.
A cleaning brush (wiper) is attached to a motor and moves along the length of the solar panel; this produces a mopping action that cleans the panel. The frame also carries another motor that drives the linear motion of the robot, so that the whole surface of the solar panel is covered. The movement of the robot is achieved with DC motors coupled to gears and wheels, and the Arduino board controls both the rotation of the wiper and the movement of the robot. Using a robot to clean the panel reduces the risk to human life, and it can also be installed on rooftop solar panels, which are otherwise difficult to access for cleaning.

Kit required to develop the solar panel cleaning robot:
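The description above amounts to a simple wipe-then-advance control cycle, repeated until the whole panel is covered; the actual firmware would be written in the Arduino IDE. As a rough, language-neutral illustration of that control loop, here is a small Python simulation. The panel length, step size, and the stand-ins for the two motors are all hypothetical assumptions, not details from the original project:

```python
def clean_panel(panel_length_cm=100, step_cm=10):
    """Simulate the wipe-then-advance cycle of the cleaning robot.

    In hardware, 'wipe' would spin the wiper DC motor through the motor
    driver, and 'advance' would pulse the drive motor to move the frame.
    Returns the positions (cm from the panel edge) where the wiper ran.
    """
    wiped_at = []
    position = 0
    while position <= panel_length_cm:
        wiped_at.append(position)   # wipe at the current position
        position += step_cm         # advance the frame one step
    return wiped_at

print(clean_panel())  # wipes at 0, 10, ..., 100: full panel coverage
```

The same loop structure would appear in the Arduino sketch, with the list append replaced by motor-driver commands and the position tracked by timing or wheel rotation.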
The motor effect

What is it? When we place a wire carrying an electric current in a magnetic field, it may experience a force. This is the motor effect.

The force produced is a maximum if the wire is at a 90 degree angle to the magnetic field, and zero if the wire is parallel to the magnetic field. The force can be increased simply by:
- Increasing the strength of the magnetic field
- Increasing the size of the current

The direction of the force produced on the wire will be reversed if either:
- The direction of the current is reversed
- The direction of the magnetic field is reversed

When a wire cuts through magnetic field lines, a potential difference (p.d.) is induced across the ends of the wire. So when a magnet is moved into a coil of wire, a p.d. is induced across the ends of the coil, and if the coil is part of a complete circuit, a current will pass through the circuit. A potential difference is only induced when there is movement.

The size of this p.d. can be increased by increasing:
- The speed of movement
- The strength of the magnetic field
- The number of turns on the coil
- The area of the coil

A transformer consists of two coils of insulated wire, called the 'primary' and 'secondary' coils, wound around the same iron core. When an alternating current (a.c.) passes through the primary coil, it produces an alternating magnetic field around the iron core, which continually expands and collapses. The alternating magnetic field lines pass through the secondary coil and induce a p.d. across its ends. If the coil is part of a circuit, an alternating current is produced.
- The coils of wire are insulated so that current does not short across either the iron core or the adjacent turns of wire.
- The core is made of iron so it is easily magnetised.

Transformers do not work with d.c. current.
Many students forget this and lose valuable marks.

Transformers and the National Grid

Transformers are used in the National Grid to increase or decrease potential difference. Step-up transformers raise the p.d. from power stations, because the higher the p.d. transmitted across the grid, the less energy is lost in the cables. Step-down transformers then reduce the p.d. so it is safe for consumers (us) to use.
- In a step-up transformer the p.d. across the secondary is greater than the p.d. across the primary.
- In a step-down transformer the p.d. across the primary is greater than the p.d. across the secondary.
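The notes above describe step-up and step-down behaviour qualitatively. Quantitatively, an ideal (lossless) transformer obeys the standard turns-ratio relation Vs / Vp = Ns / Np. A quick sketch; the turn counts and voltages below are illustrative values of my own choosing, not figures from the notes:

```python
def secondary_pd(primary_pd, primary_turns, secondary_turns):
    """Ideal transformer relation: Vs = Vp * (Ns / Np), assuming no losses."""
    return primary_pd * secondary_turns / primary_turns

# Step-up: more turns on the secondary coil, so the p.d. rises.
print(secondary_pd(25_000, 1_000, 16_000))  # 400000.0 (400 kV for the grid)

# Step-down: fewer turns on the secondary coil, so the p.d. falls.
print(secondary_pd(230, 2_000, 100))        # 11.5 (safe for a low-voltage device)
```

Real transformers lose some energy (mainly as heat in the coils and core), so the actual secondary p.d. is slightly lower than this ideal value.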
Mark Alan Canada Several of you expressed interest in the assignment I described in my “Teaching Lexicography” talk [at the DSNA Conference in Barbados]. Here it is: Coin a word and write a dictionary entry for it. Your entry should include all of the components of a standard dictionary entry: headword, pronunciation symbols, and information about part of speech, denotation, and etymology. Thus, you should become very familiar with your own hardback dictionary and understand the form and purpose of each of these components. Don’t worry about using fancy pronunciation symbols; just try to represent the pronunciation of the word with normal letters. Finally, please include a paragraph explaining your reason for coining this word, the type of process of word formation it demonstrates (blending, compounding, acronym, eponym, etc.), labels for the morphemes in the word (free/bound, base/affix, derivational affix/inflectional affix), a sentence or two about the word’s part of speech, and at least one sample sentence. (Length: 250-300 words. Sources: 2. Value: 10 points.) As I explained in my talk, I used this assignment in an English grammar class, but I suspect that one could adapt it for other kinds of classes, such as a class on basic linguistics or lexicography.
Reading, Technology, and Inquiry-based Learning Through Literature-Rich WebQuests

The use of technology in teaching and learning can help bring reading alive for children and young adults. This article focuses on how a technology-rich environment can facilitate the reading experience and help students meet challenging standards while addressing essential questions that bring meaning to learning. Through the use of Internet-based WebQuests, students engage in problem solving, information processing, and collaboration. When these WebQuests are literature based, books become the focal point for reading-centered learning activities. The article describes creation of original WebQuests, but also explores how teachers can locate, evaluate, adapt, and integrate existing resources.

Some students come to school ready to read and with a wealth of life experiences, while others lack even basic communication skills. Over the past decade, the digital divide has contributed to this disparity: while some students have blossomed through access to technology, others whose families or schools cannot afford computers and technical resources have missed opportunities for global connections and new learning experiences. In many countries, new emphases on meeting government-mandated teaching and learning standards have increased the need to provide children with as many learning opportunities as possible. In the United States, the No Child Left Behind legislation has made it even more critical that educators provide a rich learning environment to address the needs and interests of all children.
Building reading-centered, technology-rich learning experiences is an excellent place to start. According to Coiro (2003, online document), for example, Internet-based comprehension tasks "present new purposes for reading, more critical thought processes during reading, and new examples of authentic responses after reading."

Technology can help bring reading alive for reluctant readers and for those with limited life experiences. For some children, a book such as The Diary of Anne Frank comes alive through the power of the words themselves. For others, seeing photographs of Anne Frank and the settings in which her diary unfolds expands the reading experience and makes it more concrete and easily understood. Undertaking an Internet-based activity such as The Diary of Anne Frank...In Search of Truth can help children draw the reading and resources together within meaningful, inquiry-based activities.

This article focuses on how a technology-rich learning environment can facilitate the reading experience and help students address essential questions that bring meaning to learning, thereby contributing to their ability to meet challenging standards. In order to learn, children and young adults need to find meaning in their class readings and assignments, so in designing assignments, teachers need to ask themselves what questions will bring that meaning out.

For example, if students are reading a book about the orphan trains that relocated thousands of American children in the late 1800s and early 1900s, they might focus on essential questions about that experience. McKenzie (2001, online document) states that answers to essential questions "can't be found; they must be invented. Students must find meaning to create insight."
Technology-rich activities such as interacting online with survivors of the orphan train experience, writing an editorial for a Web-based journal, or creating a video-based public service announcement focusing on what we learn from history are examples of innovative classroom assignments that help students address essential questions.

Focus on Literature-Based WebQuests

A WebQuest is an inquiry-based approach to learning that helps students explore essential questions. WebQuests provide an authentic, technology-rich environment for problem solving, information processing, and collaboration. This approach, developed in the mid-1990s by Bernie Dodge of San Diego State University, involves students in tasks that make good use of Internet-based resources. Literature-based WebQuests center the experience on reading by using books as the focal point for activities. Tasks might involve children in exploration of the theme, characters, plot, or setting of the book being studied.

Because students in any one class may have a range of skills, teachers are often concerned about how to address the reading needs of all class members. Students become frustrated when they are stuck in stagnant ability groups. The use of literature circles and whole-class WebQuest activities can alleviate this problem. Although children may read different books, they are able to participate in a shared experience based on the thematic focus of the WebQuest. For example, in a study of the orphan trains, many books at different reading levels could be used, including A Family Apart and others in the Orphan Train Adventures series by Joan Lowery Nixon; Orphan at My Door by Jean Little; Orphan Train Rider by Andrea Warren; Rodzina by Karen Cushman; and Train to Somewhere by Eve Bunting. In addition, teachers can add online reading experiences to their reading programs. With the orphan train topic, students could read historical accounts from primary resources available on the Internet.
For example, they might read a newspaper article from 1886 about the arrival of an orphan train in Kansas or a short photo-rich autobiography by a survivor. As students read the books and online resources, they generate questions, and the Internet is a perfect tool for looking for answers. Rather than rushing to do a basic Web search, we encourage teachers to use thematic websites to locate quality online resources for their students to use. For example, the 42eXplore project contains quality Web-based resources for over 300 topics popular across subject area curricula at all levels. The 42eXplore page on orphan trains begins with a description of the topic and then provides links to several good websites that offer content at different reading levels. For example, Orphan Train: The New York-Missouri Connection, from a school district in New York state, contains resources, activities, and projects for many grade levels. The 42eXplore page also provides a list of suggested activities and WebQuests, links to websites by children for children and teaching materials, and a list of related words.

Exploring WebQuests

Rather than creating a WebQuest from scratch, many teachers use resources already available online. Start by exploring literature-rich WebQuests that others have created. These projects can be simple or complex, but they share the same basic elements: an introduction, task, information resources, processes, learning advice, and evaluation. For example, the Find Frog and Toad! WebQuest (based on Arnold Lobel's popular Frog and Toad series for young children) combines science and language study by asking students to become detectives and find out the differences between frogs and toads. Four easy-to-read websites are linked, and students create a Venn diagram and a wanted poster. If you are looking for materials related to a popular book or topic, there is a good chance that someone has already created a WebQuest that fits your needs.
San Diego State University's WebQuest portal offers a matrix of examples that is a good starting place for locating relevant WebQuests. If you are not successful in locating a relevant WebQuest through one of these resources, try entering a phrase such as "orphan train WebQuest" or "animal study WebQuest" into your favorite search engine. Using this technique in a Google search yielded results including Riding the Orphan Train, a WebQuest that begins with a scene in New York City depicting a young child who has been caught stealing.

When you are exploring WebQuests to meet your needs, you will find that some fit the definition of a WebQuest, while others are simply Web-enhanced lessons or chapter activities. As you begin to compare and contrast options, ask yourself the evaluation questions from Eduscapes' Teacher Tap. The WebQuest portal site also offers a useful rubric for evaluating WebQuests.

Once you find a relevant WebQuest for your students, you will need to consider how it can best be used. Sometimes you will be lucky enough to locate a WebQuest that does not require many modifications. But, in all cases, careful planning and classroom management will increase the success of the project as you develop a classroom management plan for WebQuest integration.

It is also important to remember that WebQuests should be child centered. In other words, they should be written for students, in the language of students. Visual materials, text, and other components should be appealing, interesting, and at the reading level of the students who will be undertaking the WebQuest. What will students find interesting, motivating, or moving?

Project overview. It's a good idea to start with an overview that introduces the WebQuest to the entire class or large group. If possible, use an oversize monitor or data projector to display the screens as you move page by page through the project.
Provide background information, assignment sheets, and helpful hints. For example, you might read aloud the introduction and task for the Lord of the Flies WebQuest based on William Golding's book. In this WebQuest, students pretend that they've crash-landed on a deserted island and must learn to survive.

Technology needs. Consider what technology is needed for the WebQuest activity to be successful. For example, it is effective to kick off the project with a large group activity using a data projector or large-screen monitor. Learning centers or computer clusters for small group use work well for some WebQuests. In cases where each student needs computer access, consider scheduling a session in the school computer lab or checking out laptops if your media center has these available.

Timesavers. Making the best use of instructional time is essential. This is particularly important in schools with limited technology access. Consider what elements of the WebQuest can be printed for quick access. Also consider whether static webpages can be printed and read from paper. Reserve the Internet-connected computers for visiting websites that are dynamic, interactive, or constantly changing, and for e-mail exchanges and online discussions.

Student teams. Many WebQuests build small group activities into the project, with each group member assigned an individual role. For example, there may be a group goal or mission in addition to individual activities. Many teachers who use literature circles start with the roles familiar from that activity. The roles encourage students to focus on different cognitive perspectives related to their reading and draw on different intelligences. At first, the roles may be primarily directed at the reading. For example, for a given chapter, one student may write discussion questions, another visualizes the setting through art, while still another identifies new vocabulary or interesting passages.
As these roles become a natural part of the circle, you may shift them so that they become more activity specific to the WebQuest. Many WebQuests are designed with specific roles for students to play. In the literature-based WebQuest Unfortunately, Mr. Snicket..., which focuses on the popular author Lemony Snicket, students take on the roles of characters in The Bad Beginning. Asking students to take on a role of some kind is particularly helpful in differentiating a WebQuest to meet individual needs or to address learning styles. The roles can be static or can rotate through the project. Sometimes jigsaw activities or presentations are used to share ideas across groups. In the case of student teams, it's important to consider both individual and team assessments.

Project headquarters. Many teachers find that creating a project headquarters in their classroom promotes reading and motivation. Designed as a learning center with tables, chairs, and computers, it might also include notebooks of materials, reading materials, clipboards, a decorated bulletin board of student materials, and real objects related to the topic or book being studied.

Student assessment. Multiple assessments are important in literature-based WebQuests. Both process and product assessments should be implemented for individuals and for groups. Checklists and rubrics are common tools for assessment of learning from WebQuest activities. For example, in a WebQuest based on Patricia MacLachlan's Sarah, Plain and Tall, a rubric is provided for use in evaluation.

Sometimes you may not be able to locate an existing WebQuest for a particular book, topic, or level. Rather than creating your own WebQuest from scratch, consider adapting a WebQuest. For example, Who Needs a Fairy Godmother, Anyway? A Cinderella WebQuest for Grades 1-2 could be adapted for different grade levels or fairy tales. Consider some of the following areas when adapting a WebQuest.

Making links.
Sometimes the links provided in WebQuests are no longer active. (Many people call this link rot.) You can deal with this by identifying new links. In other cases, the resources linked in the original WebQuest may not be inactive, but may simply be ineffective in your teaching context. You might locate new resources at different reading levels, with new content, multiple perspectives, or different channels of communication, such as audio or video. For example, in Tall Tale News, students are asked to create a tall tale based on a news article they find online. The links at this WebQuest include adult resources such as CNN, The New York Post, and The Sun. You might include additional links to content designed for readers with lower reading levels, such as Time for Kids and Yahooligans! News. Elements. Mix and match the best elements from a number of WebQuests. Choose the best scenarios, links, processes, products, and assessment ideas from a variety of options. For example, there are many WebQuests available for Gary Paulsen's Hatchet; you might like the introduction in one, the task in another, and the resources in a third option. The process packets provided in Survival! Lost in the Canadian Wilderness might be particularly appealing. Level and focus. WebQuests can often be adapted for another level or purpose. You might change the grade level, standards focus, or motivation aspects. If the introduction is boring, enhance the scenario to add interest. If the books are too easy or too challenging, consider new resources. Think about modifying the learning outcomes or products. The WebQuest could also be revised to increase readability. For example, many middle school teachers use WebQuests focusing on Karen Cushman's Catherine, Called Birdy and The Midwife's Apprentice when teaching about the Middle Ages. The central character in both books is a girl. Adapting one of these WebQuests to focus on Avi's Crispin, which features a boy, might be a nice alternative. Region. 
Some WebQuests are designed around a particular local historical event or natural area. For example, Hoosier Town Water Mystery is set in the U.S. state of Indiana, but it could be adapted for the study of water pollution issues in other locations. Sometimes a WebQuest can be enhanced for a different book or author focus. Extend. Many WebQuests are designed in professional development workshops or university courses. As a result, some are incomplete or become dated; many lack detailed directions, assessments, or resources. In some cases, a WebQuest just needs breadth or depth. Finally, many WebQuests are designed for a range of content areas; not all are literature based. Consider adding a reading component to a WebQuest that is primarily focused on another subject area, such as science or social studies. For example, there are many novels that could be added to the WebQuest The Wright Brothers: From Dayton to Kittyhawk. If you choose to adapt a WebQuest, be sure to give credit to the resources you used. If you take content word for word from another source and you plan to post your project on the World Wide Web, you should get permission from the original developer. In most cases, an e-mail address is provided on the WebQuest. Most people are thrilled to hear that others are using their work. Once you feel comfortable with using WebQuests, try creating your own. Some people choose to develop their own materials from scratch using Web development tools. Others prefer to start with resources and models available online, such as the WebQuest templates and design patterns at the WebQuest portal site. Others use services such as Filamentality. Outcomes. Start by deciding what you want to accomplish with the WebQuest. Ask yourself what you want students to learn and be able to do. Then, consider whether a resource already exists to fit your needs. If not, then decide to create your own. Introduction. 
Start with an introduction that's interesting, motivating, relevant, and timely to set the stage for learning. This section should also provide background information. Consider something catchy such as a quotation, poem, or vignette. For example, Digging for Dinosaurs begins with a letter from a paleontologist. For WebQuests with a reading focus, consider starting with a book description, genre definition, photograph to establish the setting, or characters from the book. A WebQuest for The Islander opens with a summary of the book. The introduction for a WebQuest for The Outsiders includes a Robert Frost poem. Poetry Quest also begins with a poem, as does Welcome to Charlie and the Chocolate Factory. Over the Rainbow and Beyond starts by asking visitors to imagine they are Munchkins. Task. The task is a critical element of the WebQuest. Choose a mission that is doable and interesting. It might include a series of questions to be explored, a summary to be created, a problem to be solved, a position to be debated, or something to create. The task should require thinking. In a literature-rich WebQuest, the task is often related to the characters, plot, theme, or setting of the book or story under study. Rather than asking students to write a report or answer a list of questions, think of creative ways they can be encouraged to express themselves. For example, could they create and videotape a skit based on a historical event or conduct an e-mail interview for a career exploration? Could they build a time capsule, design a society, hold a mock trial, persuade a group to build a museum, or design a theme park? In a WebQuest for My Brother Sam Is Dead, students work in teams to design a time capsule filled with documents, artifacts, and personal effects related to the American Revolutionary War. In Kids Court: Finding Justice in Fairy Tales, children conduct mock trials of characters in fairy tales. 
In The Little House on the Prairie: Explore the Places the Ingalls Lived, readers are asked to create a museum. The WebQuest Taskonomy: A Taxonomy of Tasks at the San Diego State University site contains many ideas for designing effective activities for WebQuests. Information resources. Specific, appropriate resources are an essential element of an effective WebQuest. Web documents, experts available through the Internet, searchable databases, books, real objects, and original content are all materials that help extend a novel or other literature. In Nonfiction Rules!, students use Web resources to explore nonfiction texts. Many WebQuests, such as Caterpillar Confusion, include both print and online resources. Process. Students need quality materials to facilitate learning. These may include detailed activity descriptions, step-by-step instructions, timelines, and checklists. Resources such as assignments, questions, links to website resources, and descriptions of requirements are often posted in the WebQuest and made available in printed format. For young children, directions and links are often provided with graphic clues and icons. In Meeting in the Mitten, for example, kindergartners click on pictures of mittens to locate information about Jan Brett's book The Mitten. In a WebQuest for Dragonwings by Laurence Yep, students are taken step by step through the process of creating their own newspaper, including the front page, feature story, political cartoons, and letters to the editor. Learning advice. Beyond the directions needed to complete the activities, students often need additional advice. This may include a description of how information or notes should be organized, guiding questions, or directions to follow. Students can be given help or templates for creating timelines, concept maps, cause-effect diagrams, action plans, or other process-oriented activities. 
In Author Expert, for example, children choose an author of interest, use Kidspiration templates to organize ideas, and, with the help of a friendly letter format, write a letter to that author. Evaluation. Student assessment often involves contracts, checklists, or rubrics that relate directly to the processes and products outlined in the WebQuest. In some cases, students are even involved in developing their own assessment tools, such as quiz questions or checklists. In Designing Hermit's New Home, students read a book about hermit crabs and write a play. The products are evaluated using a rubric. Conclusion. Many WebQuests contain a culminating activity to bring the project to a close. The conclusion may also remind learners about what they've learned and encourage them to extend beyond the experience. Other elements. Some WebQuests contain elements beyond the basics. For example, student roles can be a key component in WebQuests. This may simply involve breaking the group into smaller groups, or it may involve different assignments such as writer, artist, and director. In Study Insects with Eric Carle, for example, students choose to learn about crickets, beetles, fireflies, caterpillars, or ladybugs. In addition, most WebQuests provide a teacher resource page with information about standards, lessons, and other resources. Although a WebQuest can be designed for any subject, literature-rich WebQuests are particularly interesting because they use technology to bring reading alive for students. From sounds and photographs to movies and primary source documents, the Internet is filled with exciting materials that can provide insight into the theme, plot, setting, or characters of books for children and young adults. Characters. What type of clothing was worn during World War I? What festivals would the Korean characters celebrate? What does an Irish accent sound like? These are the types of questions that students often ask when reading a novel. 
The Internet provides quick and easy access to resources that can help students learn more about the time and place where the characters live. By using a WebQuest to guide students to these resources, teachers can help them gain insights into the experiences, frustrations, and relationships of the characters. In The Culture of War: A Closer Look at Women During the Time of Louisa May Alcott's Little Women, for example, students explore the lives of women living during the period of the American Civil War. From The Door in the Wall and the Middle Ages, students can learn about life in that time period. Theme and plot. What happened before the book took place? What would have happened if this historical event had turned out differently? What might have caused the conflict or ended it? What are the legal and moral issues involved? What's the science of the topic? Could the events presented in this book really happen? These are the types of questions students ask about the theme and plot of books. Students enjoy learning about the science of science fiction novels and discussing the social issues raised in realistic fiction. The Internet can provide facts and information to address students' questions. Through WebQuests, students can also investigate how a change in characters or setting might change the outcome of an event or problem. In The Journey Continues, students learn about Ancient Greek culture and create a sequel to The Odyssey. Settings. Some students in a class may have seen beaches, mountains, cities, or farms, while others may never have left their own community. WebQuests such as one for Natalie Babbitt's Tuck Everlasting can help students visualize the settings of books through photographs and drawings. Computer technology can also help students create their own vision of settings with paint software or other programs for illustrating and creating graphics, and through computer-generated modeling. Genres. 
From science fiction to realistic fiction, WebQuests are a great companion for all types of literature. Historical fiction WebQuests such as Fact or Fiction: An Analysis of Historical Fiction Literature by Elizabeth George Speare and Fact or Fiction (focusing on Scott O'Dell's Island of the Blue Dolphins and Zia) provide students with the opportunity to compare fact and fiction related to real events. Realistic fiction WebQuests such as The Real Stuff help students understand the genre of realistic fiction. WebQuests can also help students identify information about events that happened at the same time or in the same place as those described in the literature. Online resources can guide students to the information needed to build timelines and maps, or even to speculate on the future. Some WebQuests combine genres and subject areas to provide an interdisciplinary approach to reading and learning. For example, in a WebQuest based on The Lorax by Dr. Seuss, students engage in activities related to reading, writing, and science. Authors. Many WebQuests focus on author studies, asking students to explore the life of an author, examine specific genres, or compare books by the same author. For example, the Patricia Polacco WebQuest highlights for students the many books by this author. Connections. WebQuests can focus on a single work or on multiple books with the same theme or topic, or by the same author. Multicultural Cinderella Folk Tales, for example, explores different versions of the same familiar story. Many teachers prefer these sorts of WebQuests, because they rely on a range of materials that can meet students' individual reading levels and interests. The individual activities can then be brought together using a literature circle approach. With so many options, some teachers may feel overwhelmed by the prospect of combining reading and technology. 
The following eight strategies will help keep the process simple and maintain a focus on teaching and learning. Paper. Remember the power of paper. Although technology is a motivating and effective tool, sometimes paper provides a simple solution. Books are still the most portable reading material. Using printed versions of support materials such as guides, word lists, worksheets, organizers, rubrics, and webpages can often be much more efficient than relying on computer-based materials. For example, the WebQuest Teddy Bears and Bears in Literature contains a number of resources that can be printed for young children. Pictures. Pictures are also very powerful. A majority of your students are probably visual learners. Photographs, clip art, graphics, drawings, book covers, illustrations, and student-produced art are all tools to promote understanding for students whose learning is enriched by visual representations. For example, in a WebQuest based on Mildred Taylor's Roll of Thunder, Hear My Cry, book cover art and historical photographs bring the novel's time period alive for students. Interaction. Collaborative projects, e-pals, online experiments, and online book reviews are just a few examples of the ways the Internet can be used to promote interaction. Some WebQuests ask students to interact with peers, experts, or community members through e-mail. In a WebQuest based on the novel The Pigman by Paul Zindel, for example, eighth graders correspond with retirees in the community. Differentiation. WebQuests can be designed to involve all children in reading, regardless of ability level. Through book assignments, varied roles, and website choices, learners can be challenged at and just above their level. Choice is an important part of differentiation. In Literary WebQuest: The Nature Poets, for example, older learners can choose to focus on Emerson, Whitman, or Thoreau. Learner centered. 
Keep in mind that WebQuests should be written for students, not as teacher lesson plans. Swimmy and the Deep Blue Sea is a good example of an appropriately written WebQuest for six- to eight-year-olds. When you focus on student motivation, reading level, and quality directions, your WebQuests will help children and young adults become independent readers and learners. Controversy. Students become engaged in activities that require them to critique, debate, and discuss, such as those presented in Book Burning: It's Not Just Science Fiction, a WebQuest on the topic of censorship. WebQuests that provide multiple perspectives and information about current issues and interesting trends make it easier to develop higher-order thinking skills. Transformations. Application is an important part of reading. Students need to work with the information they read by discussing, debating, or demonstrating applications of it. They might present their ideas, videotape a skit, or build a product. This transformation of ideas enriches the learning process. Meaningfulness. Students become engaged in activities that are authentic, such as those that present real-world scenarios, experiments, applications, and sharing. WebQuests can engage children and young adults in technology-enhanced, inquiry-based learning. By using literature as the focal point for meaningful activities, students learn to connect books with other resources, including websites, video, and communication tools. In addition, learners begin to see the relationships among language arts, math, science, social studies, and other content areas. Literature-rich WebQuests provide teachers with an effective method of promoting inquiry-based learning, organizing resources, and managing classrooms. The addition of other teaching strategies and techniques such as literature circles and thematic materials enhances this learning environment. 
As teachers seek ways to motivate readers, address individual differences, and promote information fluency, literature-rich WebQuests can be valuable technology tools.

About the Authors

Berhane Teclehaimanot is an assistant professor of curriculum and instruction and director of the Carver Teacher Education Center at the University of Toledo, Toledo, Ohio, USA. He is also the principal investigator for Teachers Info-Port to Technology (TIPT), a project funded by a Preparing Tomorrow's Teachers to Use Technology (PT3) grant from the U.S. Department of Education.

Annette Lamb is a professor in the School of Library and Information Science at Indiana University-Purdue University at Indianapolis (IUPUI), where she teaches online graduate courses for librarians and educators. As president of Lamb Learning Group, she also conducts professional development workshops and presentations focusing on ways that technology can be effectively integrated in the classroom. Her website, Eduscapes, includes a wide range of resources for educators.

Citation: Teclehaimanot, B., & Lamb, A. (2004, March/April). Reading, technology, and inquiry-based learning through literature-rich WebQuests. Reading Online, 7(4). 
Available: http://www.readingonline.org/articles/art_index.asp?HREF=teclehaimanot/index.html Reading Online, www.readingonline.org. Posted March 2004. © 2004 International Reading Association, Inc. ISSN 1096-1232
When making portraits, Expressionist artists tried to express meaning or emotional experience rather than physical reality. The artists manipulated their subjects’ appearance to express what cannot be easily seen. In other words, Expressionist artists, primarily working in Germany and Austria during the 1910s and 1920s and still reeling from the carnage of World War I, were less interested in accurately depicting their subject’s facial features than in capturing their psychological state. They used formal devices such as distortion, non-realistic colors, and unusual settings to help achieve this. Subject: The visual or narrative focus of a work of art. Portrait: A representation of a particular individual. VIDEO: Pressure + Ink: An Introduction to Relief Printmaking Questions & Activities Consider the self-portraits by Oskar Kokoschka and Käthe Kollwitz. In your view, which artist is portrayed more sympathetically? Why might this be? Reflect. Summarize your thoughts in a one-paragraph essay. Many Expressionist portraits featured hands prominently. Käthe Kollwitz was famous for her figures’ large, strong hands, and many of Kokoschka’s figures have their hands in the air. Gestures communicate specific feelings or messages, often with hands. Some gestures can be ambiguous, leading people to interpret them differently. Create. Working with a partner or alone, come up with gestures that communicate each of the following: happiness, sadness, fear, and anger. Look closely at Kokoschka’s Hans Tietze and Erica Tietze-Conrat. Imagine. With a friend or classmate, role-play a conversation between the two people pictured. What might they say? What do their gestures suggest about their relationship? During the 1930s, the Nazi party rose to power in Germany. Many artists and intellectuals were affected by the suppression of political, individual, and artistic rights. 
The Nazis declared the work of many modern artists, including Expressionists like Kokoschka and Grosz, to be “degenerate.” Their work was confiscated from German museums and eventually displayed in the 1937 Entartete Kunst (Degenerate Art) exhibition in Munich. This exhibition featured a chaotic display of over 650 confiscated paintings, sculptures, publications, and works on paper, all ridiculed in a series of derisive texts. Many works were later sold at auction to private collections or museums; others were burned by Nazi officials. In time, Reich Minister of Propaganda Josef Goebbels ordered a more thorough scouring of public and private art collections in Germany; an estimated 16,000 artworks were confiscated in this manner. Some artworks were never recovered. Research the impact of political events in Germany during this period on the artists in this theme. Write a two-paragraph response with your findings. Create a portrait of someone you know, such as a friend or family member, using any medium: painting, drawing, sculpture, photography, etc. Reflect. As you make your portrait, consider your artistic choices: What do you want other people to know about this person? How did you choose to represent these details? Are there certain characteristics of this person that you excluded? Did you create your portrait from direct observation, from memory, or from a photograph? Why?
Students explore why people get cancer. They examine human cells up close by looking at their own: each student takes a small sample of the epithelial cells that line the inside of the mouth. They glimpse how scientists investigate the inside of cells. Do Cell Phones Cause Brain Tumors? It appears everyone has a cell phone, but are they damaging our health? A thought-provoking video addresses this question by pulling together multiple studies from around the world. It explains the methodology and conclusions of each. 8 mins 9th - 12th Science CCSS: Adaptable Who's at Risk for Colon Cancer? Colon cancer is one of the most preventable types of cancer. Scholars learn how colon cancer develops and spreads. They also learn risk factors, tests, and treatments before answering eight comprehension questions. 5 mins 9th - Higher Ed Science CCSS: Adaptable The Surprising Cause of Stomach Ulcers That raging fire in your belly is not necessarily the burrito you had for lunch! Aspiring doctors get an in-depth look at the cause of stomach ulcers with an interesting video. The narrator discusses ulcer treatments of the past, how... 6 mins 6th - 12th Science CCSS: Adaptable Why Do We Have to Wear Sunscreen? Impress upon your learners the importance of using sunscreen to protect their skin throughout life. With this video, they will learn not only about the basics of how much sunscreen to apply and for how long, but they will also have the... 5 mins 8th - 12th Science CCSS: Adaptable
Thomas Hardy Long Fiction Analysis In The Courage to Be (1952), Paul Tillich asserts that “the decisive event which underlies the search for meaning and the despair of it in the twentieth century is the loss of God in the nineteenth century.” Most critics of the literature of the nineteenth century have accepted this notion and have established a new perspective for studying the period by demonstrating that what is now referred to as the “modern situation” or the “modern artistic dilemma” actually began with the breakup of a value-ordered universe in the Romantic period. Thomas Hardy, in both philosophical attitude and artistic technique, firmly belongs in this modern tradition. It is a critical commonplace that at the beginning of his literary career Hardy experienced a loss of belief in a divinely ordered universe. The impact of this loss on Hardy cannot be overestimated. In his childhood recollections he appears as an extremely sensitive boy who attended church so regularly that he knew the service by heart and who firmly believed in a personal and just God who ruled the universe and took cognizance of the situation of humanity. Consequently, when he moved to London in his twenties and was exposed to the concept of a demythologized religion in the Essays and Reviews and the valueless nonteleological world of Charles Darwin’s On the Origin of Species by Means of Natural Selection: Or, The Preservation of Favoured Races in the Struggle for Life (1859), the loss of his childhood god was a traumatic experience. What is often called Hardy’s philosophy can be summed up by one of his earliest notebook entries in 1865: “The world does not despise us; it only neglects us.” An interpretation of any of Hardy’s novels must begin with this assumption. 
The difference between Hardy and other nineteenth century artists who experienced similar loss of belief is that while others were able to achieve a measure of faith—William Wordsworth reaffirmed an organic concept of nature and of the creative mind that can penetrate it, and Thomas Carlyle finally came to a similar affirmation of nature as alive and progressive—Hardy never made such an affirmative leap to transcendent value. Hardy was more akin to another Romantic figure, Samuel Taylor Coleridge’s Ancient Mariner, who, having experienced the nightmarish chaos of a world without meaning or value, can never fully get back into an ordered world again. Hardy was constantly trying to find a way out of his isolated dilemma, constantly trying to find a value to which he could cling in a world of accident, chance, and meaningless indifference. Since he refused to give in to hope for an external value, however, he refused to submit to illusions of transcendence; the only possibility for him was to find some kind of value in the emptiness itself. Like the Ancient Mariner, all Hardy had was his story of loss and despair, chaos and meaninglessness. If value were to be found at all, it lay in the complete commitment to this story—“facing the worst,” and playing it back over and over again, exploring its implications, making others aware of its truth. Consequently, Hardy’s art can be seen as a series of variations in form on this one barren theme of loss and chaos—“questionings in the exploration of reality.” While Hardy could imitate popular forms and create popular novels such as Desperate Remedies, an imitation of Wilkie Collins’s detective novel, or The Hand of Ethelberta, an imitation of the social comedy popular at the time, when he wished to write a serious novel, one that would truly express his vision of humanity’s situation in the universe, he could find no adequate model in the novels of his contemporaries. 
He solved this first basic problem in his search for form by returning to the tragic drama of the Greek and Elizabethan ages—a mode with which he was familiar through extensive early reading. Another Greek and Elizabethan mode he used, although he was less conscious of its literary...
The Need for Humane Education Humane education can play an important role in creating a compassionate and caring society which would take benign responsibility for ourselves, each other, our fellow animals and the earth. As regards our fellow animals, humane education works at the root causes of human cruelty and abuse of animals. There is now abundant scientific evidence that animals are sentient beings, with the capacity to experience ‘feelings’. They have the ability to enjoy life’s basic gifts as well as the ability to suffer emotionally (as well as physically) through cruel or unkind treatment, deprivation and incarceration. This new understanding of the sentience of animals has huge implications for the way we treat them, the policies and laws we adopt, and the way in which we educate our children. 'Sentience' is the ability to experience consciousness, feelings and perceptions; including the ability to experience pain, suffering and states of wellbeing. Humane education is the building block of a humane and ethically responsible society. When educators carry out this process using successfully tried and tested methods, what they do for learners is to: - Help them to develop a personal understanding of ‘who they are’ – recognizing their own special skills, talents, abilities and fostering in them a sense of self-worth. - Help them to develop a deep feeling for animals, the environment and other people, based on empathy, understanding and respect. - Help them to develop their own personal beliefs and values, based on wisdom, justice, and compassion. - Foster a sense of responsibility that makes them want to affirm and to act upon their personal beliefs. In essence, it sets learners upon a valuable life path, based on firm moral values. In a well-structured humane education program, younger children are initially introduced to simple animal issues, and the exploration of animal sentience and needs. 
Then, gradually, learners begin to consider a whole range of ethical issues (animal, human and environmental) using resources and lesson plans designed to generate creative and critical thinking, and to assist each individual in tapping in to their inbuilt ‘moral compass’. Importantly, humane education has the potential to spur the development of empathy and compassion. Empathy is believed to be the critical element often missing in society today and the underlying reason for callous, neglectful and violent behavior. There is a well-documented link between childhood cruelty to animals and later criminality, violence and anti-social behavior; and humane education can break this cycle and replace it with one of compassion, empathy and personal responsibility. If we are to build stable and peaceful societies, then humane education must play a vital role in childhood development. Research has also shown that humane education has an even wider range of positive social and educational outcomes. These even extend to areas such as: bullying, teenage pregnancies, drug-taking, racism, and the persecution of minority groups. It has also been shown to increase school attendance rates, enhance school relationships and behavior, and to improve academic achievement. Learners who demonstrate respect for others and practice positive interactions, and whose respectful attitudes and productive communication skills are acknowledged and rewarded, are more likely to continue to demonstrate such behavior. Students who feel secure and respected can better apply themselves to learning. Students who are encouraged to understand and live by their own moral compass find it easier to thrive in educational environments and in the wider world. Humane education should be an essential part of a student’s education as it reduces violence and builds moral character. It is needed to develop an enlightened society that has empathy and respect for life, thus breaking the cycle of violence and abuse. 
The development of ethics and values in society is something that we ignore at our peril. These must be included in the schools’ curriculum, with humane education at the core.

When animals are abused and badly treated in a home, there’s a strong chance that people are also being abused in that home by way of child abuse, spouse abuse, and/or abuse of the elderly. When a home is not a safe and caring place for animals, it is not a safe and caring place for people either.

Research by psychologists, sociologists and criminologists has demonstrated the link between animal abuse and human abuse. This research over the last 40 years shows that ‘the first strike’ – a person’s first act of violence – is usually aimed at an animal and should be seen as a danger sign for other members of the family. (Source: Humane Society of the United States: First Strike: The Violence Connection.)

There has always been anecdotal evidence supporting the connection between animal cruelty and violent behavior against people. The 'Son of Sam' murderer in New York City, for example, reportedly (Washington Star, 1977) hated dogs and killed a number of neighborhood animals. Another newspaper article (Washington Post, 1979) reported a mass killer as having immersed cats in containers of battery acid as a child. Albert DeSalvo, the notorious Boston Strangler, trapped dogs and cats, placed them in orange crates, and shot arrows through the boxes (Fucini, 1978).

In addition to this anecdotal evidence, there have now been a number of psychological studies carried out which show links between childhood cruelty to animals and later criminality. In some cases, such acts were a precursor to child abuse.
Some of these reports were commissioned by humane societies in an attempt to persuade government authorities of the seriousness of animal cruelty cases. Among them was the Kellert/Felthouse study, which confirmed a strong correlation between childhood cruelty to animals and future antisocial and aggressive behavior. It stressed the need for researchers, clinicians and societal leaders to be alert to the importance of childhood animal cruelty, and suggested that the evolution of a more gentle and benign relationship in human society might be enhanced by our promotion of a more positive and nurturing ethic between children and animals.

Such path-finding studies are of key importance for society and educators alike. Amongst their findings are:

- In one community in England, 83% of families with a history of animal abuse had been identified as having children at risk from abuse or neglect;
- Of 57 families treated by New Jersey's Division of Youth and Family Services for incidents of child abuse, pets had been abused in 88% of cases, usually by the parent;
- A behavioral triad of cruelty to animals, bed wetting and fire setting in childhood is strongly indicative of likely violent behavior in adulthood; and
- There is a significantly higher incidence of behavior involving cruelty to animals, usually prior to age 25, in people who go on to commit mass or serial murders.

A useful book which brings together research in this area and charts some actions already being taken to address this problem is 'Child Abuse, Domestic Violence, and Animal Abuse: Linking the Circles of Compassion for Prevention and Intervention' by Frank Ascione and Phil Arkow, published by Purdue University Press. The book can be ordered from online bookshops.
Arkow points out:

- While being kind to animals is certainly a nice thing to do, and is certainly the right thing to do, it is only when people in leadership positions recognize that animal abuse has adverse effects on humans that animal maltreatment will become culturally unacceptable and real, lasting changes will be made.
- The abuse of animals is often the first step on the slippery slope of desensitization, the first step down that slope of a lack of empathy and violence.
- All too often animals are the first victims, and what should be seen as a red flag or warning marker is readily dismissed by parents and teachers as ‘oh well, boys will be boys’, or ‘it’s only a rabbit, what’s the big deal?’
- Children who grow up in abusive environments frequently become abusers themselves.

By linking bullying and other antisocial behaviors with animal abuse, teachers can help their students take home a sense of empathy – not just for animals, but also for their peers and family – and a sense of responsibility to their community.

Another important book, which includes scientific research on the connection between animal abuse and child abuse, is ‘The Link between Animal Abuse and Human Violence’ (2009), edited by Professor Andrew Linzey (Director of the Oxford Centre for Animal Ethics) and comprising the work of 36 international academics in fields as varied as the Social Sciences, Criminology, Developmental Psychology, Human Rights, Applied Childhood Studies, Behavioral Science, and Child Welfare. The book can be ordered from online bookshops.

‘The Link between Animal Abuse and Human Violence’ reveals that animal abuse has a domino effect. When adults disrespect, neglect, abuse or harm an animal, it starts a process of desensitization or loss of feeling in our children – they become able to witness the neglect, hurting, harming or killing of an animal without feeling a response. Once children become desensitized, ‘habituation’ quickly sets in.
Habituation to neglect and cruelty means that abuse has become a routine part of a child’s life and is accepted as normal. Importantly, desensitization directly opposes the crucial development in early childhood of empathy. Lack of empathy leads to dehumanization because it slows down children’s emotional development, and they are not able to realize their full potential as emotionally mature adults.

What is clear from ‘The Link Between Animal Abuse and Human Violence’ is that modern socio-scientific thinking suggests that animal abuse, because of its potential to damage emotional development, can be viewed as a form of child abuse that can lead to lifelong disability, including:

- Less ability to learn, and learning problems
- Less or no ability to build or maintain satisfactory social relationships
- Inappropriate behavior and/or feelings

In addition, adults who are under-developed emotionally are more likely to resort to violence to solve problems. Abusive adults pass on this handicap to the next generation. When someone is ill-treated or relegated to a demeaning position in society, they often respond by venting their frustration on someone whose societal position is even lower than their own. By destroying or tormenting the weak, such as an animal or a child, the victim becomes, in turn, the tormenting master. The anger is directed at an innocent instead of at the perpetrator of their own victimization, and it is difficult to break the cycle of abuse.

Humane education is needed to develop an enlightened society that has empathy and respect for life, thus breaking the cycle of abuse. The aim is to create a culture of caring. It is also a sound investment – working on the prevention of criminality and antisocial behavior, which can have a massive societal cost, both in terms of reduction in 'quality of life' and in financial costs incurred through criminal damage, maintenance of law enforcement systems, court costs, prison systems and juvenile work.
An academic whose research supports the view that humane education should be a vital component of everyday learning is Dr. Kai Horsthemke from South Africa. He is an educational philosopher at the University of Witwatersrand School of Education, and received international academic acknowledgement when his paper ‘Rethinking Humane Education’ was published in the October 2009 issue of the British journal Ethics and Education.

Horsthemke points out:

- The increase in violence in South African schools, as elsewhere, has been associated with a general ‘decline in moral values’
- Taking these concepts and principles (justice, equality and rights) seriously requires extending and employing them beyond the human realm
- Humane education incorporates guidance in moral reasoning and critical thinking and engages both rationality and individual responsibility
- Decline in moral values is counteracted by an approach that combines caring with respect for rights, in order to contribute towards erasing human violence and abuse.

Dr. Horsthemke suggests that environmental and humane education may well be “the most reliable way of halting the rapid deterioration of the world and ourselves, having potentially long term benefits for both humans and nonhumans”.

The following claims were made for humane education by the US National Parent-Teacher Association Congress in 1933:

"Children trained to extend justice, kindness, and mercy to animals become more just, kind and considerate in their relations to one another. Character training along these lines in youths will result in men and women of broader sympathies; more humane, more law-abiding – in every respect more valuable – citizens. Humane education is the teaching in schools and colleges of the nations the principles of justice, goodwill, and humanity towards all life. The cultivation of the spirit of kindness to animals is but the starting point toward that larger humanity that includes one's fellow of every race and clime.
A generation of people trained in these principles will solve their international difficulties as neighbors and not as enemies."

The practice and reinforcement of kindness, of care and compassion towards animals, through formal and non-formal educational processes, is viewed as having a range of positive spin-offs in terms of pro-social attitudes towards people of a different gender, ethnic group, race, culture or nation.

The National Link Coalition is a Resource Center on the Link between Animal Abuse and Human Violence, which includes national and international ‘Link’ coalitions. It states:

“In addition to causing pain and suffering to the animals, animal abuse can be a sentinel indicator and predictor – one of the earliest ‘red flag’ warning signs of concurrent or future violent acts. Abusers, and impressionable children who witness or perpetrate abuse, become desensitized to violence and lose the ability to empathize with victims. Abuse is often cyclical and inter-generational. The earlier professionals can intervene to break the cycles of violence, the higher the rate of success.”

“As a teacher with 30 years’ experience, I do not believe that we can solve violence in our society with high fences and razor wire. If we are to fight violence effectively and uplift our communities for a sustainable future, we will have to reach into the hearts of learners and develop that vital quality called 'empathy'.” – Cape Town school teacher Vivienne Rutgers

The main objective of humane education is the development and nurturing of EMPATHY. Empathy means: I identify with the way you feel. Empathy is the ability to understand and share the feelings of another being.

Simon Baron-Cohen is a Professor of Developmental Psychopathology at the University of Cambridge, England. In his book, The Science of Evil: On Empathy and the Origins of Cruelty, he offers a new theory on what causes people to behave with extreme cruelty. He suggests that ‘evil’ can be explained as a complete lack of empathy.
He also looks at social and environmental factors that can reduce empathy, including neglect and abuse. The Science of Evil: On Empathy and the Origins of Cruelty (2011) is published by Basic Books, and can be ordered at online bookshops.

Baron-Cohen points out:

- As a scientist I want to understand what causes people to treat others as if they were mere objects
- The challenge is to explain how people are capable of causing extreme hurt, by moving the debate into the realm of science
- Let's start by substituting the concept of 'evil' with the term 'empathy erosion', a condition that arises when we objectify others. This has the effect of devaluing them, and erosion of empathy is a state of mind that can be found in any culture.

Empathy, says Professor Baron-Cohen, is like 'a dimmer switch' on a light – with a range from low to medium to high. When empathy is dimmed, it causes us to think only of our own interests. When we are solely in the 'I' mode, our empathy is switched off.

Baron-Cohen has developed a scale from 0–6 to measure the differing degrees of empathy among people – his 'Barometer of Empathy'. Level Zero is when an individual has no empathy at all. At Level 6, an individual displays remarkable empathy. The majority of people fall between Levels 2–4 on the scale. People at Level 0 find relationships difficult and cannot understand how another is feeling. They may or may not be cruel to others.

He further observes:

- Empathy is the most valuable social resource in our world. It is puzzling that in school or parenting curricula empathy figures hardly at all, and in politics, business, the courts, or policing, it is rarely, if ever, on the agenda. The erosion of empathy is a critical global issue of our time
- It relates to the health of our communities, be they small (like families) or big (like nations)
- Without empathy we risk the breakdown of relationships, become capable of hurting others, and cause conflict.
- With empathy, we have a resource to resolve conflict, increase community cohesion, and dissolve another person's pain
- We must put empathy back on the agenda. We need to realize what a powerful resource we as a species have, at our very fingertips, if only we prioritize it.

Empathy is one of the most frequently cited affective components of moral development (Emde et al., 1987; Gibbs, 1991; Hoffman, 1987). Typically, empathy is understood to be natural and to have a biological base, as well as to be a source of moral reason and more mature moral affect.

However, whilst young children often have an intuitive grasp that actions – such as hitting and stealing – are prima facie wrong, the child's moral concepts do not reflect a fully developed moral system. For example, although young children view it as wrong to keep all of the classroom toys to oneself and not share any of them with the other children (Damon 1977, Nucci 1981, Smetana 1981), pre-schoolers think it is quite all right to keep all of the favored toys to oneself as long as one shares the remainder (Damon 1977, 1980). Thus, while the young child's morality is structured by concepts of justice, it reflects a rather egocentric moral perspective. The early development of empathy helps to prevent further development of this egocentric perspective.

Teaching empathy is not just about helping learners to recognize consequences, but also to feel these – even when they relate to others. It turns a self-centered perspective into an ‘other-centered’ and altruistic perspective. This leads to a more enlightened and compassionate outlook. It also leads to a deeper search for the moral compass within.

Young children demonstrate a natural feeling for animals, which can be used to develop their empathy and compassion at an early stage. This serves as a firm base for the future moral development of learners.
Humane education is the single biggest medium in our hands today to nurture and develop the gift of EMPATHY in our children.

"In what other subject do you learn to love, care and protect?" – Hewston, Grade 10, participant in a Humane Education pilot project, Cape Town

See this YouTube video: Dr. Neil deGrasse Tyson, American astrophysicist and Director of the Hayden Planetarium, discusses the human-animal connection from a scientific standpoint – and makes a call for the development of empathy through humane education for school children.

Many people consider empathy and compassion to have the same meaning, and the terms are frequently used interchangeably. However, they are actually quite different.

As we have seen above, empathy is the ability to understand and share the feelings of another. It is an emotional response to a person’s situation or well-being. The ability to empathize can sometimes be developed when you try to understand how another individual may be feeling – imagining yourself in the same situation and thus feeling the same emotions as the person you are feeling empathy for. However, although you feel the same emotions, you do not act on your feelings; you do nothing to alleviate the emotions of the person or animal you are feeling empathy for.

When you feel compassion, on the other hand, you have more of a desire to take action. You can understand a person or animal’s pain and suffering. You place yourself in the shoes of the individual, but you feel that you want to do something to relieve the pain and suffering. Compassion is an emotion which calls for action. You are motivated to take action to ensure a positive outcome.

So, the ultimate aim of humane education should be the development of compassion, with empathy as an important step in this process. This can be encouraged by the inclusion of practical programs to take action for animals, and the development of a volunteering ethos more generally.
The word ‘ethics’ is derived from the Greek word ethikos, meaning moral. The field of ethics is also called ‘moral philosophy’. Ethics has been defined as a set of moral principles or a code, incorporating aspects such as right, wrong, good, evil, duties and responsibilities. Ethics considers what is good for the individual and for society, and the nature of the duties that people owe to themselves, one another, animals and the environment.

In addition to ethics as a moral code (prescribing what humans ought or ought not to do in terms of right and wrong), ethics also refers to the study and development of one's ethical standards. As laws, social norms and feelings/motivations can deviate from what is ethical, it is necessary to constantly examine and study one's own moral beliefs and moral conduct in order to strive towards lives that meet sound moral standards. Humane education can help children to begin the process of ethical decision-making and building moral character.

Ethics and morals relate respectively to theory and practice. Ethics denotes the theory of right action and the greater good, while morals indicate their practice. ‘Moral’ has a dual meaning. The first indicates a person's comprehension of morality and his capacity to put it into practice; in this meaning, the opposite is ‘amoral’, indicating an inability to distinguish between right and wrong. The second denotes the active practice of those values; in this sense, the opposite is ‘immoral’, referring to actions that counter ethical principles.

Values are part of ethics in the sense that they are ideals or beliefs that a person or social group holds dear. They are what people think is right and wrong, good and bad, desirable and undesirable.

The world today is experiencing an unprecedented crisis of morals and values.
At the same time, there is increasing recognition of the serious impact the destructive and self-obsessive nature of mankind is having on the environment, social relationships and global harmony. Various projects and campaigns are developed in an attempt to address these problems, usually on a piecemeal basis – save this tree, save this species, promote peace in a particular region. But in reality, the way to tackle these problems is at source, by beginning the process that will teach children – the citizens of tomorrow – an ethical perspective and a personal sense of responsibility, coupled with a compassionate and caring attitude towards others, animals and the environment.

While every person develops his or her strongest notions about ethics early in life, ideas about right conduct grow and change with experience. Strategies can be employed to strengthen ethical decision-making at all ages, but this is particularly effective in children.

Practical wisdom cannot be acquired by simply learning general rules. Learners also need to acquire, through critical and creative thinking and practice, those deliberative, emotional and social skills that enable them to put their general understanding of well-being into practice in ways that are suitable to each occasion. Thus the way in which humane education is taught (see the section on ‘Methodology’) is an important factor in maximizing its contribution to moral development. Lessons should be specifically designed to include critical analysis, creativity and empathy-building in order to help learners to discover their own ‘moral compass’ and to develop their own values systems.

So what is a moral compass? We humans have an inbuilt guidance or ‘compass’ that speaks to us of right and wrong. Our duty to our learners is to assist them to reach inside and access or interpret this compass, so its guidance can be used when they are faced with hard decisions and difficult situations.
The wisdom they are developing will affect their character, values and morality. Values that come from the heart provide a foundation of strength and goodness that lasts a lifetime, and can be brought into play whenever new challenges arise. In our new fast-moving world, the development of wisdom is crucial.

Humane education has the potential to provide insight and wisdom – affecting both the morality and the character of learners. In an age where most of education seeks to train the brain, this is education that seeks to open the heart to the promptings, compassion and empathy within.

"Just then, in my great tiredness and discouragement, the phrase, Reverence for Life, struck me like a flash. As far as I knew, it was a phrase I had never heard nor ever read. I realized at once that it carried within itself the solution to the problem that had been torturing me. Now I knew that a system of values which concerns itself only with our relationship to other people is incomplete and therefore lacking in power for good. Only by means of reverence for life can we establish a spiritual and humane relationship with both people and all living creatures within our reach. Only in this fashion can we avoid harming others, and, within the limits of our capacity, go to their aid whenever they need us." – Reverence for Life, Albert Schweitzer

There have been many attempts to introduce ‘peace education’ in schools. To do this successfully, the root causes of conflict and violence need to be examined, and educational programs developed to address these. This can be complicated – particularly for younger children.
However, there are already existing ‘tried and tested’ educational programs available, including humane education – which creates a culture of empathy and caring by stimulating the moral development of individuals to form a compassionate, responsible and just society. There is more information below, and in this web resource more generally, on why humane education can form a vital part of peace education.

Key Sources of Conflict

It is scarcely surprising that peace education is given increasing importance in modern society, given the increase in conflict and violence we are currently witnessing. We are becoming more materialistic, more individualistic and selfish, and increasingly driven by the quest for worldly success and prosperity. Growth of economies and acquisitive personal aspirations lead to conflicts over scarce resources. Lack of equity can also contribute to conflict – but even this would not be a problem if there were a spirit of interconnectedness and giving in society.

But our priorities and values are changing – mostly to the detriment of wisdom, compassion and, ultimately, our own happiness. We increasingly communicate through trite, short-hand phrases, adopting and justifying ideologies instead of developing our own insights and wisdom. Soul-searching and personal development are no longer prioritized, as conformity is easier and more likely to gain peer acceptance. Importantly, we are also becoming more urbanized, and losing our deep connection to nature and animals – and often to our human support systems (our families and communities).

Working at the Root

Peace will not be achieved by patchwork reforms. The development of peace has to begin with understanding ourselves and the nature of the world we live in. As we have seen, we humans have a built-in ‘code’ that speaks to us of right and wrong.
Our duty to our learners is to assist them to reach inside and interpret this code, so its guidance can be used when they are faced with hard decisions and difficult situations. The wisdom they are developing will affect their character, values and morality. Values that come from the heart provide a foundation of strength and goodness that lasts a lifetime, and can be brought into play whenever new challenges arise. In our new fast-moving world, the development of wisdom is crucial.

Conventional education is the transfer of knowledge to pass examinations and – sometimes – to gain employment. This falls significantly short of developing the whole human. In many ‘developing’ countries, education is still by rote, passing on formulaic learning with no development of insights, intelligence and values. In such cases, Universal Primary Education is of little value in providing much-needed life skills.

World Animal Net strongly advocates the educational approach for the development of peaceful societies, working at the root of the problem for sustainable change. We consider humane education to be a vital pillar of this work. Humane education should be an essential part of a student’s education as it reduces violence and builds moral character. It can also play a significant role in the development of stable, caring and peaceful societies.

In addition to humane education, there are other educational initiatives that could also help towards the development of stable, caring and peaceful societies, including: morals and values education, emotional intelligence education, conflict avoidance/resolution education and environmental education. Ideally, these programs could be coordinated – and optimum methodologies used – to provide an integrated educational solution.
"I don't know what your destiny will be, but one thing I do know: the only ones among you who will be really happy are those who have sought and found how to serve." – Albert Schweitzer

Humane education can play a large role in improving happiness. This is both overall happiness – in terms of total well-being (people, animals and the environment) – and individual happiness. This is because it has the potential to develop learners socially, psychologically and ethically – as well as increasing compassion and empathy, and creating a feeling of interconnectedness with animals, nature and other people.

The 2013 World Happiness Report confirmed that ‘social, psychological, and ethical factors are crucially important in individual happiness’. This probably seems somewhat quaint and far-fetched in the modern era (post 1800), where happiness has come to be associated largely with material conditions, especially income and consumption. However, any ‘happiness’ associated with material conditions can only be transient. What is of greater – and lasting – importance is the deep happiness which comes from the inner peace developed from living a life which matters: compassionate and altruistic, and fulfilling our full potential. A life lived in harmony with nature and all life, instead of an ego-centered existence.

The World Happiness Report speaks of ‘Eudaimonia’, which is sometimes translated as happiness, and often as ‘flourishing’, to convey the sense of deep and persistent well-being. This kind of virtue attends not only to the individual’s thriving, but also to the community’s harmony. Eudaimonia is the telos, the end goal of human beings; the highest good.
In the words of Bertrand Russell: “The happiness that is genuinely satisfying is accompanied by the fullest exercise of our faculties and the fullest realisation of the world in which we live.”

In the great pre-modern traditions concerning happiness – whether Buddhism in the East, Aristotelianism in the West, or the great religious traditions – happiness is determined not by an individual’s material conditions (wealth, poverty, health, illness) but by the individual’s moral character. Aristotle spoke of virtue as the key to eudaimonia. This is why the World Happiness Report advocates a return to ‘virtue ethics’ as one part of the strategy to raise happiness in society.

The Global Economic Ethic (2009) established an overarching global ethical framework with the fundamental principle of ‘humanity’. With the principle of ‘humanity’, the Global Economic Ethic identified four basic values:

- Non-violence and respect for life, including respect for human life and respect for the natural environment;
- Justice and solidarity, including rule of law, fair competition, distributive justice, and solidarity;
- Honesty and tolerance, including truthfulness, honesty, reliability, toleration of diversity, and rejection of discrimination because of sex, race, nationality, or beliefs; and
- Mutual esteem and partnership, including fairness and sincerity.

As we can see, these are all values that can be derived from humane education – particularly non-violence and respect for all life.

Matthieu Ricard, the author of the book ‘Happiness – A Guide to Developing Life’s Most Important Skill’, states: "It is only by the constant cultivation of wisdom and compassion that we can really become the guardians and inheritors of happiness."

“Compassion, the very act of feeling concern for other people’s well-being, appears to be one of the positive emotions, like joy and enthusiasm.
This corroborates the research of psychologists showing that the most altruistic members of a population are also those who enjoy the highest sense of satisfaction in life.”
The findings may aid in the development of precision antimicrobial therapies

GRAND RAPIDS, Michigan (October 3, 2018) — A common gut bacterium uses a unique assembly apparatus to build hair-like structures that help it infect the bladder and kidneys. The findings, published today in Nature, depict how these structures — called pili — are produced, offering new avenues for the development of targeted antimicrobial medications for urinary tract infections.

“E. coli bacteria are the predominant cause of urinary tract infections, which affect more than 150 million people around the world annually,” said Huilin Li, Ph.D., a professor at Van Andel Research Institute (VARI) and a senior author of the study. “These infections are often treated with broad-spectrum antibiotics that, despite their importance as a medical tool, are increasingly problematic due to their potential to cause drug resistance and their tendency to disrupt the body’s microbial balance. More precise antimicrobial therapies that target specific bacteria but spare others are desperately needed.”

E. coli belong to a large, diverse family of bacteria that reside in the guts of humans and animals, where they are a normal part of the digestive tract’s complex ecosystem. Although most strains of E. coli are not harmful to the gut, they can cause painful infections elsewhere in the body, such as in the urinary tract. In severe cases, the bacteria travel to the kidneys, where they can cause a life-threatening condition called acute pyelonephritis, which is marked by swelling, high fever, pain and blood in the urine.

Once in the urinary tract, E. coli use hair-like structures called Type 1 pili to latch on to host cells, allowing them — and the infection they cause — to take root. Construction of the pili is a complicated, multi-step process that, until now, has not been clearly delineated.
Using a technique called cryo-EM, which makes it possible to image molecular architecture at the atomic level, Li and collaborator David G. Thanassi, Ph.D., of Stony Brook University, visualized a protein complex that allows pilus assembly to occur, enhancing the understanding of E. coli-related urinary tract infections and providing a possible drug target.

E. coli are Gram-negative bacteria with two layers of membranes — an outer membrane and an inner membrane. The outer membrane is dotted with proteins that relay messages to and from the cell about their environment. Generally, these proteins are considered to be unsophisticated, moving molecules into and out of the cell via passive diffusion.

Li and Thanassi’s new images show an outer membrane protein called FimD usher working with a chaperone protein named FimC to build the pili, which have adhesive tips that help E. coli stick to cells in the bladder and kidneys, leading to colonization and infection.

“FimD usher is exceptionally dynamic. It performs a series of complicated tasks to assemble a pilus — it uses one part of its structure to recruit distinct proteins in an orderly manner, catalyzes subunit polymerization and then hands over the recruited pilus proteins to another part of the structure for secretion to the cell surface. While the structures involved in some of these steps have been determined by others, this study is the first to show how the handover step is accomplished,” Li said. “Surprisingly, the mechanism involves a meeting of two ends of FimD usher, which in a way resembles an ouroboros, the mythical snake eating its own tail.”

In the U.S. alone, urinary tract infections result in more than 10 million visits to the doctor’s office and 2 to 3 million trips to the emergency room each year, with an economic burden of more than $3.5 billion annually, according to a 2015 report in Nature Reviews Microbiology.
Women and girls have a higher risk of developing urinary tract infections; in fact, more than half the women in the world have reported having a urinary tract infection at some point in their lives. Older populations and children also are at increased risk.

Authors include Minge Du, a graduate student at Van Andel Institute Graduate School and the study’s first author; Zuanning Yan, Ph.D., and Hongjun Yu, Ph.D., both members of Li’s laboratory at VARI; Gongpu Zhao, Ph.D., of VARI’s David Van Andel Advanced Cryo-Electron Microscopy Suite; and Nadine Henderson, Samema Sarowar, Ph.D., and Glenn T. Werneburg, M.D., Ph.D., of Stony Brook University.

Research reported in this publication was supported by the National Institutes of Health under grant number GM062987 (Thanassi) and Van Andel Research Institute (Li). The content is solely the responsibility of the authors and does not necessarily represent the official view of the National Institutes of Health.

ABOUT VAN ANDEL RESEARCH INSTITUTE

Van Andel Institute (VAI) is an independent nonprofit biomedical research and science education organization committed to improving the health and enhancing the lives of current and future generations. Established by Jay and Betty Van Andel in 1996 in Grand Rapids, Michigan, VAI has grown into a premier research and educational institution that supports the work of more than 400 scientists, educators and staff. Van Andel Research Institute (VARI), VAI’s research division, is dedicated to determining the epigenetic, genetic, molecular and cellular origins of cancer, Parkinson’s and other diseases and translating those findings into effective therapies. The Institute’s scientists work in onsite laboratories and participate in collaborative partnerships that span the globe. Learn more about Van Andel Research Institute or donate by visiting vari.vai.org. 100% To Research, Discovery & Hope®
BCS theory or Bardeen–Cooper–Schrieffer theory (named after John Bardeen, Leon Cooper, and John Robert Schrieffer) is the first microscopic theory of superconductivity since Heike Kamerlingh Onnes's 1911 discovery. The theory describes superconductivity as a microscopic effect caused by a condensation of Cooper pairs. The theory is also used in nuclear physics to describe the pairing interaction between nucleons in an atomic nucleus. It was proposed by Bardeen, Cooper, and Schrieffer in 1957; they received the Nobel Prize in Physics for this theory in 1972.

Rapid progress in the understanding of superconductivity gained momentum in the mid-1950s. It began with the 1948 paper, "On the Problem of the Molecular Theory of Superconductivity", where Fritz London proposed that the phenomenological London equations may be consequences of the coherence of a quantum state. In 1953, Brian Pippard, motivated by penetration experiments, proposed that this would modify the London equations via a new scale parameter called the coherence length. John Bardeen then argued in the 1955 paper, "Theory of the Meissner Effect in Superconductors", that such a modification naturally occurs in a theory with an energy gap. The key ingredient was Leon Cooper's calculation of the bound states of electrons subject to an attractive force in his 1956 paper, "Bound Electron Pairs in a Degenerate Fermi Gas". In 1957 Bardeen and Cooper assembled these ingredients and constructed such a theory, the BCS theory, with Robert Schrieffer. The theory was first published in April 1957 in the letter, "Microscopic theory of superconductivity". The demonstration that the phase transition is second order, that it reproduces the Meissner effect and the calculations of specific heats and penetration depths appeared in the December 1957 article, "Theory of superconductivity". They received the Nobel Prize in Physics in 1972 for this theory.
In 1986, high-temperature superconductivity was discovered in some materials at temperatures up to about 130 K, considerably above the previous limit of about 30 K. It is believed that BCS theory alone cannot explain this phenomenon and that other effects are in play. These effects are still not yet fully understood; it is possible that they even control superconductivity at low temperatures for some materials.

At sufficiently low temperatures, electrons near the Fermi surface become unstable against the formation of Cooper pairs. Cooper showed such binding will occur in the presence of an attractive potential, no matter how weak. In conventional superconductors, an attraction is generally attributed to an electron-lattice interaction. The BCS theory, however, requires only that the potential be attractive, regardless of its origin. In the BCS framework, superconductivity is a macroscopic effect which results from the condensation of Cooper pairs. These have some bosonic properties, and bosons, at sufficiently low temperature, can form a large Bose–Einstein condensate. Superconductivity was simultaneously explained by Nikolay Bogolyubov, by means of the Bogoliubov transformations.

In many superconductors, the attractive interaction between electrons (necessary for pairing) is brought about indirectly by the interaction between the electrons and the vibrating crystal lattice (the phonons). Roughly speaking the picture is the following: An electron moving through a conductor will attract nearby positive charges in the lattice. This deformation of the lattice causes another electron, with opposite spin, to move into the region of higher positive charge density. The two electrons then become correlated. Because there are a lot of such electron pairs in a superconductor, these pairs overlap very strongly and form a highly collective condensate.
In this "condensed" state, the breaking of one pair will change the energy of the entire condensate - not just a single electron, or a single pair. Thus, the energy required to break any single pair is related to the energy required to break all of the pairs (or more than just two electrons). Because the pairing increases this energy barrier, kicks from oscillating atoms in the conductor (which are small at sufficiently low temperatures) are not enough to affect the condensate as a whole, or any individual "member pair" within the condensate. Thus the electrons stay paired together and resist all kicks, and the electron flow as a whole (the current through the superconductor) will not experience resistance. Thus, the collective behavior of the condensate is a crucial ingredient necessary for superconductivity.

BCS theory starts from the assumption that there is some attraction between electrons, which can overcome the Coulomb repulsion. In most materials (in low temperature superconductors), this attraction is brought about indirectly by the coupling of electrons to the crystal lattice (as explained above). However, the results of BCS theory do not depend on the origin of the attractive interaction. For instance, Cooper pairs have been observed in ultracold gases of fermions where a homogeneous magnetic field has been tuned to their Feshbach resonance.

The original results of BCS (discussed below) described an s-wave superconducting state, which is the rule among low-temperature superconductors but is not realized in many unconventional superconductors such as the d-wave high-temperature superconductors. Extensions of BCS theory exist to describe these other cases, although they are insufficient to completely describe the observed features of high-temperature superconductivity. BCS is able to give an approximation for the quantum-mechanical many-body state of the system of (attractively interacting) electrons inside the metal.
This state is now known as the BCS state. In the normal state of a metal, electrons move independently, whereas in the BCS state, they are bound into Cooper pairs by the attractive interaction. The BCS formalism is based on the reduced potential for the electrons' attraction. Within this potential, a variational ansatz for the wave function is proposed. This ansatz was later shown to be exact in the dense limit of pairs. Note that the continuous crossover between the dilute and dense regimes of attracting pairs of fermions is still an open problem, which now attracts a lot of attention within the field of ultracold gases.

- Evidence of a band gap at the Fermi level (described as "a key piece in the puzzle") - the existence of a critical temperature and critical magnetic field implied a band gap, and suggested a phase transition, but single electrons are forbidden from condensing to the same energy level by the Pauli exclusion principle. The site comments that "a drastic change in conductivity demanded a drastic change in electron behavior". Conceivably, pairs of electrons might perhaps act like bosons instead, which are bound by different condensate rules and do not have the same limitation.
- Isotope effect on the critical temperature, suggesting lattice interactions - The Debye frequency of phonons in a lattice is proportional to the inverse of the square root of the mass of lattice ions. It was shown that the superconducting transition temperature of mercury indeed showed the same dependence, by substituting natural mercury 202Hg with a different isotope 198Hg.
- An exponential increase in heat capacity near the critical temperature also suggests an energy bandgap for the superconducting material. As superconducting vanadium is warmed toward its critical temperature, its heat capacity increases massively in a very few degrees; this suggests an energy gap being bridged by thermal energy.
- The lessening of the measured energy gap towards the critical temperature - This suggests a type of situation where some kind of binding energy exists but it is gradually weakened as the temperature increases toward the critical temperature. A binding energy suggests two or more particles or other entities that are bound together in the superconducting state. This helped to support the idea of bound particles - specifically electron pairs - and together with the above helped to paint a general picture of paired electrons and their lattice interactions.

BCS derived several important theoretical predictions that are independent of the details of the interaction, since the quantitative predictions mentioned below hold for any sufficiently weak attraction between the electrons and this last condition is fulfilled for many low temperature superconductors - the so-called weak-coupling case. These have been confirmed in numerous experiments:

- The electrons are bound into Cooper pairs, and these pairs are correlated due to the Pauli exclusion principle for the electrons, from which they are constructed. Therefore, in order to break a pair, one has to change energies of all other pairs. This means there is an energy gap for single-particle excitation, unlike in the normal metal (where the state of an electron can be changed by adding an arbitrarily small amount of energy). This energy gap is highest at low temperatures but vanishes at the transition temperature when superconductivity ceases to exist. The BCS theory gives an expression that shows how the gap grows with the strength of the attractive interaction and the (normal phase) single particle density of states at the Fermi level. Furthermore, it describes how the density of states is changed on entering the superconducting state, where there are no electronic states any more at the Fermi level. The energy gap is most directly observed in tunneling experiments and in reflection of microwaves from superconductors.
- BCS theory predicts the dependence of the value of the energy gap Δ at temperature T on the critical temperature Tc. The ratio between the value of the energy gap at zero temperature and the value of the superconducting transition temperature (expressed in energy units) takes the universal value Δ(T=0) = 1.764 kB·Tc, independent of material. Near the critical temperature the relation asymptotes to Δ(T→Tc) ≈ 3.06 kB·Tc·√(1 − T/Tc), which is of the form suggested the previous year by M. J. Buckingham based on the fact that the superconducting phase transition is second order, that the superconducting phase has a mass gap and on Blevins, Gordy and Fairbank's experimental results the previous year on the absorption of millimeter waves by superconducting tin.
- Due to the energy gap, the specific heat of the superconductor is suppressed strongly (exponentially) at low temperatures, there being no thermal excitations left. However, before reaching the transition temperature, the specific heat of the superconductor becomes even higher than that of the normal conductor (measured immediately above the transition) and the ratio of these two values is found to be universally given by 2.5.
- BCS theory correctly predicts the Meissner effect, i.e. the expulsion of a magnetic field from the superconductor and the variation of the penetration depth (the extent of the screening currents flowing below the metal's surface) with temperature.
- It also describes the variation of the critical magnetic field (above which the superconductor can no longer expel the field but becomes normal conducting) with temperature. BCS theory relates the value of the critical field at zero temperature to the value of the transition temperature and the density of states at the Fermi level.
- In its simplest form, BCS gives the superconducting transition temperature Tc in terms of the electron-phonon coupling potential V and the Debye cutoff energy ED: kB·Tc = 1.134 ED·e^(−1/(N(0)V)), where N(0) is the electronic density of states at the Fermi level.
For more details, see Cooper pairs.

- The BCS theory reproduces the isotope effect, which is the experimental observation that, for a given superconducting material, the critical temperature is inversely proportional to the square root of the mass of the isotope used in the material (Tc ∝ M^(−1/2)). The isotope effect was reported on 24 March 1950 by two groups, who discovered it independently working with different mercury isotopes, although a few days before publication they learned of each other's results at the ONR conference in Atlanta. The two groups are Emanuel Maxwell, who published his results in Isotope Effect in the Superconductivity of Mercury, and C. A. Reynolds, B. Serin, W. H. Wright, and L. B. Nesbitt, who published their results 10 pages later in Superconductivity of Isotopes of Mercury. The choice of isotope ordinarily has little effect on the electrical properties of a material, but does affect the frequency of lattice vibrations. This effect suggests that superconductivity is related to vibrations of the lattice. This is incorporated into BCS theory, where lattice vibrations yield the binding energy of electrons in a Cooper pair.
- Little–Parks effect, one of the first indications of the importance of the Cooper-pairing principle.
- Magnesium diboride, considered a BCS superconductor

- London, F. (September 1948). "On the Problem of the Molecular Theory of Superconductivity". Physical Review. 74 (5): 562–573. Bibcode:1948PhRv...74..562L. doi:10.1103/PhysRev.74.562.
- Bardeen, J. (March 1955). "Theory of the Meissner Effect in Superconductors". Physical Review. 97 (6): 1724–1725. Bibcode:1955PhRv...97.1724B. doi:10.1103/PhysRev.97.1724.
- Cooper, Leon (November 1956). "Bound Electron Pairs in a Degenerate Fermi Gas". Physical Review. 104 (4): 1189–1190. Bibcode:1956PhRv..104.1189C. doi:10.1103/PhysRev.104.1189. ISSN 0031-899X.
- Bardeen, J.; Cooper, L. N.; Schrieffer, J. R.
(April 1957). "Microscopic Theory of Superconductivity". Physical Review. 106 (1): 162–164. Bibcode:1957PhRv..106..162B. doi:10.1103/PhysRev.106.162.
- Bardeen, J.; Cooper, L. N.; Schrieffer, J. R. (December 1957). "Theory of Superconductivity". Physical Review. 108 (5): 1175–1204. Bibcode:1957PhRv..108.1175B. doi:10.1103/PhysRev.108.1175.
- Mann, A. (July 2011). "High Temperature Superconductivity at 25: Still In Suspense". Nature. 475 (7356): 280–2. Bibcode:2011Natur.475..280M. doi:10.1038/475280a. PMID 21776057.
- "BCS Theory of Superconductivity". hyperphysics.phy-astr.gsu.edu. Retrieved 16 April 2018.
- Maxwell, Emanuel (1950). "Isotope Effect in the Superconductivity of Mercury". Physical Review. 78 (4): 477. Bibcode:1950PhRv...78..477M. doi:10.1103/PhysRev.78.477.
- Ivar Giaever - Nobel Lecture. Nobelprize.org. Retrieved 16 Dec 2010. http://nobelprize.org/nobel_prizes/physics/laureates/1973/giaever-lecture.html
- Tinkham, Michael (1996). Introduction to Superconductivity. Dover Publications. p. 63. ISBN 978-0-486-43503-9.
- Buckingham, M. J. (February 1956). "Very High Frequency Absorption in Superconductors". Physical Review. 101 (4): 1431–1432. Bibcode:1956PhRv..101.1431B. doi:10.1103/PhysRev.101.1431.
- W. A. Little and R. D. Parks, "Observation of Quantum Periodicity in the Transition Temperature of a Superconducting Cylinder", Physical Review Letters 9, 9 (1962), doi:10.1103/PhysRevLett.9.9
- Gurovich, Doron; Tikhonov, Konstantin; Mahalu, Diana; Shahar, Dan (2014-11-20). "Little-Parks Oscillations in a Single Ring in the vicinity of the Superconductor-Insulator Transition". Physical Review B. 91 (17): 174505. arXiv:1411.5640. Bibcode:2015PhRvB..91q4505G. doi:10.1103/PhysRevB.91.174505.
- L. N. Cooper, "Bound Electron Pairs in a Degenerate Fermi Gas", Phys. Rev. 104, 1189 - 1190 (1956).
- J. Bardeen, L. N. Cooper, and J. R. Schrieffer, "Microscopic Theory of Superconductivity", Phys. Rev. 106, 162 - 164 (1957).
- J. Bardeen, L. N. Cooper, and J. R. Schrieffer, "Theory of Superconductivity", Phys. Rev. 108, 1175 (1957).
- John Robert Schrieffer, Theory of Superconductivity, (1964), ISBN 0-7382-0120-0
- Michael Tinkham, Introduction to Superconductivity, ISBN 0-486-43503-2
- Pierre-Gilles de Gennes, Superconductivity of Metals and Alloys, ISBN 0-7382-0101-4.
- Cooper, Leon N; Feldman, Dmitri, eds. (2010). BCS: 50 Years (book). World Scientific. ISBN 978-981-4304-64-1.
- Schmidt, Vadim Vasil'evich. The physics of superconductors: Introduction to fundamentals and applications. Springer Science & Business Media, 2013.
- ScienceDaily: Physicist Discovers Exotic Superconductivity (University of Arizona) August 17, 2006
- Hyperphysics page on BCS
- BCS History
- Dance analogy of BCS theory as explained by Bob Schrieffer (audio recording)
- Mean-Field Theory: Hartree-Fock and BCS in E. Pavarini, E. Koch, J. van den Brink, and G. Sawatzky: Quantum materials: Experiments and Theory, Jülich 2016, ISBN 978-3-95806-159-0
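The weak-coupling BCS relations discussed above (the universal gap-to-Tc ratio, the transition-temperature formula in terms of N(0)V and the Debye energy, and the isotope effect Tc ∝ M^(−1/2)) lend themselves to a short numerical sketch. The Debye temperature and dimensionless coupling used below are illustrative assumptions, loosely mercury-like, not fitted values for any real material:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def bcs_tc(theta_debye, coupling):
    """Weak-coupling BCS transition temperature in kelvin:
    k_B*Tc = 1.134 * E_D * exp(-1/(N(0)V)), with E_D = k_B * theta_debye,
    so the factors of k_B cancel."""
    return 1.134 * theta_debye * math.exp(-1.0 / coupling)

def gap_at_zero(tc):
    """Universal weak-coupling gap at T = 0: Delta(0) = 1.764 * k_B * Tc (in eV)."""
    return 1.764 * K_B * tc

def isotope_shift(tc_ref, mass_ref, mass_new):
    """Isotope effect: Tc scales as M^(-1/2), so Tc' = Tc * sqrt(M_ref / M_new)."""
    return tc_ref * math.sqrt(mass_ref / mass_new)

# Illustrative (assumed) parameters:
theta_d = 72.0   # Debye temperature in K (assumption)
nv = 0.35        # dimensionless coupling N(0)V (assumption)

tc = bcs_tc(theta_d, nv)
print(f"Tc = {tc:.2f} K")
print(f"Delta(0) = {gap_at_zero(tc) * 1e6:.1f} micro-eV")
# A heavier isotope (202 vs 198) lowers Tc slightly:
print(f"Tc(202Hg)/Tc(198Hg) = {isotope_shift(tc, 198.0, 202.0) / tc:.4f}")
```

Note that the ratio Delta(0)/(kB·Tc) comes out as 1.764 here regardless of the assumed parameters; only Tc itself depends on the material-specific coupling and Debye energy.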
Oil, the substance used to power or lubricate a majority of modern machines, is produced in nature by the heating and compression of ancient organisms. Petroleum, also called crude oil, is formed from the prehistoric remains of plankton and algae. Over the course of millions of years these dead organisms sink into the ground, are covered by layers of mud and sediment, and can form into crude oil when specific conditions of heat and pressure are met. Remains that do not form oil instead become pockets of natural gas or a waxy substance called kerogen. Once a crude oil pocket is located, it can be drilled and pumped out of the ground, then refined into gasoline, motor oil, diesel fuel, outboard motor oil, jet fuel, kerosene and hundreds of other useful products.
The Salado culture is a term used by historians and archaeologists to describe a pre-Columbian Southwestern culture that flourished from c. 1200-1450 CE in the Tonto Basin, in the southern parts of the present-day US states of Arizona and New Mexico. Although scholarly debate continues as to the exact origins of the Salado culture, as well as how it disappeared, there is some consensus among scholars and archaeologists that the Salado culture had distinctive art, architectural traditions, and burial practices that distinguish them from their Ancestral Pueblo (Anasazi), Mogollon, and Hohokam neighbors. Among the ancient cultures of the U.S. Southwest, the Salado culture is especially noted for its stunning iconographic designs and pottery production. The US archaeologist Harold Gladwin (1883-1983 CE) was the first to analyze these cultural traits and a shared artistic style in the 1920s CE, and he referred to this indigenous culture as the “Salado.” The name stems from the Salt River (Spanish: Río Salado), which flows through the valley of their cultural genesis.

Prehistory & Geography

The area in and around the Tonto Basin forms part of a large intermountain basin, which facilitated human settlement as it was rich in natural resources. The Salt River runs through eastern Arizona, slicing through the White Mountains, until it intersects with the Gila River in what is now central Arizona. The land therein is surprisingly fertile, and a diverse array of flora grows in a series of interlinked microenvironments: walnut trees, sycamores, mesquite, saguaro cacti, and jojoba are all found in this region. There are even pinyon and juniper bushes at higher elevations, in addition to other flowering plants that produce nuts and fruit. Wild game is thus also plentiful: deer, rabbits, and quail frequent the area. The lands that the Salado culture came to occupy witnessed human inhabitation long before the emergence of the Salado.
(Their spectacular emergence is commonly referred to as the “Salado Phenomenon” by academics.) The work of archaeologists in recent years has shown that humans have inhabited the Tonto Basin since c. 5000 BCE. Several Paleo-Indian mammoth kill sites are located in what was once considered the Salado heartland, and there are signs that indigenous peoples constructed small cliff dwellings as early as c. 3500 BCE. Permanent occupation, however, dates from c. 100-600 CE, when the peoples belonging to the Mogollon settled the eastern parts of this region and left pottery shards as evidence of their presence. The Hohokam moved into the Tonto Basin from around what is now the vicinity of the modern city of Phoenix, Arizona between c. 600-750 CE. The Hohokam constructed their ubiquitous pit-houses, built complex irrigation canals, and cultivated maize, squash, beans, and cotton. Despite the presence of pottery remains attesting to the fact that the Hohokam occupied the Tonto Basin for at least 300 years, archaeologists and historians are divided over whether the Hohokam people eventually left the region to return to the Phoenix Basin sometime around c. 1150 CE. Many archaeological remains and artifacts belonging to the Mogollon and Hohokam were destroyed due to the construction of the Theodore Roosevelt Lake reservoir and a masonry dam in 1911 CE. (Unfortunately, the construction of this same lake destroyed innumerable archaeological remnants from the Salado culture as well.) It is generally believed that a high level of cross-cultural exchange occurred in the region between the Mogollon and Hohokam people before the arrival of any newcomers.

Formation of the Salado Culture

The 12th and 13th centuries CE were pivotal in the formation and subsequent cultural development of the indigenous peoples of the ancient Southwest.
Environmental data shows that years of drought were followed by years of torrential rains and flooding across the region of what is present-day Arizona, New Mexico, Colorado, and Utah. Deprivation caused by environmental stresses combined with social chaos and political disorder likely prompted the migration of many peoples, but especially the Ancestral Puebloans, to seek fertile lands near the Little Colorado River (in Arizona), the Rio Grande (in New Mexico), and the Tonto Basin (in Arizona). Between c. 1200-1300 CE, Ancestral Puebloan peoples (and probably some Mogollon peoples as well) entered the Tonto Basin, encountering other Mogollon and Hohokam communities. Here, the three cultures intermixed socially and intermarried, adopting or adapting new cultural practices in turn based on utility and necessity. Archaeologists have debated the genesis of the Salado culture since the 1920s CE. Some theorized that the Salado culture emerged as a mixture of Mogollon and Hohokam populations, Hohokam and Ancestral Puebloan populations, or even as a subset of the Hohokam cultural tradition. In the last couple of decades, archaeologists have come to view the rise of the Salado culture in ways similar to the perspective first proposed by Harold Gladwin. It is probable that the Salado culture is truly the end result of an amalgamation of the Mogollon, Hohokam, and Ancestral Puebloan cultures and their respective populations - a veritable cultural melting pot in the ancient Southwest. The development of the Salado culture can best be understood, then, as the result of migration and localized cultural evolution in situ.

Salado Culture & History

Members of the Salado culture constructed small villages or hamlets as well as shallow pit structures. They also built small ceremonial platform mounds, irrigation canals, multi-storied pueblos made of adobe, and cliff dwellings by c. 1300 CE.
One does find T-shaped doorways like those found in Chaco Canyon, Mesa Verde, Wupatki, and Casas Grandes, which suggests strong Ancestral Puebloan and Mogollon influence in Salado architecture. Curiously though, one does not find kivas at sites associated with the Salado culture, and Salado structures were often surrounded by stone walls, a feature that figures prominently in Hohokam architectural traditions. (It is worth noting too that many Salado constructions sit on top of former Hohokam residences in the Tonto Basin.) Storage spaces were set aside for agricultural and artisan goods, and space was generally allocated by purpose. The largest Salado towns once contained as many as 1500 people, and other settlements included impressive compounds that contained 30-100 rooms. At Tonto National Monument in Arizona, one can still see the Upper and Lower Cliff Dwellings, which together encompass over 50 rooms in two-story complexes. Originally occupied from c. 1225-1400 CE and located near what is present-day Globe, Arizona, Besh-Ba-Gowah contained a 200-room multi-storied pueblo. Besh-Ba-Gowah was one of the largest Salado settlements that archaeologists have found. The Tonto Basin may have supported up to 10,000 people during its occupation by the Salado culture, although a more precise figure is difficult to estimate. Those who belonged to the Salado cultural group grew corn, cotton, squash, and amaranth, as well as beans. They also cultivated agave and used yucca to weave sandals, mats, and baskets. The Salado cultural group traded extensively with their neighbors in the Southwest, and their pottery -- commonly referred to as "Roosevelt Red Ware," "Salado Red Ware," or "Salado Polychrome" -- has been found as far away as Casas Grandes, in what is now Chihuahua, Mexico, where it was highly prized. Salado pottery demonstrates a striking combination of white, black, and red colors in geometrical shapes and lines with additional compositional characteristics.
Many archaeologists conclude that among the ceramic traditions of the ancient Southwest, the Salado tradition produced the most widely traded pottery. The Salado cultural group both buried and cremated their dead. (The Ancestral Puebloans and Mogollon buried their dead, while the Hohokam cremated theirs.) Archaeologists have unearthed the remains of many individuals who were buried in supine positions. Salado dead were laid to rest in plazas or patios; at Besh-Ba-Gowah, archaeologists have uncovered 150 skeletons from its central square. Some graves are filled with what appears to be ritual offerings, including vessels, precious stones or minerals, and even furniture. This has led some archaeologists to theorize that the placement of the dead reflected the indigenous social hierarchy of a Salado community. Little is known about religious customs or practices among the Salado cultural group, as it is difficult for researchers to differentiate religious objects from other artifacts.

Collapse of the Salado Culture

The disappearance of the Salado culture remains yet another mystery among many in the ancient Southwest. It is known that after c. 1350 CE, climatic changes adversely affected Salado settlements in Arizona and New Mexico. The area in and around the Tonto Basin became drier in the 14th century CE, but there were periods of devastating floods and famine too. It is plausible that some inhabitants began to relocate to larger Salado settlements or elsewhere beginning in the late 14th century CE, and this pattern of outward migration continued or even accelerated in the 15th century CE. Some archaeologists have speculated that many communities collapsed when irrigation fields were destroyed by floods and salinization, which hampered agricultural production at Salado farms. This is exactly what happened at Pillar Mound in Arizona, which was deserted after a torrential flood destroyed its irrigation canals.
There is some evidence for intercommunal strife at Besh-Ba-Gowah, and violence may have encouraged migration en masse as well. Native American oral traditions tell us that some members of the Salado cultural group migrated north and northeast to join the Hopi and Zuni communities, some joined the pueblos along the Rio Grande in what is modern New Mexico, and others moved south towards Casas Grandes.
By pursuing self-interest individuals end up achieving the greater good. Objectivism is not a difficult idea to enunciate. It is difficult for many to understand because it contradicts conventional wisdom. On an epistemological level, Objectivism assumes that everything has a specific nature, and that objects react to outside forces because of that nature.1 (This is where Objectivism gets its name.) It accepts the scientific method as a way to explore the universe. In this sense, it is much like pragmatism in that it views the world in a way that says, "what works is what is true". With such established ideas as a premise, Objectivism moves on to state that there are no other forces in the universe; magic and supernatural processes do not play a part. Because of this, the only way to gain knowledge is through reasoning and experimentation. To take this a step further, Objectivists reject feelings, hunches, faith, and unfounded beliefs as ways of understanding and interacting with the world.

Moving From the General to the Individual

Ayn Rand was the creator of Objectivist philosophy. It is one of the few popular philosophic movements developed by a woman. She saw the human individual as an end. She saw human freedom as the ideal state of the individual. She proposed that "Rationality is man's basic virtue, and his three fundamental values are: reason, purpose, and self-esteem."2 With this as a basis, she reasoned that since every human is an end in himself, he should not be exploited by others and, in turn, should not infringe on the basic rights of others, but should ultimately work for his own "rational self-interest". Rational self-interest might encompass supporting a family or works of charity, and on a more "selfish" level might also include making enough money to buy a sports car. What it would not include is political activism that would exploit one class for the benefit of others.
This would violate the ethical consideration that we should not harm others. Ayn Rand then illustrated that the ideal relationship among people is freely trading among themselves, "value for value". In this system it is in the best interest of every person to create value that can be exchanged. This becomes a powerful force within society. People are happiest when they produce quality goods or services; people are also happy when they receive these goods and services in exchange for their own. In this way, Objectivism encourages the best in people. Enormous value is created, from which all members benefit by at least as much as they are willing to put in. If this sounds like capitalism, that is because it is how capitalism works. In this way Objectivism can be seen as an ethical justification for the ideas put forth by Adam Smith in his game-changing book The Wealth of Nations. While Smith works on the macro level, Rand applies her philosophic system directly to individuals. She sees the great producers as heroic.
Objectivism and Politics
The problem with society is that certain groups work to gain value without giving any return for it. Objectively, this is an immoral act, if not always a criminal one. It is usually accomplished on a large scale by political means, where tax structures redistribute wealth. This has the effect of punishing the producers and creating a disincentive to continue value creation. Ayn Rand understood this, and her works illustrate society at war with itself: first in The Fountainhead, where the vitality and effect of society's producers even in the face of exploitive governments is amply illustrated, and then in Atlas Shrugged, which is about what would happen to the world if the producing class simply "went on strike". Objectivists take a generally libertarian view of politics. Government should minimize its role in every sphere: social, moral, and economic.
The role of government, then, is to provide the atmosphere that allows Objectivism to do its job: that is, provide law enforcement and national defense. Skeptics might argue that this system is one of greed, where economic power will gather in the hands of a few. History has not borne out this criticism; quite the opposite. In collectivist societies standards of living generally decline, while the less government involvement there is within a country, the better the standard of living for all. This is because there is a dynamic involved in Objectivism whereby innovation and continuous striving for better value keep the economic strata in constant flux, whereas collectivist systems reward complacency and bureaucratic stagnation. One of the interesting facets of Objectivism is that it does not require every citizen to be an adherent in order to work. In fact, people pursuing their own agendas is largely what Objectivism is all about. The only thing the philosophy requires is that individuals treat each other by a code of ethics: not that they leave each other alone, but that they not interfere in private affairs where there is no invitation, no trade of value.
Esthetics and Objectivism
Esthetics is the study of beauty, answering the question: what is art? The Objectivist answer is that art is the interpretation of an individual's view of reality. Ayn Rand called her own version of this "Romantic Realism". The idea was to present people, institutions, and things as they ought to be, to illustrate the ideal, but at the same time constrain them to the here and now. She did this in the context of her writing. However, this idealism could be applied to any art form.
The Kreuzer (German: [ˈkʀɔɪtsɐ]), in English usually kreutzer, was a silver coin and unit of currency existing in the southern German states prior to the unification of Germany, and in Austria. After 1760 it was made of copper. In 1559 a value of 60 Kreuzer to 1 Gulden had been adopted throughout the southern states of the Holy Roman Empire, but the northern German states declined to join, and used the Groschen instead of the Kreuzer. The Kreuzer in turn was worth about 4.2 Pfennig, or pennies. Thus one (golden) Gulden was worth 60 Kreuzer, or 252 Pfennig. Later currencies adopted a standard relationship of 240 Pfennig = 60 Kreuzer = 1 Gulden. Following the adoption of the Conventionsthaler in 1754, two distinct Kreuzer came into being. The first, sometimes referred to as the Conventionskreuzer, was worth 1/120 of a Conventionsthaler, valuing the Gulden at half a Conventionsthaler. This was used in Austria-Hungary. However, the states of southern Germany adopted a smaller Kreuzer Landmünze worth 1/144 of a Conventionsthaler, thus valuing the Gulden at 5/12 of a Conventionsthaler. In fact, the southern German states issued coins denominated in Kreuzer Landmünze up to 6 Kreuzer Landmünze (equal to 5 Conventionskreuzer) but in Conventionskreuzer for higher denominations.
South Germany 1837–1873
The South German Currency Union of 1837 used a system of 60 Kreuzer = 1 Gulden and 1¾ Gulden = 1 Thaler, with the Kreuzer equal to the old Kreuzer Landmünze. These Kreuzer continued in circulation until decimalization, following German unification.
Austria-Hungary decimalized in 1857, adopting a system of 100 Kreuzer = 1 Gulden, Austrian florin or Hungarian forint, and 1½ Gulden = 1 Vereinsthaler. It was known in Hungarian as krajczár (in modern Hungarian orthography: krajcár), in Czech as krejcar, in Slovak as grajciar, in Slovene as krajcar, and in Romanian as creiţar or crăiţar.
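The conversion relationships above can be sketched in a few lines of Python. This is a minimal illustration using only the rates stated in the text; the function and constant names are invented for demonstration.

```python
from fractions import Fraction

# Relationships stated in the text:
#   1 Gulden = 60 Kreuzer; 1 Kreuzer ≈ 4.2 Pfennig, so 1 Gulden = 252 Pfennig
#   (later standardized to 240 Pfennig = 60 Kreuzer = 1 Gulden).
KREUZER_PER_GULDEN = 60

def gulden_in_conventionsthaler(southern=False):
    """Value of one Gulden expressed in Conventionsthaler.

    Austrian Conventionskreuzer were worth 1/120 of a thaler each;
    the southern Kreuzer Landmünze was worth 1/144 of a thaler each.
    """
    kreuzer_per_thaler = 144 if southern else 120
    return Fraction(KREUZER_PER_GULDEN, kreuzer_per_thaler)

print(gulden_in_conventionsthaler())               # 1/2  (Austrian reckoning)
print(gulden_in_conventionsthaler(southern=True))  # 5/12 (southern reckoning)
```

Using exact fractions makes it easy to see why the same 60-Kreuzer Gulden came out at half a thaler in Austria but only 5/12 of a thaler in the south.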
United for traditional agriculture and freedom of the plate. HISTORY OF FOIE GRAS Literally translated from French, foie gras means "fat liver," but its origins date back far before French cooking made it a delicacy. The ancient Egyptians hunted and then domesticated geese, and discovered that waterfowl developed large, fatty livers after eating large amounts in preparation for migration. To replicate this naturally occurring large liver, the Egyptians, over 4000 years ago, developed the technique now known as gavage to produce a fattier bird. Colorful relief paintings found on the tombs of aristocratic Egyptians depict the hand-feeding of geese, which served as an important source of nutrition around the Nile region. The practice of gavage spread throughout the Mediterranean, and was adopted by the Greeks and then the Romans, the latter of whom made foie gras into a delicacy in its own right. As Rome expanded territorially, its gastronomic influence also expanded, turning foie gras into a food enjoyed by gourmands, who fattened their ducks explicitly for the purposes of its production. After the fall of Rome, and during the medieval period, it was the Jewish population who kept the tradition of foie gras alive. Goose meat served as an excellent source of nutrition and the animal also provided cooking fat that conformed to Jewish law. Wherever Jews moved across Europe during the Middle Ages, they brought with them the tradition of goose fattening. During the late sixteenth-century Renaissance, classical texts and cookbooks were revisited, and interest in foie gras was once again piqued. In France, Louis XIV, a gourmand, enjoyed haute cuisine, but his tastes began to trickle down to the aspirations of the French middle class. After the French Revolution, with centuries of food admiration so deeply etched into their culture, the bourgeois became the new consumers of cuisine, and they began to frequent a new establishment known as the restaurant. 
Chefs who had lost their aristocratic patrons cooked instead for Parisian middle class society. Foie gras became popularized in the United States in the late nineteenth century as Americans brought a love of the food back with them from travels in Europe. It was not until the latter part of the twentieth century, however, that American farmers began to produce foie gras here in the U.S. The majority of foie gras consumed by Americans now comes from small North American farms. Demand for foie gras has grown, prompting chefs across the country to add the item to their menus. And though it has enjoyed a long history, foie gras is still produced using the traditional methods passed down through generations – by hand, and with the care and attention that small, artisan farmers give to their ducks. A useful timeline of the history of foie gras, beginning in 2498 BC and ending in 2008 AD, can be seen at The Wall Street Journal website.
Image captions:
- Tomb relief showing Egyptians hand-feeding ducks
- Giza tomb relief, 6th Dynasty, lower left figure
- 18th Dynasty tomb painting, preparing ducks
- Tomb of Nabamun, flock of domesticated geese, 1350 BC
- Closeup of Egyptian tomb relief showing the feeding of ducks
The Small Intestine The small intestine is the largest component of the digestive tract and the major site of digestion and absorption. In addition to receiving chyme from the stomach, the initial segment of the small intestine, the duodenum, receives bile from the gall bladder and digestive enzymes from the pancreas. The pancreatic enzymes are produced in an inactive form and only become active in the lumen of the duodenum. The small intestine is divided into three parts, the duodenum (25 cm), the jejunum (2.5 m) and the ileum (3.5 m). The mucosa of the small intestine is highly modified. The luminal surface is completely covered by a number of finger- or leaf-like projections called villi, 0.5-1.5 mm in length. The core of a villus is an extension of the lamina propria, and its surface is covered by a simple columnar epithelium. Opening onto the luminal surface at the bases of the villi are simple tubular structures called intestinal glands or crypts of Lieberkuhn. The crypts extend downward toward the muscularis mucosae. The simple columnar epithelium lining them is continuous with that covering the villi. The predominant cell type of the epithelium is the enterocyte or absorptive cell. Each enterocyte has about 3000 microvilli at its luminal surface, which appear in the light microscope as the fuzzy striated border on the surface of the villi. [Electron microscopy: Microvilli are cylindrical protrusions, about 1 micrometer tall, of the cell membrane enclosing a core of filaments, mostly actin filaments. The actin filaments attach to the plasma membrane at the tip of the microvillus and end in the terminal web near the base of the microvillus. The terminal web consists of actin microfilaments and myosin, and is attached to the zonula adherens of the junctional complex binding epithelial cells to one another near their apical ends.] 
The villi and microvilli, together with folds in the submucosa called plicae circulares (below), increase the absorptive surface of the small intestine about 600 times. The epithelium of the small intestine consists of the following cell types:
- Enterocytes or absorptive cells. These are tall columnar cells with microvilli and a basal nucleus, specialized for the transport of substances. They are bound to one another and other cell types by junctional complexes (zonula occludens or tight junction, zonula adherens, and macula adherens). Amino acids and monosaccharides are absorbed by active transport; monoglycerides and fatty acids cross the microvilli membranes passively. Absorbed substances enter either the fenestrated capillaries in the lamina propria just below the epithelium, or the lymphatic lacteal (most lipids and lipoprotein particles). Enterocytes have a lifespan of about 5-6 days.
- Goblet cells. These mucus-secreting cells are the second most abundant epithelial cell. They are found interspersed among the other cell types. Their mucus is a very large glycoprotein that accumulates at the apical end of the cell, rendering it wide. The slender base of the cell holds the nucleus and organelles. Goblet cells usually appear pale or empty due to the loss of their contents upon preparation. However, their glycoprotein content can be revealed with special stains (such as the PAS stain in slide #9). The abundance of goblet cells increases from the duodenum to the terminal ileum. Their lifespan is also 5-6 days.
- Paneth cells. Paneth cells are found only in the bases of the crypts of Lieberkuhn. These cells have an oval basal nucleus and large, refractile acidophilic granules at their apical end. The granules contain the antibacterial enzyme lysozyme, other glycoproteins, an arginine-rich protein and zinc, an essential trace metal for a number of enzymes. Paneth cells also phagocytize some bacteria and protozoa. They may have a role in regulating intestinal flora.
They have a lifespan of about four weeks. Paneth cells are easy to identify with the light microscope.
- Enteroendocrine cells. These cells were described in the section on the stomach. In the intestine, they are most often found in the lower part of the crypts but can occur at all levels of the epithelium. Their most abundant products here are cholecystokinin or CCK, which stimulates pancreatic enzyme secretion and gall bladder contraction; secretin, which stimulates pancreatic and biliary bicarbonate secretion; and gastric inhibitory peptide or GIP, which inhibits gastric acid secretion. As in the stomach, these cells are not easily seen without special preparations.
- M or microfold cells. These cells are epithelial cells that overlie Peyer's patches and other large lymphatic aggregations. They are relatively flat and their surface is thrown into folds, rather than microvilli. They endocytose antigens and transport them to the underlying lymphoid cells, where immune responses to foreign antigens can be initiated. We do not identify M cells in the lab.
- Undifferentiated cells. These stem cells are found only at the base of the crypts and give rise to all the other cell types. A cell destined to be a goblet cell or enterocyte undergoes about 2 additional divisions after leaving the pool of stem cells, and migrates from the crypt to the villus. It will be shed at the tip of the villus.
Special stains for glycoproteins reveal the glycocalyx at the surface of the intestinal epithelium. The glycocalyx consists of glycoprotein enzymes inserted into the plasma membrane of enterocytes, with their functional groups extending outward. These enzymes are secreted by the enterocytes themselves. They include peptidases and disaccharidases, as well as enterokinase (or enteropeptidase), which converts (inactive) trypsinogen to (active) trypsin. Trypsin, in turn, activates trypsinogen itself as well as other pancreatic zymogens.
Lymphocytes are sometimes seen in the intestinal epithelium. They are thought to be sampling antigens in the epithelial intercellular spaces. It is believed that they process the antigens before returning to lymphatic nodules in the lamina propria and undergoing blastic transformation, leading to antibody secretion by the newly differentiated plasma cells. Within the lamina propria core of each villus is a lymphatic capillary called a lacteal, as well as numerous capillaries. The lacteal is accompanied by smooth muscle fibres arising from the muscularis mucosae. The smooth muscle in the villus allows it to contract intermittently, expelling the contents of the lacteal into the lymphatic network surrounding the muscularis mucosae. The lamina propria is very cellular, with numerous lymphocytes, plasma cells, macrophages and eosinophils. Lymphatic nodules are quite common and are an important component of GALT. Lymphatic nodules arising in the lamina propria may extend into the submucosa. The muscularis mucosae may be partially or totally disrupted by the nodules. ***The ileum is characterized by having large aggregates of lymph nodules, called Peyer's patches, in the submucosa.*** (Unfortunately we have no slides of the ileum, but you should know the term Peyer's patches. Look at pictures in an atlas.) The muscularis mucosae of the small intestine consists of an inner circular and outer longitudinal layer of smooth muscle. The submucosa consists of dense connective tissue. Adipose cells may be present. Both the duodenum and the jejunum are characterized by modifications of the submucosa. (So is the ileum, although its modification, Peyer's patches, arises from the lamina propria.) ***The duodenum is distinguished by the presence of Brunner's glands, which occupy most of the submucosa.*** In some areas these glands may penetrate the muscularis mucosae to enter the lamina propria.
Brunner's glands are branched tubuloalveolar glands that produce a clear, viscous, alkaline (pH 8.1-9.3) fluid, containing neutral and alkaline glycoproteins and bicarbonate ions. (Because of the glycoproteins, Brunner's glands also react with the PAS stain that stains the goblet cells and glycocalyx in slide 9.) The secretion of Brunner's glands protects the proximal small intestine by neutralizing the acidic chyme from the stomach. It brings the pH of the intestinal contents close to the optimum for the pancreatic digestive enzymes delivered to the duodenum. ***The jejunum is characterized by the presence of numerous, large folds in the submucosa, called plicae circulares (aka valves of Kerckring).*** The plicae consist of a core of submucosa and the overlying mucosa. They have a semilunar, circular or spiral form and extend about one-half to two-thirds around the circumference of the lumen. Although they may be present in the duodenum and ileum, they are not as large and are not a significant feature in those regions. The muscularis externa is as described under General Structure. The two layers (inner circular for mixing and outer longitudinal for peristalsis) are well organized. Features such as Auerbach's plexus tend to be easy to find. Either a serosa or an adventitia may be present. Figure 21 shows a low power view of the complete wall of the duodenum. The villi appear packed together and only a few (labelled) can be seen distinctly. The crypts at their bases are not identifiable. The mucosa can be distinguished from the submucosa because of the abundance of Brunner's glands in the latter; they have picked up the PAS stain and appear as red circles. The muscularis mucosae is not readily seen at this magnification (and is frequently disrupted by Brunner's glands); its approximate course is shown by asterisks.
There is a tear at the top of the figure where much of the submucosa has torn away from the muscularis externa, whose circular and longitudinal layers can be distinguished. The boundary is indicated by asterisks. The serosa or adventitia is not identifiable. All pictures of the duodenum shown here are taken from slide 9. Figure 22 shows a slightly higher power view of the duodenal mucosa and submucosa. The individual villi can be distinguished, and some crypts can be seen at their bases. Goblet cells, staining red, can be seen in the epithelium of both the villi and the crypts. The glycocalyx can be faintly seen as a pink line running along the surface of the epithelium. The lamina propria forming the core of the villi and lying between the crypts contains numerous lymphocytes and other cells. The muscularis mucosae can be distinguished as a pink band. An abundance of Brunner's glands can be seen in the submucosa. They have not broken through to the mucosa in the section shown, and the muscularis mucosae remains undisrupted. Brunner's glands open into the intestinal lumen through ducts (not seen here). Figure 23 shows a high power view of the top part of one villus and part of an adjacent villus. The nuclei of the simple columnar epithelium are lined up in a row at the base of the cells. Goblet cells are interspersed among the enterocytes. At this magnification, the glycocalyx can be readily identified. Figure 24 shows a crypt opening into the lumen between the bases of two villi. The epithelium lining the inside of the crypt can be seen to be continuous with that lining the outside of the villi. Enterocytes and goblet cells are present in crypts and villi. The lumen can be seen at the top of the crypt, but is obscured at its base. The base of the crypt ends just beyond the field of view at the bottom left. To the right of the crypt, part of another crypt, sectioned obliquely, is seen as the top half of a circular profile.
Many lymphoid cells are seen in the lamina propria. Some blood vessels can also be seen, as can strands of smooth muscle, which arise from the muscularis mucosae and are principally associated with the lacteal (not seen here). Figure 25 shows the regularly-arranged muscularis externa of the duodenum. The inner circular layer is larger than the outer longitudinal layer. An Auerbach's plexus lies between the two layers (barely identifiable at this magnification). The connective tissue of the adventitia (or serosa) lines the outer surface, and a bit of the submucosa (with Brunner's glands) is seen beyond the inner circular layer. The big gap between submucosa and muscularis externa is an artifact (tear). Figure 26 shows a high power view of an Auerbach's plexus in the duodenum. Two nerve cell bodies can be seen. Figure 27 is a low power view of a plica circularis in the duodenum. This large fold in the submucosa raises the overlying mucosa with its villi and crypts. Figure 28 shows a low power view of the jejunum (from slide 39). The mucosa, submucosa and muscularis externa are shown completely; a bit of the serosa or adventitia is shown at the lower right. One of the villi (to the right of the labelled ones) is folded over on itself. Note the absence of Brunner's glands in the submucosa. A few blood vessels can be seen at this magnification. No plicae circulares are included in the field of view. In the muscularis mucosae, the circular and longitudinal layers seem reversed; this is a phenomenon of sectioning. Figure 29 is at the same magnification as Figure 28, but shows an area of the jejunum in which a plica is present. Only the mucosa, submucosa and a very small part of the inner layer of the muscularis externa are seen. The plica continues to the right beyond the field of view. Compare this plica to the one shown in the duodenum in Figure 27. The plicae in the jejunum tend to be taller and thinner. They are also more frequent.
If you scan your own slide 39 you might find branching plicae or plicae arising from other plicae. Figure 30 shows a medium power view of the mucosa (and a bit of the submucosa) of the jejunum. The muscularis mucosae can be seen as a continuous band. Some of the crypts can be seen to be emptying into the bases of the villi; others, sectioned obliquely, appear as circular profiles. The epithelium appears as a darker red line outlining the villi and continuing into the crypts. The little pale "holes" in the epithelium are goblet cells. Figure 31 shows a high power view of part of a villus. The epithelium is not very edifying as it is sectioned obliquely and is several layers thick. A few goblet cells can nevertheless be distinguished. However, both the central lacteal (as a cross section) and the smooth muscle strand accompanying it can be seen clearly among the lymphocytes in the lamina propria. Figure 32 shows a high power view of some crypts in the jejunum. Paneth cells, with refractile granules, are seen at the bases of the crypts (the only place they're found). The Paneth cells on your slides show up much more clearly than on this computer image. Some goblet cells are also seen in the epithelium. The other cells are enterocytes (absorptive cells). The muscularis mucosae can be readily seen. A bit of the submucosa is present but is bleached out (as a result of trying to get Paneth cells to show up on the computer image). Figure 33 shows an Auerbach's plexus between the two layers of the muscularis externa in the jejunum. This section happens to have gone through the nucleus and nucleolus of several nerve cell bodies (somata). The nucleus is a paler structure lying in the nerve cell body; the nucleolus appears as a brighter dot. The borders of the nerve cell bodies themselves are a bit hard to see; two of them are outlined with different symbols.
This is a perspective view of a scene within Mars' Candor Chasma based on stereo imaging by the High Resolution Imaging Science Experiment (HiRISE) camera aboard NASA's Mars Reconnaissance Orbiter. It shows how the surface would appear to a person standing on top of one of the many hills in the region and facing southeast. The hills in the foreground are several tens of meters to about 100 meters (tens of yards to about 100 yards) wide and several tens of meters or yards tall. The light-toned layers of rock likely consist of material laid down by the wind or under water. The dark-toned material is a layer of windblown sand on the surface. The orientations of these layers were measured in three dimensions in order to understand the region's geologic history. The particular patterns in which these rocks are oriented to the surrounding Candor Chasma are most consistent with the idea that the layers formed as basin-filling sediment, analogous to the sedimentary rocks of the Paradox Basin in southeastern Utah. This implies that these sediments are younger than the formation of the chasm, providing important constraints on the maximum age of groundwater (about 3.7 billion years) within the region. The width of the scene at bottom of the image is approximately 500 meters (1,640 feet). There is no vertical exaggeration. The detailed three-dimensional information of the area comes from a pair of HiRISE observations. Those full observations are available online at http://hirise.lpl.arizona.edu/PSP_003474_1735 and http://hirise.lpl.arizona.edu/PSP_003540_1735.
Language and Prejudice: Tamara M. Valentine
Part of the “Longman Topics” reader series, The Language of Prejudice examines the effects language has on societal biases. This brief collection of readings focuses on the way language influences and prejudices society's views on race, gender, age, disabilities, and sexual preferences. Thought-provoking selections ask students to think about timely and relevant issues such as racial slurs and other offensive language, anti-feminist discourse, and verbal assaults on homosexuals. Divided into seven chapters, the book features six or more essays of varying lengths in each. Brief apparatus helps students write more thoughtfully in response to the selections and think more critically about the importance of choosing language wisely. “Longman Topics” are brief, attractive readers on a single complex but compelling topic. Featuring about 30 full-length selections, these volumes are generally half the size and half the cost of standard composition readers.
Geologists draw dozens of types of geologic maps. They want to show the earth as it is deep underground.
A Structure Map
The map below is a hand-drawn structure map. It is drawn on the top of an oil zone that is about 8000 feet deep. The map is about two miles across. It is similar to a contour or topographic map drawn on the surface of the earth. But this map shows contours of a structure that is more than a mile underground. The petroleum geologist picks the top of the oil zone in every well that is drilled. She knows the elevation of the ground at the drilling site. For example, if the elevation of the ground is 1000 feet above sea level, and the top of the oil zone is found at 7700 feet, she subtracts 7700 feet from 1000 feet to get a subsea elevation of -6700 feet. The well is spotted on the map and the subsea value is posted beside it. Then contour lines are drawn on the map to create her visualization of the underground structure. It takes a lot of practice to draw contour maps and make them look nice. In the above case, the structure is shaped like a broad dome, or hill, with the top of the hill at a subsea elevation of -6550 feet and the base at -6800 feet. So the top of the hill is about 250 feet higher than the base!
An Isopach Map
Here is a different type of map called an isopach map, which was constructed over a small gas field. The squares (or sections) are one mile in length on each side. In this map, the petroleum geologist has contoured the thickness of an individual sandstone. This sandstone is about 45 feet thick in the middle, and thins to 20 feet or less around the edges of the gas field.
Another Isopach Map
Below is another colorful isopach map. It is contoured on one of the Springer (Pennsylvanian) sands in Oklahoma. This particular sandstone was deposited in the ocean, as a type of sand bar. That gave it a lot of porosity, and now it is a pretty nice gas field.
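The subsea-elevation arithmetic described above is simple enough to sketch in code. This is a minimal illustration; the function name is invented, and the numbers are the text's own example.

```python
def subsea_elevation(ground_elevation_ft, depth_to_zone_ft):
    """Subsea elevation = ground elevation minus measured depth to the zone.
    Negative values are below sea level."""
    return ground_elevation_ft - depth_to_zone_ft

# The text's example: ground at 1000 ft above sea level,
# top of the oil zone found at 7700 ft of drilled depth.
print(subsea_elevation(1000, 7700))  # -6700

# Structural relief of the mapped dome: top at -6550 ft, base at -6800 ft.
print(-6550 - (-6800))  # 250 feet of relief
```

Posting the subsea value (rather than the drilled depth) is what makes wells with different surface elevations comparable on one structure map.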
The petroleum geologist (PG) made this map by looking at the porosity of the sandstone in the electric logs of all the wells. Then the PG determined how many feet of the sandstone was producible, or “pay,” in each well. That pay amount was posted in blue alongside the wells. Note that some wells have only a couple feet of pay, while wells toward the center have up to 28 feet of pay. This is why your neighbor may have an oil or gas well, and you do not! The squares shown are “sections,” each a mile square, so the length of this sand body or “field” is about six miles. However, it is only a mile or so wide.
A Production Map
One more map. This one shows an ancient stream or river channel. The sandstone is about 30 feet thick in the middle of the channel. Production charts have been placed on the map. These show graphically how much oil and gas were produced from each well over the years. The large red numbers show the amount of produced gas. For example, the Soar 1-18 (top middle) has made 3,267,524 thousand cubic feet of gas, or about 3.3 billion cubic feet. That’s a good well!
Gas Amount Abbreviations Used in the Oil Industry:
- 1000 cubic feet of gas = 1 MCF
- 1 million cubic feet of gas = 1 MMCF or 1000 MCF
- 1 billion cubic feet of gas = 1 BCF or 1,000,000 MCF
A typical house might use only about 4 MCF of gas per month for heating. Assuming 6 months of heating per year, that’s 24 MCF used per year. That means the Soar has produced enough gas in its lifetime to heat about 136,000 houses for one year!
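The unit conversions and the heating estimate above can be checked with a short sketch. The helper names here are invented for illustration; the figures come from the text.

```python
MCF_PER_BCF = 1_000_000  # 1 BCF = 1,000,000 MCF; 1 MMCF = 1,000 MCF

def houses_heated_for_one_year(total_mcf, mcf_per_month=4, heating_months=6):
    """How many houses a well's cumulative gas could heat for one year,
    at the text's assumed usage of 4 MCF/month over a 6-month season."""
    return total_mcf // (mcf_per_month * heating_months)

soar_1_18 = 3_267_524  # MCF produced by the Soar 1-18
print(soar_1_18 / MCF_PER_BCF)            # 3.267524 (about 3.3 BCF)
print(houses_heated_for_one_year(soar_1_18))  # 136146 (about 136,000 houses)
```

Each house uses 4 MCF x 6 months = 24 MCF per year, so the well's cumulative 3,267,524 MCF divides out to roughly 136,000 house-years of heating.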
“There is nothing more contemptible than a bald man who pretends to have hair.” -Martial Back in the late 1800s, astronomy was a fast-developing field, with rapid advances occurring in telescope size and technology, increasingly accurate predictions and public suites of observations, and vast new catalogues of deep-sky objects. But perhaps most famously, in 1781, a discovery of a new planet in our Solar System — Uranus — set the scientific world aflame. What was amazing about Uranus — the first planet discovered since antiquity — is that it provided an immediate testing ground for the most powerful laws governing the Universe at the time: Newton’s law of universal gravitation. If you could figure out how far an object was from the Sun and how quickly it was moving, you should immediately be able to derive its entire orbital path. That’s the power of a scientific theory: the ability to predict the future behavior of a well-understood physical system. In 1821 — forty years after Uranus was first discovered — astronomer Alexis Bouvard published his first table of results that he obtained through keen observations of the new, distant world. Newton’s laws predicted exactly how quickly any massive body should move in orbit around the Sun, and deviations from Kepler’s laws could be calculated due to the masses, distances and positions of all the other known bodies in the Solar System. Yet, with all these taken into account, there was a problem. Uranus appeared to move too quickly as compared to its predicted speed at first, then slowed down to the expected speed, but then slowed down even further, to below its predicted speed. And this appeared to fly in the face of Newton’s theories. Unless, as Bouvard hypothesized, there was an eighth planet out there perturbing the orbit of Uranus! 
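The predictive power described above can be made concrete with Kepler's third law, which Newton's gravitation explains: in units of years and astronomical units, T² = a³ for bodies orbiting the Sun. A minimal sketch follows; the semi-major-axis values are approximate figures supplied here for illustration, not taken from the text.

```python
def orbital_period_years(semi_major_axis_au):
    """Kepler's third law in solar units: T [years] = a [AU] ** 1.5."""
    return semi_major_axis_au ** 1.5

# Approximate semi-major axes (assumed values for illustration):
print(round(orbital_period_years(19.19), 1))  # Uranus: ~84.1 years
print(round(orbital_period_years(30.07), 1))  # Neptune: ~164.9 years
```

It was precisely this kind of calculation, run in reverse against Uranus's observed misbehavior, that let Bouvard and his successors argue for an unseen perturbing planet.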
Today’s forgotten hero of astronomy, John Couch Adams (who would have turned 195 today), became incredibly taken with this idea, and devoted the early part of his scientific career to studying this problem. Like many men — and I know this pain myself — he began to bald at a relatively young age. During the 1840s, he attempted to predict exactly where the new, massive, outer world would need to be, in order to give contemporary astronomers a target. It’s very difficult to look for a new, faint point of light in the sky if you don’t know exactly where to look, so getting a precise answer was extremely important. By the mid-1840s, he was in communication with astronomers James Challis and George Airy about his predictions, but the planet failed to turn up. As it was, Adams had fired off a total of six letters in rapid succession in 1845/6 that contradicted one another. There were some mistakes in his work that he was refining, and the six predictions he gave disagreed with one another over a range of 12°! Perhaps heartbreakingly, at least one of his predictions was very much correct. (In fact, Challis himself actually observed Neptune on at least one, and possibly two, occasions, mistaking it for a star!) And then every scientist’s nightmare happened to Adams: he was scooped! On August 31, 1846, Frenchman Urbain Le Verrier announced that he had computed the position of where this new planet must be, and sent off a letter to Germany and Johann Galle at the Berlin Observatory. The letter arrived in Berlin on September 23, and it was clear for observing that very evening. Galle and his assistant, d’Arrest, pointed their telescope towards the exact location Le Verrier predicted, and less than 1° away, there it was. Neptune. A new planet. And while many in Britain tried to ascribe more credit to Adams than he was due, Adams himself was incredibly humble about his own role.
Submitting to the Royal Astronomical Society in November of 1846: I mention these dates merely to show that my results were arrived at independently, and previously to the publication of those of M. Le Verrier, and not with the intention of interfering with his just claims to the honours of the discovery; for there is no doubt that his researches were first published to the world, and led to the actual discovery of the planet by Dr. Galle, so that the facts stated above cannot detract, in the slightest degree, from the credit due to M. Le Verrier. As he got older, he continued to work on other important problems, making significant progress on determining the cause of deviations in the Moon’s orbital motion and parallax. Unfortunately, he also adopted the most vile hairdo a balding man can choose: the combover. But it was a spectacular meteor shower in 1866 — the Leonids, on the 33rd anniversary of another spectacular Leonid show — that changed everything. Adams calculated that there must be a cluster of dusty debris moving in an elongated elliptical orbit around the Sun with a period of 33.25 years, taking it out past Jupiter, Saturn, and even Uranus’ orbit. The orbit matched, in fact, the one taken by the newly discovered Comet Tempel-Tuttle, and led to the identification of meteor showers as being caused by cometary debris trails! This was Adams’s greatest scientific achievement, and shortly thereafter, he reached his greatest achievement in the world of style as well: eschewing the combover and growing a mighty beard instead! Even as an old man shortly before his death, his bearded style was something to be envied by all. So happy birthday to my bearded astronomy hero, John Couch Adams, and may you appreciate his science, his style and his humility on this Throwback Thursday! Enjoyed this? Leave a comment at the Starts With A Bang forum on Scienceblogs!
Bible stories and truths are profound enough to perplex adults, causing theologians to spend a lifetime exploring their depths of meaning and intricacies of arguments surrounding them. However, stories are also simple enough to be communicated at a basic level to young children and even toddlers. Let the developmental level of two- and three-year-olds guide the structure of your lesson; focus on keeping your stories short, action-oriented, expressive and as concrete as possible. Put on a multicoloured robe and pretend to be Joseph. Talk about how much you love your coat. Have an assistant pretend to be your brother and act out being jealous of you because of your coat. Act out the story of Joseph being sold into slavery by his brothers, but how, after many years, God made Joseph a powerful ruler in Egypt and allowed him to save his brothers during a time of famine. Print a colouring sheet of Joseph and his coat for the children to colour. Encourage them to use many bright colours. David and Goliath Tell the story of David and Goliath to the children in five to eight minutes. Use pictures and stand on a chair when mentioning Goliath to show how big he was. Talk about how David was just a boy and was small like them but killed Goliath with a slingshot because God was with David. Help the children make David's five stones and bag out of a brown paper sack and play dough to help them remember the story. Understand that two- and three-year-olds think concretely and benefit from pictures, acting and models to learn. Explain the flood story in very simple language, describing that the whole earth was full of people doing mean things to each other, and so God decided to destroy the earth with a flood because of their sins. Draw a picture of a man or have an assistant dress up as a man in a robe and a beard and call him Noah. Explain that Noah did good and believed God so God saved him and his family. Talk about how God told Noah to build an ark.
Construct a model of the ark, or buy one online, or bring in pictures of large oil tankers. Show pictures of many different kinds of animals and discuss how God told Noah to fill the ark with two of every kind of bird and animal. Let the children identify the animals and ask them what sounds the animals make; this keeps them interested in the story. Explain that Noah and all the animals were safe in the boat from all the rain and flooding that God brought on the earth. Draw a picture of a house underwater and the ark floating on top. Jesus Makes a Blind Man See There are many suitable lessons from the life and ministry of Jesus that you can teach to two- and three-year-olds. The incident of Jesus giving sight to a man born blind is just one example. Read John 9 to the children in an animated and lively fashion. Read an easy-to-understand version like the English Standard or New Living Version. Ask children to cover their eyes with their hands and try to walk around the room. Explain how this is what it was like for the blind man. Talk about how he never saw trees or birds or butterflies or his parents. Explain how Jesus spit on the ground to make mud, rubbed it on the man's eyes and restored his sight. Explain how happy the man was but that not everyone was happy Jesus healed him. Describe the reaction of the religious leaders to the healing. Explain that they were jealous of Jesus and resented both him and the man He healed. End the story by talking about how the blind man followed Jesus and loved him, and we should too.
Any of a number of substances in blood plasma which are involved in the clotting process, such as factor VIII. Example sentences: - People with haemophilia are missing a part of the blood called a clotting factor (Factor VIII), and are sometimes treated by receiving transfusions. - The body's clotting system depends on platelets as well as many clotting factors and other blood components. - In order to have normal hemostasis, the organism needs to maintain a certain plasma level of clotting factors and a certain number of circulating platelets.
When you try to compress a spring, it pushes back. Materials scientists call that property positive stiffness. Although it is possible to create materials with negative stiffness, they are unstable. One push and they either fly apart or collapse into something with positive stiffness. Now a researcher reports in the 26 March PRL that in theory one can dramatically increase a material’s overall positive stiffness by peppering it with small bubbles of negative stiffness. In other experimental work, he has proven the concept. The surprising advance may one day be used to make more rigid airplane wings, quieter cars, and perhaps even temporary substitute tendons. Mechanical designs are limited by stiffness. No one wants to fly in a plane equipped with wings that flap around like a wet noodle in turbulent air. One way that engineers increase the stiffness of a material is to mix it with a second material to form a composite. But once the geometry and composition are fixed, a thirty-year-old series of mathematical theorems sets a definitive upper limit on the stiffness. Increasing the upper limit requires heavier or more expensive materials; but there is a catch. “All these theorems make the tacit assumption that the stiffnesses are positive,” says University of Wisconsin materials scientist Roderic Lakes. Adding negative stiffness turns all those assumptions on their heads. Instead of exerting the usual restoring force that tries to resist deformation, materials with a negative stiffness draw on the energy stored in their unstable equilibrium state to help the deformation proceed faster. Because they are unstable, negative stiffness materials usually break down rapidly. But small bubbles of negative stiffness can be preserved in a background of positive stiffness material. In some cases, the composite would have a lower overall stiffness, but in his PRL paper, Lakes shows mathematically that the opposite can also happen. 
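The "inverse addition" of stiffnesses can be seen in the textbook formula for two springs in series, where compliances (1/k) add. This is only a one-dimensional toy model, not Lakes's composite theory, but it shows how a negative-stiffness phase tuned close to the positive phase can send the overall stiffness soaring:

```python
def series_stiffness(k1, k2):
    """Effective stiffness of two springs in series: compliances (1/k) add."""
    return 1.0 / (1.0 / k1 + 1.0 / k2)

k_pos = 10.0  # positive-stiffness matrix phase (arbitrary units)
for k_neg in (-30.0, -15.0, -11.0, -10.5):
    # As the negative phase approaches -k_pos, the composite stiffness diverges:
    # 15.0, 30.0, 110.0, 210.0
    print(k_neg, round(series_stiffness(k_pos, k_neg), 1))
```

The divergence as the two stiffnesses nearly cancel in compliance is the one-dimensional analogue of the dramatic stiffening Lakes reports for composites with buckled-tube inclusions.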
“The two phases cooperate with each other in some geometries, and you end up with zero effect,” says Lakes. “But it is not a linear addition, and sometimes the stiffnesses add inversely, giving more positive stiffness.” Although his paper presents a mathematical argument for the effect, it isn’t just a theoretical fantasy. In a separate article Lakes shows that when silicone rubber tubes are buckled like partially squished soda cans, they have negative stiffness. When the buckled tubes are mixed with non-buckled tubes, the composite stiffness rises by orders of magnitude, as his theory predicts. The high stiffness composite also tends to damp vibrations quickly, says Lakes, which makes the materials potentially ideal for airplane wings and cars. “This is quite innovative and exciting,” says Lawrence Katz, a biomedical engineer at Case Western Reserve University in Cleveland, OH. Katz speculates that negative stiffness materials could also be used in medical applications. “If a negative stiffness material could be placed in a tendon under tension, it would expand into a scaffold that leaves room for natural tissue to grow in,” says Katz. Mark Sincell is a freelance science writer based in Houston, TX. - R. S. Lakes, Philos. Mag. Lett. 81, 95-100 (2001).
Classical Historian American Government and Economics with a Focus on Founding Documents and the Free Market How Does This Curriculum Work and How is it Unique? The Classical Historian teaches students to think independently, make decisions, read, write, and speak effectively, AND learn history. The Classical Historian uses a Five Step Method: 1. Students Learn the Tools of the Historian. 2. Students are Challenged with Open-Ended Questions. 3. Students Research in a Variety of Secondary and Primary Sources. 4. Students Engage in a Socratic Discussion. 5. Students Write an Analytical Essay. From Adam and Missy Andrews, Center for Literature: "Adam and Missy Andrews have long been searching for effective history curriculum materials with little success. They are happy to report that the search is over! In 2012, they discovered The Classical Historian, a Socratic method for teaching history that shares many of the same principles advocated in Teaching the Classics. Like Teaching the Classics, the Classical Historian is a method for analysis that students can apply to any historical period. With a goal of teaching students to think historically, the Classical Historian shows teachers how to discuss a series of open-ended discussion questions about specific historical events. In answering these questions, students learn a step-by-step process for evaluating evidence, arranging historical data, developing arguments and writing effective essays. The Classical Historian supplies reading guides, primary sources, textbooks and research and discussion questions for five specific periods: ancient, medieval, early American, modern world, and modern American history. Parents will find age-appropriate materials for students in grades 3 and up, with complete year-long courses for grades 6-12. If you are searching for a Teaching the Classics style approach to history, look no further."
An “object of a preposition” pronoun is by definition placed after a preposition, but the other two types of object pronouns and the reflexive pronouns all go in the same place. Because it is common to use more than one of these pronouns at a time, you must know what order to follow: - A Reflexive pronoun goes in front of an Indirect object pronoun, and a Direct object pronoun comes last. Use the memory device RID (Reflexive, Indirect, Direct) to remember the order of object pronouns in a sentence. You may have a reflexive pronoun and a direct object or an indirect object and a direct object, but rarely will all three be used together. Note: When two object pronouns begin with the letter l, the first object pronoun is changed to se. This is not a reflexive pronoun although it looks like one. Every sentence must have at least one verb. If there is only one conjugated verb in the sentence, the RID pronouns must be placed in front of the conjugated verb (unless it is a command). In many cases there will be a conjugated verb used with an infinitive or present participle. The good news is that you can consistently place the RID pronouns in front of the conjugated verb no matter how many other verb forms are in the sentence. - La señora Gómez enseña las lecciones. (Mrs. Gomez teaches the lessons.) - La señora Gómez las enseña. (Mrs. Gomez teaches them.) - La señora Gómez se las enseña a los estudiantes. (Mrs. Gomez teaches them to the students.) - Victor no va a traer los regalos a la fiesta. (Victor isn't going to bring the gifts to the party.) - Victor no los va a traer a la fiesta. (Victor isn't going to bring them to the party.) - Victor no se los va a traer a la fiesta a los recién casados. (Victor isn't going to bring them for the newlyweds to the party.) - Orlando lleva a los novios. (Orlando takes the fiancés.) - Orlando los lleva. (Orlando takes them.)
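Because the RID rule and the le/les → se spelling change are mechanical, they can be sketched as a tiny function. This is a hypothetical illustration for checking pronoun order, not a full parser from any textbook:

```python
def order_pronouns(reflexive=None, indirect=None, direct=None):
    """Arrange Spanish object pronouns in Reflexive-Indirect-Direct (RID) order.

    Also applies the spelling rule: when both the indirect and direct
    pronouns begin with 'l' (le/les + lo/la/los/las), the first becomes 'se'.
    """
    if indirect in ("le", "les") and direct in ("lo", "la", "los", "las"):
        indirect = "se"
    return " ".join(p for p in (reflexive, indirect, direct) if p)

print(order_pronouns(indirect="le", direct="las"))   # 'se las', as in 'se las enseña'
print(order_pronouns(reflexive="se", direct="lo"))   # 'se lo'
```

Note that the 'se' in the first output is the changed indirect pronoun, while the 'se' in the second is a true reflexive, exactly the distinction the note above warns about.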
When the conjugated verb is followed by an infinitive, the RID pronouns may still be placed in front of the conjugated verb or they may be attached to the infinitive. They may not, however, be split up. If there is more than one RID pronoun, the pronouns stay together wherever you choose to place them. - Daniela la quiere llamar. (Daniela wants to call her.) - Daniela quiere llamarla. (Daniela wants to call her.) - Yoruba lo necesita mejorar. (Yoruba needs to improve it.) - Yoruba necesita mejorarlo. (Yoruba needs to improve it.) RID pronouns may be attached to the present participle or placed in front of the conjugated verb. When a sentence is in the present progressive tense, there will be a conjugated form of estar and the present participle form of the verb. RID pronouns may be placed in front of the conjugated form of estar or attached to the end of the verb in the present participle form (ending in ‐iendo or ‐ando). This will mess up the natural stress, so you must add an accent mark to the vowel preceding ‐ndo when you attach any RID pronouns. If you choose to place the object pronouns in front of the conjugated form of estar, you can avoid using a written accent mark. - Juan la está llamando. (Juan is calling her.) - Juan está llamándola. (Juan is calling her.) Other verbs that may be followed by the present participle include ir, quedar, correr, andar, seguir, continuar, and other verbs of motion. Because RID pronouns may be attached to the present participle form of the verb, many mistakenly do the same with a past participle. RID pronouns are never attached to a past participle; they go in front of the conjugated form of haber, which will precede the past participle to create the perfect tenses: - Guada lo ha practicado toda la noche. - Guada has practiced it all night. RID pronouns must be attached to affirmative commands. 
The addition of even one RID pronoun to the end of a command messes up the natural stress, so you must add an accent mark to what would be the next‐to‐last syllable before adding any pronoun to the end: - Levántense. (Get up [you guys].) Although it is necessary to attach RID pronouns to the end of an affirmative command, the opposite is true if the command is negative. You must place any RID pronouns in front of a negative command: - No me lo digas. (Don't tell me that.) - Nunca me lo digas. (Never tell me that.)
Teaching Plan Unit 3 travel journal The topic of this unit centers on travel. Travel is a beneficial and attractive activity for students, who can gain a lot from it, such as general knowledge of geography and politics, communication skills, and travel experience. However, before traveling we should do a lot of preparation. In this unit, language study includes some new words and useful expressions and the present continuous tense. Analysis of Learners This book fits Senior One students. The advantages of Ss: 1. They are quite active about what they are interested in. 2. They are willing to share and demonstrate themselves in public. The weaknesses of Ss: 1. Owing to their limited vocabulary, they often have difficulty expressing thoughts and opinions appropriately. 2. Sometimes they devote themselves so fully to a hot topic they are enthusiastic about that they focus on the topic itself rather than on the knowledge to be learned. Teaching Principles We should remember the main ideas of “Task-based Language Teaching” and “Situational Teaching.” In each part, “tasks” should be the thread of the teaching. To complete the tasks, we teachers play the role of coaches instead of players; as a result, students are always in the dominant role. Just give them an order, a tip, a hint or a suggestion, and let them finish the task by themselves. Provide them with more chances to speak and think. Teaching Aims and Objectives A. Knowledge and Grammar 1. The students learn how to use the present continuous tense to indicate the future. 2. Grasp the key vocabulary, phrases and sentences: journal, fare, transport, finally, cycle, persuade, insist, proper, properly, rapid, I prefer to... B. Ability 1. The students learn how to make a travel plan. 2. The students learn what a travel journal is and try to write their own. C. Feelings and Attitudes 1.
Sometimes we need to make not only material preparation but also spiritual preparation: courage, ambition, willpower and common sense. 2. Encourage students to work as a team. 3. Encourage students to record what they saw, what they heard and what they thought. Motivate them to be active, optimistic and insightful about life through travel. Important Points of Teaching 1. Learn how to correctly use the present continuous tense to indicate the future. 2. Learn how to make a travel plan. Difficult Points of Teaching Make students understand that “be + v.-ing” can indicate something happening in the future. Teaching Aids A computer, textbooks, notebooks, exercise paper, blackboard. Teaching Methods The methods are Task-based Language Teaching, Situational Teaching and Community Language Learning (Counseling-Learning). Show some pictures about my travel and ask Ss to share their experiences. -------Situational Teaching Pair work: Make a conversation... -------Situational Teaching Summarize the points. -------Task-based Language Teaching Learning Design Unit 3 travel journal 1. Complete the following information card: Dream Travellers | Time of their trip | Transport Para 1. ___________________ Para 2. ___________________ Para 3. ___________________ Practicing Plan Unit 3 travel journal I. Translation. 1. 他说服他的女儿改变了计划。(He persuaded his daughter to change her plan.) ___________________ 2.
他梦想成为一位科学家。(He dreams of becoming a scientist.) ___________________ 3. 他们坚持要求 Tom 参加这次会议。(They insisted that Tom attend the meeting.) ___________________ 4. 一旦我们进入教室,我们应当保持安静。(Once we enter the classroom, we should keep quiet.) ___________________ II. Fill in the blanks. Ever since our childhood, Tom and I ___________ (dream) of ______ (take) a bike trip to Qinghai Lake. After __________ (graduate) from college, we got a chance. Before we set out, we made a lot of __________ (prepare). Tom persuaded me ______ (buy) a mountain bike. I trust Tom, because he was very __________ (determine). So he insisted that he _______ (organize) the trip properly. It was from this special experience _______ I got a lot in my life.
implantation, in reproduction physiology, the adherence of a fertilized egg to a surface in the reproductive tract, usually to the uterine wall (see uterus), so that the egg may have a suitable environment for growth and development into a new offspring. Fertilization of the egg usually occurs after the egg has left the ovary and is being transported through the fallopian tubes. Male sperm cells deposited in the female reproductive tract travel up to the fallopian tubes to unite with the egg. Once fertilized, the egg begins to undergo a series of cell divisions. The egg takes up to seven days to reach the uterus; by this time the single-celled egg has divided numerous times, so that it is a ball of approximately 200 cells. The uterus has thick walls suitable for egg attachment and growth. A female hormone known as progesterone, secreted by the corpus luteum in the ovary, influences the readiness of the uterine wall for egg implantation. It increases the blood supply in the wall, water content, and secretion of glycogen, a nutrient for the surrounding tissue and developing egg. If the uterus is not first prepared by progesterone, the egg will not attach itself. Progesterone also inhibits muscular contractions in the uterine wall that would tend to reject the adhering egg. When the egg reaches the uterus, it usually remains free in the uterine cavity for about a day. It then attaches to the uterine lining (the endometrium). Cells in the outer surface of the egg grow rapidly once contact is made with the uterine wall. The egg disrupts the surface of the endometrium and actively burrows into the deeper tissue. By the 11th day after fertilization, the egg has completely embedded itself into the endometrium. The product of conception—first the fertilized egg and then the developing child and the placenta—normally remains implanted in the human uterus for nine months.
Scientists Calculate the Half-Life of DNA using Moa Fossils A team of international researchers have used the fossilised bones of three species of extinct giant birds from New Zealand to calculate the half-life of DNA. This study suggests that the double helix can persist in the fossil record under optimum conditions for a lot longer than previously thought, with traces of genetic material being detectable in fossils as old as 6.8 million years. The work, which is controversial, would if proved valid rule out DNA from a dinosaur surviving, so there would be no chance of cloning a member of the Dinosauria from genetic material recovered from their bones or from biting insects trapped in amber. Sorry Michael Crichton fans, but his wonderful idea of a dinosaur-populated “Jurassic Park” is simply not on. A half-life measurement records the time required for a substance to fall to half its measured value at the beginning of a time period. One of this term’s most common applications is in the measurement of radioactive decay. Within palaeontology, for example, once the half-life of a substance, such as an element in an igneous rock deposited in association with sedimentary strata, has been calculated, the exponential rate of decay permits scientists to accurately date the rocks and potentially any fossil material in adjacent strata. However, a team of scientists have now worked out a half-life for DNA itself. If this measuring technique proves valid, then the dating of fossils could become a lot easier and the search for DNA samples within the fossil record can become more targeted. There have been a number of papers published recently that claim to have isolated extremely old, fragmentary DNA, even elements of organic material from dinosaur bones. The need for a reliable model of DNA degradation over the passage of time has been well established.
The international team of palaeontologists took core samples from the leg bones of 158 specimens of New Zealand Moas which were very likely to have mitochondrial DNA preserved in them. Radiocarbon dating allowed the team to accurately work out the ages of the fossil material, and based on this analysis they were able to demonstrate that DNA decays at an exponential rate over time. The half-life of DNA was calculated to be 521 years, much longer than had been demonstrated in other experiments. Fossilised Leg Bones Used in the Study Picture Credit: Morten Allentoft After an animal dies, the cells begin to degrade. Enzymes start to dissolve the bonds between the nucleotides that form the structure of the DNA material contained within the cell. Micro-organisms can speed up the decay process, but it is thought that the presence of ground water, and the chemical reactions brought about by its presence, is mostly responsible for the degradation of the genetic material. As groundwater is abundant and found in most strata, DNA buried in bone undergoing a fossilisation process should, in theory at least, degrade at a set, measurable rate. Calculating the rate of DNA decay has been fraught with difficulties because of the problem of finding enough fossil material containing sufficient DNA to use in any scientific study. Compounding this problem is the fact that variable environmental conditions such as temperature, the amount of oxygen present and the level of microbial activity all have a significant impact on the decay of organic material. The research team led by Morten Allentoft (University of Copenhagen, Denmark) and Michael Bunce (Murdoch University, Perth, Australia) focused their efforts on analysing the DNA from 158 leg bones that belonged to three species of extinct Moa. Moas were giant, flightless birds (nine species, Dinornithiformes) native to New Zealand; some species were over 3.5 metres tall.
These birds, closely related to Australian Emus, became extinct around 1400 AD. These creatures were once abundant on both North and South Island, and the bones used in the study came from three locations, all within a few miles of each other. The close proximity of the specimens studied enabled the scientists to nullify the effect of environmental differences between locations, as the fossils had been forming in almost identical preservation conditions. The Moa – Helping to Unlock the Half-life of DNA Picture Credit: Frans Lanting/National Geographic Stock All the bones have been dated to between 8,000 and 600 years old, and the strata in which they were being preserved had a temperature of around thirteen degrees Celsius, helping to keep the results of any DNA half-life measurement consistent over the entire sample. By comparing the specimens’ ages and degrees of DNA degradation, the researchers calculated that DNA has a half-life of 521 years. That means that after 521 years, half of the bonds between nucleotides in the DNA would have broken; after another 521 years half of the remaining bonds would have degraded, leaving only a quarter of the original material; and so on. Using their research, the team have postulated that detectable DNA could be found in fossils as old as 6.8 million years, but this material would be too fragmented to be used in any cloning work. DNA’s ability to survive in the fossil record, or so it seems, has been seriously underestimated. Postdoctoral researcher Morten Allentoft commented: “DNA degrades at a certain rate, and it therefore makes sense to talk about a half-life.” These results may provide a baseline for predicting long-term DNA survival in fossil bone, helping palaeontologists to assess which fossils are most likely to contain sustainable amounts of DNA. In sub-zero conditions, such as those found in Siberia, DNA may have a half-life that is much longer, perhaps as much as 158,000 years.
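The halving arithmetic described above is ordinary exponential decay, which can be sketched directly. The 521-year half-life is the study's figure; `fraction_intact` is just an illustrative helper, not code from the paper:

```python
HALF_LIFE_YEARS = 521.0   # DNA half-life reported by the Moa-bone study

def fraction_intact(t_years, half_life=HALF_LIFE_YEARS):
    """Fraction of nucleotide bonds still intact after t years of exponential decay."""
    return 0.5 ** (t_years / half_life)

print(fraction_intact(521.0))    # 0.5  -- half the bonds broken after one half-life
print(fraction_intact(1042.0))   # 0.25 -- a quarter remain after two half-lives
```

The same formula with a longer half-life (for example, the 158,000 years suggested for sub-zero conditions) shows why frozen remains are far better candidates for DNA recovery.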
This would potentially permit scientists to extract viable DNA from Ice Age mammals such as Woolly Rhinos and Mammoths. A number of scientists have yet to be convinced by these findings. Eva-Maria Geigl, at the Jacques Monod Institute (Paris, France), remains sceptical. She is concerned that the analysis rests on statistically weak evidence, pointing out that the correlation relies heavily on the Moa bones older than 6,000 years, when fewer than 10 of the 158 bones are actually that old. Michael Bunce defended his work by explaining: “Old fossils are rare and hence there will be less data in this part of the analysis. There is nothing we can do about it other than present what we have at hand – and clearly, the signal is present. The correlation is highly significant.” If genetic material has a predictable time-frame for decay, then palaeontologists may have an opportunity to obtain DNA from important fossil discoveries that reveal life on Earth in the relatively recent geological past. Ever since the Indonesian island of Flores yielded remains of a pygmy-like hominid (Homo floresiensis), nick-named the “hobbit”, speculation has been rife that some specimens might contain DNA that would help pin down its position in the human family tree. Scientists remain uncertain whether these little people were descendants of modern humans or of the much older H. erectus. Unfortunately, exogenous factors would “cloud” the DNA half-life calculations. The conditions in which the fossils were preserved, the degree of groundwater, the amount of oxygen, the level of microbial activity and the ground temperature would all affect the rate of genetic decomposition. The research scientists conclude that “a host of other factors would come into play”, including the time of year when the organism died.
Although the Moa bones used in the study had all been retrieved from very similar environments, the age of the specimens could only account for about 40% of the variation in DNA preservation. The research team admits that the “half-life signal is very noisy”. How a corpse rots, and a whole host of other factors, would influence the rate of decline of any genetic material once present. Based on this work, retrievable and workable DNA could potentially be recovered from a fossil that is 1.8 million years old, but beyond this time-frame recovering sufficient DNA to permit effective study would be virtually impossible. It looks like the non-avian dinosaurs really are extinct after all.
The Aboriginal North American Horse
IN SUPPORT OF SENATE BILL 2278 (North Dakota)
STATEMENT OF CLAIRE HENDERSON
Batiment De Koninck, Quebec City, Quebec, Canada
236 Rue Lavergne, Quebec, Quebec, G1K-2K2, Canada
(February 1, 1991)
Traditional Dakota/Lakota people firmly believe that the aboriginal North American horse did not become extinct after the last Ice Age, and that it was part of their pre-contact culture. Scientists know from fossil remains that the horse originated and evolved in North America, and that these small 12 to 13 hand horses or ponys (sic) migrated to Asia across the Bering Strait, then spread throughout Asia and finally reached Europe. The drawings in the French Lascaux caves, dating to about 10,000 B.C., are a testimony to their long westward migration. Scientists contend, however, that the aboriginal horse became extinct in North America during what is known as the “Pleistocene kill,” in other words, that they disappeared at the same time as the mammoth, the ground sloth, and other Ice Age mammals. This has led anthropologists to assume that Plains Indians only acquired horses after Spaniards accidentally lost some horses in Mexico, in the beginning of the XVIth (16th) century, and that these few head multiplied and eventually reached the prairies. Dakota/Lakota Elders, as well as many other Indian nations, contest this theory and contend that according to their oral history, the North American horse survived the Ice Age, that they had developed a horse culture long before the arrival of Europeans, and, furthermore, that these same distinct ponys (sic) continued to thrive on the prairies until the latter part of the XIXth (19th) century, when the U.S. government ordered them rounded up and destroyed to prevent Indians from leaving the newly-created reservations. 
Although there is extensive evidence of this massive slaughter, no definitive evidence has yet been found to substantiate the Elders’ other claim, but there are a number of arguments in favour of the Indian position. Some biologists have pointed out that the Elders could indeed be correct, for while the mammoth and other Pleistocene mammals died out during the last Ice Age on both continents, if the horse survived in Eurasia, there is no reason for it to have become extinct in North America, especially given the similar environment and climate of the steppes and prairies. In Eurasia, scientists have been able to trace the domestication of the horse through extensive archaeological work, fossil remains, burials, middens (garbage heaps) and artifacts. Such finds have, for instance, enabled them to determine that peoples there ate horses and buried them with notables, and helped them establish that men started riding about 3,500 B.C. By comparison, very little archaeological work has been done on the prairies, due in large part to budget constraints. There are also other problems. Whereas the Scythians, for instance, left magnificent gold jewelry which can be dated to 400 B.C., Indian petroglyphs are usually impossible to date accurately. Digs have also concentrated mainly on village sites, but if prehistoric prairie Indians had the same aversion to eating horsemeat as Dakota/Lakota people have today, then middens would not contain the necessary evidence either. It is well known that Dakota/Lakota people have traditionally eaten dogs, and indeed they still do at certain times, but conversely they would no more eat horses than Europeans would eat dogs. If both these cultural traits, regarding horses and dogs, are ancestral, it would be useless to seek horse remains in garbage heaps. Dakota/Lakota burial customs are well documented: Bodies were placed on scaffolds on the prairies, and the bones were collected, cleaned and buried about one year later. 
As there is no tradition of ceremonial horse burials, with or without humans, one can assume that horses were simply left to die on the prairies, where wolves and other scavengers would have efficiently dealt with their carcasses, thereby leaving scientists, once again, with few, if any, remains to discover. So whereas Eurasian cultural practices ensured the survival of physical evidence of the presence and domestication of the horse thousands of years ago, it might well be that pre-contact Indian cultural practices and environmental factors are responsible for the absence of the same evidence on this continent. The Indian pony and its characteristics: Dakota/Lakota people have an extensive “horse vocabulary,” and they distinguish between their “own” horses, which among other names they call “sunkdudan,” the small-legged horse, and the European imported horse, which they call the long-legged horse, or the American Horse. Between 1984 and 1987, this writer conducted extensive research on the prairies to retrace the itinerary of Louis-Joseph LaVerendrie, who left a village site near Bismarck, North Dakota, on 23 July, 1742, in an attempt to find the “People of the Horse.” He hoped they would take him to the “Western (China) Sea,” which Europeans had long sought in North America. He traveled 20 days, guided by two Mandans, and on 11 August (1742), he reached the “Mountain of the People of the Horse,” where he waited 5 weeks for their arrival. In trying to locate this campsite, this writer used LaVerendrie’s maps and diaries, as well as other documentation, and interviewed numerous Elders and old ranchers. Eventually the site was located in Wyoming, and all of the people he met and traveled with were found to be Lakotas. But these interviews also led to a wealth of information about the Indian pony. 
According to Elders, the aboriginal pony had the following characteristics: It was small, about 13 hands; it had a “strait” back, necessitating a different saddle from that used on European horses; wider nostrils; and larger lungs, so that its endurance was proverbial. One breed had a long mane and shaggy (curly) hair, while another had a “singed mane.” This writer contacted a specialist in mammals and was told the Elders were describing the Tarpan and the Polish Przewalski horses, and that early, independent eyewitness accounts ought to be investigated to confirm the Dakota statements. This led to further research for credible European reports. Frederick Wilhelm, Prince of Wurtemberg, a widely respected naturalist, traveled along the Mississippi and up the Missouri in 1823. Prince Wilhelm had studied zoology, botany and related sciences under Dr. Lebret, himself a student of Jussieu, Cuvier and Gay-Lussac. An English translation of his diary, titled First Journey to North America in the years 1822 to 1823, was published in 1938 by the South Dakota Historical Society. His memoirs show that he was a keen observer of the fauna and flora wherever he traveled, and it is interesting to note his remarks on the Indian pony’s characteristics: “I interrupt my discourse, to say a few words concerning the horses of the Indians…At a cursory glance one might mistake them for horses from the steppes of eastern Europe. The long manes, long necks, strong bodies and strait back make them appear like the horses of Poland…On the whole the horses of the Indians are very enduring…” (So. Dak. Hist. Soc., XIX:378). 
He explained this curious phenomena (sic) by postulating that the Indian pony had descended from the Spanish horses, but that it had “degenerated,” so that “They now resemble the parent (Spanish) stock very little.” If the Elders are correct, and if the aboriginal pony did survive, it might well also explain why the ponies so closely resembled the Tarpan or the Polish horses, and perhaps the systematic extermination of these ponies by the U.S. government has deprived science of very valuable information. Early French manuscripts: Evidence of a Dakota horse culture prior to 1650. Other evidence exists which also militates in favor of the Indian position, that the aboriginal horse had already been tamed and ridden at the time of (white) contact. The first mention of horses in French manuscripts dates from 1657, and led to an amusing misunderstanding. In August 1657, Pierre Esprit Radisson traveled from Quebec to Onondaga (Syracuse, N.Y.), and during this canoe trip, a 50-year-old Iroquois told the explorer of a three-year trip he had taken as a young man to the “great river that divides itself in two” — the Mississippi. (Scull, Gideon G., Voyages, 1943:105). During that trip, he assured Radisson, he had seen “a beast like a Dutch horse, that had a long & straight horne in the forehead,” and this horne was some 5 feet long. Following this story, Radisson (Scull:107) comments: “Now whether it was a unicorne, or a fibbe made by that wild man, yet (that) I cannot tell, but several others tould me the same, who have seene severall times the same beast, so that I firmly believe it.” Similar stories had also reached the Atlantic Dutch colonies. O’Callaghan’s Documentary History of New York (Vol. IV:77, 1851) has an engraving of this animal, with the title “Wild Animals of New Netherlands,” which has been taken from a Dutch work published in Amsterdam in 1671. 
The description of this strange beast: “On the borders of Canada animals are now and again seen somewhat resembling a horse; they have cloven hoofs, shaggy manes, a horn right out of the forehead, a tail like that of a wild hog, black eyes, a stag’s neck, and love the gloomiest wilderness, are shy of each other, so that the male never feeds with the female except when they associate for the purpose of increase, then they lay aside their ferocity. As soon as the rutting season is past, they again not only become wild but even attack their own.” (Scull, 1943:107, footnote 42.) The clue to the identity of this fabulous beast — whose habits so resembled those of the horse — was finally discovered in the account of the western journey of the explorer René-Robert Cavelier de La Salle. He reached the Illinois River in January 1680, and began to construct Fort Crevecoeur, at Peoria, Illinois. On 17 February (1680), two western chiefs visited him, one of whom had a tobacco pouch made of “the foot of a horse with part of the skin of the leg.” Upon being questioned, the chief answered that 5 days west of where he lived “the inhabitants fought on horseback with lances…” From this description, it became evident that the “unicorns” seen by the Iroquois, in his younger days, were simply horses whose riders, perhaps hunting buffalo at a gallop, held their long spears in front of them, between the horse’s ears. As for the “cloven hoofs,” these could well have been the seams of the hide horseshoes Indians sometimes used. Concerning the identity of these expert riders, La Salle thought they were Spaniards: “(These riders) had long hair. 
This circumstance made us believe that he was speaking of Spaniards from New Mexico because Indians here do not let their hair grow long.” La Salle was at the time with Illinois Indians and had not yet reached the Mississippi, so he had no way of knowing the hairstyle of other Indian nations, but Radisson had gone to “the great river that divides itself in two” in 1655 and again in 1659, and had met Dakotas. Radisson (Scull, 1943:151) stated: “Those people have their haires long. They reape twice a yeare; they were called Tatanka, that is to say buff (buffalo).” Tatanka is of course the Dakota/Lakota name of the buffalo, and as Radisson states, it was — and still is — the sacred name of the entire “Sioux” nation: Tatanka Oyate, or Pte Oyate, The Buffalo Nation. This passage is interesting because it contains the very first Dakota word ever written by a European, and at the same time gives the true name of the nation, mistakenly called “Sioux” by later Europeans. Were these expert prairie horsemen indeed Dakota/Lakota people, as Radisson’s quote states? A manuscript map dated 1673, but probably earlier still, and its lengthy accompanying text indicate that they undoubtedly were. The text states, and the map shows, the entire plains area, from the Mississippi to the Rocky Mountains, as “Manitounie,” a French transcription of the old Dakota term for prairie, “Manitu,” and “oni,” to live. Hence Prairie Dwellers, a name which the Ojibwa translated into their own language as “Mascoutens Puane,” from “Mascoutens,” prairie, and “Puane/Boine,” the still current term for all “Sioux” people. Both names were also translated into French as “Sioux des Prairies,” Prairie Sioux. 
This same map, part of the Codex Canadensis, at the Gilcrease Museum in Oklahoma, also shows that near the confluence of the Mississippi and the Missouri, where the Iroquois had seen his “unicorn,” there were indeed “Nations who have horses.” Hence, French manuscripts indicate that the entire prairies, from the Mississippi to the Rockies, were occupied by the Dakota/Lakota people when the first French explorers went there, and that they were skilled horsemen. Prince Frederick of Wurtemberg, who witnessed the Indian technique for hunting buffalo, was duly impressed: “The Indians are extremely bold and daring riders. This is shown especially in their hunting of the buffalo. In this dangerous work it is often hard to say which has the greater skill, the rider or the horse. Since the Indian who manipulates the bow and arrow cannot make use of the reins, he must leave the horse entirely to its own discretion. The animal must be carefully trained to approach the bison within a few paces. It must run close to the powerful and often angry bull, and must be ready at all times to evade with the greatest swiftness the charges of the terrible opponent.” (S. Dak. Hist. Soc., XIX:379). The interesting point here is that several years prior to 1657, these Prairie Indians were already expert horsemen, having developed remarkable riding and hunting skills. That such expertise was developed by 1650 is remarkable in many ways: It implies that the original 11 head had so multiplied that within a few short years after the horses appeared, these Prairie Dakotas had devised methods for catching them, had learned to tame them, had become expert riders, had devised the most efficient buffalo hunting techniques on horseback, and had also devised techniques for training their horses in these skills. These accomplishments, in so short a time, seem all the more extraordinary when examining the development of similar skills in other areas of the world. 
Eurasia: A comparison. By comparison, in Eurasia the thought of catching and taming horses took thousands of years. An easily accessible Time-Life book, titled First Horseman, by Frank Trippet, describes the reasons why it took thousands of years for people first confronted with horses to even think of riding them: “The horse’s nature obviously had a lot to do with its initial failure to attract riders. Few men would have been tempted to mount so unpredictable a beast — and fewer still would have been able to stay aboard. (It) had evolved into the most temperamental of all domestic animals, able to elude predators by its sheer speed — the only possible defence on terrain (the Steppe) that offered no place to hide. In body and mind the horse is perfectly designed for flight, not fight. The horse relies on its uncommonly keen eyesight and marvelously acute sense of smell to send it galloping off at any hint of danger. Yet, once trapped, it kicks, bucks, slashes out with its forefeet and bites — often lethally. Also stallions protecting mares and foals will attack.” “Perhaps most important, the untamed horse is naturally likely to go all but berserk when anything lands on its back, simply because it has learned through the millennia that anything is likely to be a predator. Thus, if man had dreamed of riding the horse much earlier than he did, he could hardly have expected a hospitable reception from the animal that one day would become his partner.” (Trippet, 1974:47). Thus Trippet explains why inhabitants of the steppes only began riding about 3,500 B.C., thousands of years after horses first appeared on that continent. The same reasons, however, would seem to preclude Prairie Dakotas from being so bold and so skillful, so quickly, not to mention adopting an entirely new horse culture in an exceedingly short time. Yet another point is even more interesting. 
It has been argued that Indians had seen Spanish riders, and thus had developed their astonishing equestrian skills, but an example from the Middle East, where a similar situation occurred, shows how long it took a culture to master this “strange beast” after its arrival: its people rode awkwardly for several generations after the horse first appeared among them, even when experts were there to teach them. “More than a century passed before the Assyrians, learning from more skilled horsemen, like the Scythians, began to feel at home on horseback…For example, Assyrian cavalrymen of the Ninth Century B.C. required aides to ride beside them and manage their mounts so that they would be free to use their weapons.” (Trippet, 1974:51) These examples from other cultures make it difficult to believe that the aboriginal horse had indeed disappeared during the last Ice Age. First, the initial 11 head herd, released in the early XVIth (16th) century, would have had to multiply rapidly in a few years, and to such an extent that horses in sufficient numbers reached the prairies. Then, between that time and at the latest 1650, Dakota/Lakota people would have had to overcome the horses’ “mercurial disposition.” Prince Frederick mentions repeatedly how wild these ponys (sic) were. Then, they would have had to learn to catch horses, tame them, learn to ride, become expert horsemen, and devise the best techniques for training their horses in these skills. Compared to the time required by the Assyrians — with expert teachers — and indeed all other Eurasian horse cultures, to develop such accomplishments, the Indian feat seems unbelievable. Trippet (1974:47-48) concluded that: “In light of the horse’s mercurial disposition, its eventual conquest by man seems in many ways a fantastic achievement.” Even more fantastic, then, is the incredible speed with which a horse culture was developed by the Dakota people. 
It might, however, be explained if the aboriginal North American horse had survived the Pleistocene, and thus had been part of a long-standing horse culture before the arrival of Europeans, as Dakota/Lakota Elders contend — and, therefore, if they had acquired these skills over the millennia, like their Eurasian counterparts, rather than in the space of one or two generations. Although there is as yet no conclusive physical evidence that the aboriginal horse survived the Pleistocene and was part of the pre-contact civilization on the prairies, there is sufficient evidence — and indeed much more than is presented in this short paper — for experts to seriously reconsider the long-held theory that Prairie Dakotas had to wait for the arrival of the white man to give them horses. According to the Dakota/Lakota oral tradition, the aboriginal horse never became extinct and was part of their pre-contact culture. The horse is aboriginal to North America, and biologists can offer no scientific reasons for its extinction here and not in Eurasia. The absence of post-glacial remains could well be explained by Indian/Dakota cultural traits and environmental factors. The astounding horsemanship of Prairie Dakotas within a few years of the appearance of the “Spanish horse” argues for this having been a traditional skill. The government pony-extermination policy may well have deprived scientists of unique specimens. Many theories have taken root because of preconceptions and bias. In this instance, no one can deny a long-standing prejudice against Indians, and the efforts which were made to minimize their accomplishments in many areas and to discount oral history. In light of the above, one might well wonder if the long-held theory regarding the Indian pony is not a survival of these XIXth (19th) century prejudices. Definite proof of the survival of the aboriginal North American horse, and of a pre-contact Indian horse culture, might yet be discovered. 
Whatever happens, the few remaining Indian ponies should be treasured as part of North Dakota’s unique heritage. Horses definitely originated here, and whether the few remaining ponys (sic) are throwbacks or actual descendants, they are a living testimony to the state’s contribution to the advancement of many civilizations throughout the world.
PRESENTED BY Claire Henderson, Laval University, Quebec, Canada. 2-1-91.
The Learning TouchPoints Secondary Workbook contains 28 student activity sheets reinforcing the TouchPoints for early learners at home. Each page helps the student identify and practice the Touching/Counting Patterns for each numeral. The workbook is part of the ‘I Do, We Do, and You Do’ teaching strategy: students engage in small-group or independent practice as they learn the Touching/Counting Patterns at home. It is a valuable addition to the Concrete-Representational-Abstract continuum of instruction and learning, and its use aligns with the mathematical standards of using appropriate tools and making use of structure. This 28-page workbook is an ideal resource for students who need a little extra practice.