We have already made it clear that each paragraph represents a particular idea of the composition. Still, when there are several passages within one essay, each with its own idea, how can the reader grasp the general message? This is where paragraph transitions come into play.
Transitions are linking elements that show various relations between sentences. The units that make paragraphs logically connected with each other are conjunctive words, phrases and logical transitions.
You have surely used transitional words and phrases in writing and speech, though you might not have realized that they were linking elements playing a significant role in text coherence. So let us put things in context.
Conjunctive words are usually adverbs that may occupy various positions in a sentence. For instance: finally, anyway, hence, still, therefore, thus, undoubtedly, meanwhile, etc. With the help of transition adverbs, each sentence taken separately, and the passage as a whole, acquires a particular meaning and tone.
Linking phrases perform the same function. Here are some of them: for example, as a result, in addition, said differently, etc.
Despite this rich diversity of conjunctive elements, a writer should use them carefully, checking their precise meaning beforehand.
Sometimes it is not necessary to use transition words between paragraphs at all. Relations between passages can be traced logically in written language and by means of intonation in oral speech.
As far as writing is concerned, you can also use pronouns and simple words to secure a logical connection between paragraphs. For instance: Getting married at a young age, couples have an immature understanding of love and think little of possible consequences.
The next paragraph may start like this: Those blinded by this first feeling usually regret having started family life early.
The pronoun those acts as a logical connector between the passages.
So don’t underestimate the role of transitions in a text: they help you produce a smooth piece of writing that is easy for the reader to understand.
1 Forces Inside Earth (43): To relieve this stress, the rocks tend to bend, compress, or stretch. If the force is great enough, the rocks will break. Once the elastic limit is passed, the rocks may break. When rocks break, they move along surfaces called faults.
2 An earthquake is the vibration produced by the breaking of rock. Most earthquakes occur near plate boundaries. Rocks move past each other along faults; rough surfaces catch, and movement stops along the fault.
3 How Earthquakes Occur: Stress causes the rocks to bend and change shape. When rocks are stressed beyond their elastic limit, they can break, move along the fault, and return to their original shapes. An earthquake results.
4 Types of Faults: Three types of forces—tension, compression, and shear—act on rocks. Tension is the force that pulls rocks apart, and compression is the force that squeezes rocks together. Shear is the force that causes rocks on either side of a fault to slide past each other.
5 Normal Faults: Along a normal fault, rock above the fault surface moves downward in relation to rock below the fault surface.
6 Reverse Faults: Reverse faults result from compression forces that squeeze rock. If rock breaks from forces pushing from opposite directions, rock above a reverse fault surface is forced up and over the rock below the fault surface.
7 Strike-Slip Faults: At a strike-slip fault, rocks on either side of the fault are moving past each other without much upward or downward movement. The San Andreas Fault is the boundary between two of Earth’s plates that are moving sideways past each other.
8 Vibrations produced by breaking rock are called __________. A. earthquakes B. eruptions C. faults D. liquefaction
9 The type of force that pulls rocks apart is __________. A. compression B. shear C. surface D. tension
10 At a __________ fault, rocks on either side of the fault are moving past each other with little upward or downward movement. A. compression B. normal C. reverse D. strike-slip
Guide 15-1. Bernoulli's Equation and Conservation of Energy
Prerequisite: Study sections 15-6 to 8 of the text.
The text begins the discussion of Bernoulli's equation with the special cases of i) change in speed of fluid without change in height and ii) change in height of fluid without change in speed. Below, we will see that all such situations are governed by two relationships: a) the equation of continuity, and b) the general conservation of energy equation, Wext = ΔEsys.
The equation of continuity is presented in section 15-6. The equation simply states that the mass of fluid passing through any particular cross-sectional slice of a pipe per unit of time is constant. That is, ρAv = constant, where ρ is the density of the fluid, A is the cross-sectional area of the pipe, and v is the velocity of the fluid. If the fluid is incompressible, then the density of the fluid is also a constant. In this case, Av = constant. These results are also expressed by Equations 15-11 and 15-12 in the text, where the subscripts 1 and 2 indicate any two points in time. Note that while the examples in the text show a sudden change in cross-sectional area (see Example 15-7), the continuity equation applies just as well to situations where the cross-sectional area changes continuously. The diagram below illustrates this. A slice of fluid of mass Δm has density ρ1, cross-sectional area A1, and moves at velocity v1 in a particular portion of a pipe. The pipe continuously narrows. At some later time, a slice of the same mass has density ρ2, cross-sectional area A2 and velocity v2. Note that the slices in the diagram are exaggerated in size. Generally, we assume that the slices are so narrow that we can take the cross-sectional area from one side of a slice to the other to be uniform.
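As a quick numerical check of the incompressible case, the sketch below applies Av = constant to a pipe that narrows to half its diameter; the diameters and speed are made-up values, not figures from the text or its examples.

```python
# Incompressible continuity: A1*v1 = A2*v2, so the fluid speeds up where the pipe narrows.
# The numbers below are illustrative only.
import math

def speed_downstream(d1_m: float, v1_m_s: float, d2_m: float) -> float:
    """Return the speed at a section of diameter d2_m, given speed v1_m_s at diameter d1_m."""
    a1 = math.pi * (d1_m / 2) ** 2
    a2 = math.pi * (d2_m / 2) ** 2
    return a1 * v1_m_s / a2

print(speed_downstream(d1_m=0.040, v1_m_s=1.5, d2_m=0.020))  # 6.0 m/s: half the diameter, four times the speed
```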
From this point on, we'll assume that the fluid is incompressible, and we'll drop the subscripts from the density.
Next we'll develop the conservation of energy equation. The general situation is illustrated in the diagram below. We take as our system the volume of fluid between and including the two slices of mass Δm. Actually, one should think of this as a slice of mass being raised from height y1 to height y2. Due to this change in height, we include the Earth in the system. The slices have equal volume ΔV as a result of the fact that they have the same mass and density. The widths of the slices are Δx1 and Δx2. Since we're taking the slices to be cylinders, then ΔV = A1Δx1 = A2Δx2. The pressures on the left side of the first slice and on the right side of the second slice are P1 and P2 respectively. The corresponding forces on the slices are F1 = P1A1 and F2 = P2A2. Note that F1 points in the direction of fluid displacement, while F2 points opposite the direction of fluid displacement. Note also that these forces are external to the system that we've selected. Hence, they do external work on the system. We need not consider forces exerted to the left on slice 1 or to the right on slice 2, as these forces are internal to the system we've selected. Likewise, we need not consider forces exerted on the mass of fluid between the slices.
It's important to note that we're assuming there are no frictional forces either between the fluid and the walls of the pipe or between different parts of the fluid. Such a situation is called non-viscous flow. For real fluids of low viscosity (water, for example) in pipes, the analysis we give below is reasonably correct as long as the flow is not too fast and the pipe is not too narrow.
Now let's apply conservation of energy. We start with the same equation as always: Wext = ΔEsys. The right-hand side has kinetic energy and gravitational potential energy terms, while the left-hand side has terms for the work done by the external forces. Let's look at the latter next. The work done by F1 is W1 = F1Δx1cos0° = F1Δx1, while that done by F2 is W2 = F2Δx2cos180° = -F2Δx2. Therefore, the net external work done on the system is Wext = W1 + W2 = P1A1Δx1 - P2A2Δx2 = (P1 - P2)ΔV.
Now we equate the external work to ΔEsys: (P1 - P2)ΔV = ΔK + ΔUg = (1/2)Δmv2² - (1/2)Δmv1² + Δmgy2 - Δmgy1.
We should point out the reason why we use Δm in the ΔUg term even though our system includes the fluid between the slices. One can easily imagine that the two slices are positioned adjacent to each other so that there is no mass of fluid between them. This makes no difference to the preceding analysis. So we needn't consider the mass between the slices.
After substituting ΔV = Δm/ρ, we can divide out a common term of Δm: (P1 - P2)/ρ = (1/2)v2² - (1/2)v1² + gy2 - gy1.
We arrange terms to place all terms related to slice 1 on the left-hand side of the equal sign and all terms related to slice 2 on the right-hand side: P1 + (1/2)ρv1² + ρgy1 = P2 + (1/2)ρv2² + ρgy2.
This is the standard form of Bernoulli's equation. If there is no change in elevation of the fluid, then the equation reduces to P1 + (1/2)ρv1² = P2 + (1/2)ρv2².
This is Equation 15-14 in the text. If there is a change in elevation but no change in cross-sectional area, and hence no change in the velocity of the fluid, Bernoulli's equation reduces to P1 + ρgy1 = P2 + ρgy2.
This is Equation 15-15 in the text.
One more example is that of a static fluid, for which v1 = v2 = 0. In that case, Bernoulli's equation reduces to P1 + ρgy1 = P2 + ρgy2. This can be rearranged to give Equation 15-7 in the text: P2 = P1 + ρgh,
where the depth h = y1 - y2.
The textbook provides examples of the application of the general Bernoulli's equation as well as the special cases mentioned above. However, except for the frequently-used equation P2 - P1 = ρgh, we recommend starting with the general equation and then applying the specific conditions of the problem. Of course, one needs to identify the initial and final states first. Then the pressures, speeds, and elevations associated with those states are identified. One may also need to use the equation of continuity to determine how the speed of the fluid changes. If, for example, the cross-sectional area of the pipe does not change and the fluid is incompressible, then the speed will not change.
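To make this recommended procedure concrete, here is a minimal numerical sketch (not from the text; the diameters, pressure, and heights are made-up values). It identifies the two states, uses the equation of continuity to find the second speed, and then solves the general Bernoulli equation for the second pressure.

```python
# A sketch of the procedure recommended above, with illustrative numbers:
# water flows from a wide, low section (state 1) to a narrow, higher section (state 2).
import math

RHO = 1000.0   # kg/m^3, water (treated as incompressible)
G = 9.81       # m/s^2

def area(diameter_m: float) -> float:
    return math.pi * (diameter_m / 2) ** 2

# State 1 (assumed known) and the geometry of state 2 (assumed values).
p1, v1, y1, d1 = 180e3, 2.0, 0.0, 0.050    # Pa, m/s, m, m
y2, d2 = 3.0, 0.025                        # m, m

# Step 1: continuity (A1*v1 = A2*v2) gives the speed at state 2.
v2 = area(d1) * v1 / area(d2)

# Step 2: solve the general Bernoulli equation for P2:
#   P1 + (1/2)*rho*v1^2 + rho*g*y1 = P2 + (1/2)*rho*v2^2 + rho*g*y2
p2 = p1 + 0.5 * RHO * (v1**2 - v2**2) + RHO * G * (y1 - y2)

print(f"v2 = {v2:.1f} m/s, P2 = {p2 / 1000:.1f} kPa")   # v2 = 8.0 m/s, P2 = 120.6 kPa
```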
Mount Cayley volcanic field
- District: New Westminster Land District
- Part of: Garibaldi Volcanic Belt
- Length: 31 km (19 mi)
- Width: 6 km (4 mi)
- Geology: lava flows, stratovolcanoes
The extent of the Garibaldi Volcanic Belt showing the location of the Mount Cayley volcanic field (here referred to as the "Mount Cayley area") and its volcanic features.
The Mount Cayley volcanic field is a remote volcanic zone on the South Coast of British Columbia, Canada, stretching 31 km (19 mi) from the Pemberton Icefield to the Squamish River. It forms a segment of the Garibaldi Volcanic Belt, the Canadian portion of the Cascade Volcanic Arc, which extends from Northern California to southwestern British Columbia. Most of the Cayley volcanoes were formed during periods of volcanism under sheets of glacial ice throughout the last glacial period. These subglacial eruptions formed steep, flat-topped volcanoes and subglacial lava domes, most of which have been entirely exposed by deglaciation. However, at least two volcanoes predate the last glacial period and both are highly eroded. The field gets its name from Mount Cayley, the largest and most persistent volcano, located at the southern end of the Powder Mountain Icefield. This icefield covers much of the central portion of the volcanic field and is one of the several glacial fields in the Pacific Ranges of the Coast Mountains.
Eruptions along the length of the field began between 1.6 and 5.3 million years ago. At least 23 eruptions have occurred throughout its eruptive history. This volcanic activity ranged from effusive to explosive, with magma compositions ranging from basaltic to rhyolitic. Because the Mount Cayley volcanic field has a high elevation and consists of a cluster of mostly high altitude, non-overlapping volcanoes, subglacial activity is likely to have occurred under less than 800 m (2,600 ft) of glacial ice. The style of this glaciation promoted meltwater escape during eruptions. The steep profile of the volcanic field and its subglacial landforms support this hypothesis. As a result, volcanic features in the field that interacted with glacial ice lack rocks that display evidence of abundant water during eruption, such as hyaloclastite and pillow lava.
Of the entire volcanic field, the southern portion has the most known volcanoes. Here, at least 11 of them are situated on top of a long narrow mountain ridge and in adjacent river valleys. The central portion contains at least five volcanoes situated at the Powder Mountain Icefield. To the north, two volcanoes form a sparse area of volcanism. Many of these volcanoes were formed between 0.01 and 1.6 million years ago, some of which show evidence of volcanic activity in the past 10,000 years.
The Mount Cayley volcanic field formed as a result of the ongoing subduction of the Juan de Fuca Plate under the North American Plate at the Cascadia subduction zone along the British Columbia Coast. This is a 1,094 km (680 mi) long fault zone running 80 km (50 mi) off the Pacific Northwest from Northern California to southwestern British Columbia. The plates move at a relative rate of over 10 mm (0.39 in) per year at an oblique angle to the subduction zone. Because of the very large fault area, the Cascadia subduction zone can produce large earthquakes of magnitude 7.0 or greater. The interface between the Juan de Fuca and North American plates remains locked for periods of roughly 500 years. During these periods, stress builds up on the interface between the plates and causes uplift of the North American margin. When the plate finally slips, the 500 years of stored energy are released in a massive earthquake.
Unlike most subduction zones worldwide, there is no deep oceanic trench present along the continental margin in Cascadia. The reason is that the mouth of the Columbia River empties directly into the subduction zone and deposits silt at the bottom of the Pacific Ocean, burying this large depression. Massive floods from prehistoric Glacial Lake Missoula during the Late Pleistocene also deposited large amounts of sediment into the trench. However, in common with other subduction zones, the outer margin is slowly being compressed, similar to a giant spring. When the stored energy is suddenly released by slippage across the fault at irregular intervals, the Cascadia subduction zone can create very large earthquakes, such as the magnitude 9.0 Cascadia earthquake on January 26, 1700. However, earthquakes along the Cascadia subduction zone are less common than expected and there is evidence of a decline in volcanic activity over the past few million years. The probable explanation lies in the rate of convergence between the Juan de Fuca and North American plates. These two tectonic plates currently converge 3 cm (1.2 in) to 4 cm (1.6 in) per year. This is only about half the rate of convergence from seven million years ago.
Scientists have estimated that there have been at least 13 significant earthquakes along the Cascadia subduction zone in the past 6,000 years. The most recent, the 1700 Cascadia earthquake, was recorded in the oral traditions of the First Nations people on Vancouver Island. It caused considerable tremors and a massive tsunami that traveled across the Pacific Ocean. The significant shaking associated with this earthquake demolished houses of the Cowichan Tribes on Vancouver Island and caused several landslides. Shaking due to this earthquake made it too difficult for the Cowichan people to stand, and the tremors were so lengthy that they were sickened. The tsunami created by the earthquake ultimately devastated a winter village at Pachena Bay, killing all the people that lived there. The 1700 Cascadia earthquake caused near-shore subsidence, submerging marshes and forests on the coast that were later buried under more recent debris.
Lying in the middle of the Mount Cayley volcanic field is a subglacial volcano named Slag Hill. At least two geological units compose the edifice. Slag Hill proper consists of andesite lava flows and small amounts of pyroclastic rock. Lying on the western portion of Slag Hill is a lava flow that likely erupted less than 10,000 years ago due to the lack of features indicating volcano-ice interactions. The Slag Hill flow-dominated tuya 900 m (3,000 ft) northeast of Slag Hill proper consists of a flat-topped, steep-sided pile of andesite. It protrudes through remnants of volcanic material erupted from Slag Hill proper, but it represents a separate volcanic vent due to its geographical appearance. This small subglacial volcano possibly formed between 25,000 and 10,000 years ago throughout the waning stages of the Fraser Glaciation.
Cauldron Dome, a subglacial volcano north of Mount Cayley, lies west of the Powder Mountain Icefield. Like Slag Hill, it is composed of two geological units. Upper Cauldron Dome is a flat-topped, oval-shaped pile of at least five andesite lava flows that resembles a tuya. The five andesite flows are columnar jointed and were likely extruded through glacial ice. The latest volcanic activity might have occurred between 10,000 and 25,000 years ago when this area was still influenced by glacial ice of the Fraser Glaciation. Lower Cauldron Dome, the youngest unit comprising the entire Cauldron Dome subglacial volcano, consists of a flat-topped, steep-sided pile of andesite lava flows 1,800 m (5,900 ft) long and a maximum thickness of 220 m (720 ft). These volcanics were extruded about 10,000 years ago during the waning stages of the Fraser Glaciation from a vent adjacent to upper Cauldron Dome that is currently buried under glacial ice.
Ring Mountain, a flow-dominated tuya lying at the northern portion of the Mount Cayley volcanic field, consists of a pile of at least five andesite lava flows lying on a mountain ridge. Its steep-sided flanks reach heights of 500 m (1,600 ft) and are composed of volcanic rubble. This makes it impossible to measure its exact base elevation or how many lava flows constitute the edifice. With a summit elevation of 2,192 m (7,192 ft), Ring Mountain had its last volcanic activity between 25,000 and 10,000 years ago when the Fraser Glaciation was close to its maximum. Northwest of Ring Mountain lies a minor andesite lava flow. Its chemistry is somewhat unlike other andesite flows comprising Ring Mountain, but it probably erupted from a volcanic vent adjacent to or at Ring Mountain. The part of it that lies higher in elevation contains some features that indicate lava-ice interactions, while the lower-elevation portion of it does not. Therefore, this minor lava flow was likely extruded after Ring Mountain formed, but when glacial ice still covered a broader area than it does today, and the lava flowed beyond the region in which glacial ice existed at that time.
To the north lies Little Ring Mountain, another flow-dominated tuya lying at the northern portion of the Mount Cayley volcanic field. It consists of a pile of at least three andesite lava flows lying on a mountain ridge. Its steep-sided flanks reach heights of 240 m (790 ft) and are composed of volcanic rubble. This makes it impossible to measure its exact base elevation or how many lava flows comprise the edifice. With a summit elevation of 2,147 m (7,044 ft), Little Ring Mountain had its last volcanic activity between 25,000 and 10,000 years ago when the Fraser Glaciation was close to its maximum.
Ember Ridge, a mountain ridge between Tricouni Peak and Mount Fee, consists of at least eight lava domes composed of andesite. They were likely formed between 25,000 and 10,000 years ago when lava erupted beneath glacial ice of the Fraser Glaciation. Their current structures are comparable to their original forms due to the minimal degree of erosion. As a result, the domes display the shapes and columnar joints typical of subglacial volcanoes. The random shapes of the Ember Ridge domes are the result of erupted lava taking advantage of former ice pockets, eruptions taking place on uneven surfaces, subsidence of the domes during volcanic activity to create rubble and separation of older columnar units during more recent eruptions. The northern dome, known as Ember Ridge North, covers the summit and eastern flank of the mountain ridge. It comprises at least one lava flow that reaches a thickness of 100 m (330 ft), as well as the thinnest columnar units in the Mount Cayley volcanic field. The columnar joints are small, indicating that the erupted lava cooled rapidly, and are mainly located on the dome's summit. Ember Ridge Northeast, the smallest subglacial dome of Ember Ridge, comprises one lava flow that has a thickness of no more than 40 m (130 ft). Ember Ridge Northwest, the most nearly circular subglacial dome, comprises at least one lava flow. Ember Ridge Southeast is the most complex of the Ember Ridge domes, consisting of a series of lava flows with a thickness of 60 m (200 ft). It is also the only Ember Ridge dome that contains large amounts of rubble. Ember Ridge Southwest comprises at least one lava flow that reaches a thickness of 80 m (260 ft). It is the only subglacial dome of Ember Ridge that contains hyaloclastite. Ember Ridge West comprises only one lava flow that reaches a thickness of 60 m (200 ft).
Mount Brew, 18 km (11 mi) southwest of the resort town of Whistler, is a 1,757 m (5,764 ft) high lava dome composed of andesite or dacite that probably formed subglacially between 25,000 and 10,000 years ago. It contains two masses of rock that might resemble ice-marginal lava flows. These edifices have not been studied in detail but they could have formed during the same period as the Ember Ridge subglacial domes due to their structures, columnar joints and compositions.
The Mount Cayley massif, 2,385 m (7,825 ft) in elevation, is the largest and most persistent volcano in the Mount Cayley volcanic field. It is a highly eroded stratovolcano composed of dacite and rhyodacite lava that was deposited during three phases of volcanic activity. The first eruptive phase started about four million years ago with the eruption of dacite lava flows and pyroclastic rock. This resulted in the creation of Mount Cayley proper. Subsequent volcanism during this volcanic phase constructed a significant lava dome. This acts like a volcanic plug and composes the lava spines that currently form pinnacles on Cayley's rugged summit. After Mount Cayley proper was constructed, lava flows, tephra and welded dacite rubble were erupted. This second phase of activity 2.7 ± 0.7 million years ago resulted in the creation of the Vulcan's Thumb, a craggy volcanic ridge on the southern flank of Mount Cayley proper. Lengthy dissection from an extended period of erosion demolished much of the original stratovolcano. Volcanic activity after this prolonged period of erosion produced thick dacite lava flows from parasitic vents 300,000 years ago that extended into the Turbid and Shovelnose Creek valleys near the Squamish River. This subsequently created two minor parasitic lava domes 200,000 years ago. These three volcanic events contrast with several others around Cayley in that they do not show signs of interaction with glacial ice.
Immediately southeast of Mount Cayley lies Mount Fee, an extensively eroded volcano containing a north-south trending ridge. It has an elevation of 2,162 m (7,093 ft) and is one of the older volcanic features in the Mount Cayley volcanic field. Its volcanics are undated, but its large amount of dissection and evidence of glacial ice overriding the volcano indicate that it formed more than 75,000 years ago, before the Wisconsinan Glaciation. Therefore, volcanism at Mount Fee does not display evidence of interaction with glacial ice. The remaining product of Fee's earliest volcanic activity is a minor portion of pyroclastic rock. This is evidence of explosive volcanism in Fee's eruptive history and records its first volcanic event. The second volcanic event produced a sequence of lavas and breccias on the eastern flank of the main ridge. These volcanics were likely deposited when a sequence of lava flows and broken lava fragments erupted from a volcanic vent and moved down the flanks during the construction of a large volcano. Following extensive dissection, renewed volcanism produced a viscous series of lava flows forming its narrow, flat-topped, steep-sided northern limit and the northern end of the main ridge. The conduit from which these lava flows originated was likely vertical in structure and intruded through older volcanics deposited during Fee's earlier volcanic events. This volcanic event was also followed by a period of erosion, and likely one or more glacial periods. Extensive erosion following the last volcanic event at Mount Fee has created the rugged north-south trending ridge that currently forms a prominent landmark.
Pali Dome, located north and northeast of Mount Cayley, is an eroded volcano in the central Mount Cayley volcanic field. Like Cauldron Dome, it consists of two geological units. Pali Dome East is composed of a mass of andesite lava flows and small amounts of pyroclastic material. It lies on the eastern portion of the Powder Mountain Icefield. Most of the lava flows form gentle topography at high elevations but terminate in finely jointed vertical cliffs at low elevations. The first volcanic activity likely occurred about 25,000 years ago, but it could also be significantly older. The most recent volcanic activity produced a series of lava flows that were erupted when the vent area was not covered by glacial ice. However, the flows show evidence of interaction with glacial ice in their lower units. This indicates that the lavas were erupted about 10,000 years ago during the waning stages of the Fraser Glaciation. The ice-marginal lava flows reach thicknesses of up to 100 m (330 ft). Pali Dome West consists of at least three andesite lava flows and small amounts of pyroclastic material; its vent is presently buried under glacial ice. At least three eruptions have occurred at Pali Dome East. The age of the first volcanic eruption is unknown, but it could have occurred in the past 10,000 years. The second eruption produced a lava flow that was erupted when the vent area was not buried under glacial ice. However, the flow does show evidence of interaction with glacial ice at its lower unit. This indicates that the lavas were erupted during the waning stages of the Fraser Glaciation. The third and most recent eruption produced another lava flow that was largely erupted above glacial ice, but was probably constrained on its northern margin by a small glacier. Unlike the lava flow that was erupted during the second eruption, this lava flow was not impounded by glacial ice at its lower unit. This suggests that it erupted less than 10,000 years ago when the regional Fraser Glaciation retreated.
At least two sequences of basaltic andesite lava flows are deposited south of Tricouni Peak. One of these sequences, known as Tricouni Southwest, creates a cliff on the eastern side of a north-south trending channel with a depth of 200 m (660 ft) adjacent to the High Falls Creek mouth. The eastern flank of the lava flow, outside the High Falls Creek channel, has a more uniform structure. Several fine-scale columnar joints and the overall structure of the lava flow suggest that its western portion, along the length of the channel, ponded against glacial ice. Near its southern unit, lava oozed into cracks in the glacial ice. This has been identified by the existence of spire-like cooling formations, although many of these edifices have been destroyed by erosional processes. Other features that indicate the lava ponded against glacial ice include its unusually thick structure and its steep cliffs. Therefore, the Tricouni Southwest lava flow was erupted about 10,000 years ago when the regional Fraser Glaciation was retreating. The western portion likely displays ice-contact features while the eastern portion does not because the western flank lies in a north-south trending channel, which would have received less solar heating than the unsheltered eastern flank. As a result, the western portion of the lava flow records glaciation during a period when the eastern slopes were free from glacial ice.
Tricouni Southeast, another volcanic sequence south of Tricouni Peak, consists of at least four andesite or dacite lava flows that outcrop as several small cliffs and bluffs on extensively vegetated flanks. They reach thicknesses of 100 m (330 ft) and contain small amounts of hyaloclastite. The feeder vent has not been discovered but is likely located at the summit of the mound. These lavas form ice-marginal edifices, suggesting that every lava flow was erupted about 10,000 years ago when the vast Cordilleran Ice Sheet was retreating and remnants of glacial ice were sparse.
Exposed along the Cheakamus River and its tributaries are the Cheakamus Valley basalts. Although not necessarily mapped as part of the Cayley field, this sequence of basaltic lava flows is geologically similar and comparable in age to volcanic features that are part of this volcanic field. At least four basaltic flows comprise the sequence and were deposited during periods of volcanic activity from an unknown vent between 0.01 and 1.6 million years ago. Pillow lava is abundant along the bases of the flows, some of which are underlain by hyaloclastite breccia. In 1958, Canadian volcanologist Bill Mathews suggested that the lava flows were erupted during periods of subglacial activity and traveled through trenches or tunnels melted in glacial ice of the Fraser Glaciation. Mathews based this on the age of the underlying till, the existence of pillow lava close to the bottom of some lavas, indicating subaqueous volcanism, the columnar jointing at the edges of the lavas, indicating rapid cooling, and the absence of apparent palaeogeography.
The andesite lava of Ember Ridge comprises 55% brownish-green volcanic glass with a trachytic matrix of plagioclase. About 35% of the Ember Ridge andesite consists of phenocrysts of hornblende, augite, plagioclase and orthopyroxene, which occur as isolated crystals and clots. A feature south of Ember Ridge, unofficially known as Betty's Bump, comprises andesite with phenocrysts of plagioclase, augite and olivine. Dark brown volcanic glass composes as much as 20% of the Betty's Bump andesite. The relationship of Betty's Bump with Ember Ridge is unclear, but it likely represents a separate volcanic feature due to its topographic isolation.
Little Ring Mountain at the northern end of the field contains at least 70% brown volcanic glass with isolated phenocrysts of plagioclase. Vesicular textures make up as much as 5%, suggesting that the lava erupted subaerially. Probable xenocrysts of quartz have been identified at the volcano. At least one xenolith fragment has been found in loose rubble at the volcano; it includes several quartz xenocrysts and polycrystalline quartz xenoliths in a glassy matrix with trachytic plagioclase.
The dacite volcanics composing Mount Fee contain as much as 70% brown volcanic glass and as much as 15% vesicular textures. Crystals, including plagioclase, hornblende, orthopyroxene, orthoclase and sporadic quartz, make up about 25% of the volcanics. The orthoclase crystals are interpreted to represent rock fragments that became enveloped during hardening of the dacitic lavas. A portion of the southwestern flank of Mount Fee contains no volcanic glass, but is instead composed of an unusual cryptocrystalline matrix. This indicates that it might have developed as part of a subvolcanic intrusion.
At Ring Mountain, the andesite comprises 70% brown volcanic glass and as much as 15% vesicular textures. The plagioclase matrix is trachytic. Augite, biotite, plagioclase and hornblende occur as microphenocrysts and make up 1% to 7% of the andesite. Small quantities of quartz are common and occur as microxenocrysts. Microxenocrysts of orthoclase likely also exist in the Ring Mountain andesite.
Andesite at Slag Hill consists of 70% dark brown volcanic glass with varied degrees of trachytic texture in the plagioclase matrix, and less than 5% of the andesite comprises vesicular textures. Plagioclase, hornblende and augite occur mostly as phenocrysts and make up 1% to 10% of the andesite. Orthoclase crystals are found occasionally and likely represent xenocrysts.
Geothermal and seismic activity
At least four seismic events have occurred at Mount Cayley since 1985; it is the only volcano in the field with recorded seismic activity. This suggests that the volcano still contains an active magma system, indicating the possibility of future eruptive activity. Although the available data do not allow a clear conclusion, this observation indicates that some volcanoes in the Mount Cayley field may be active, with significant potential hazards. This seismic activity correlates both with some of Canada's most youthful volcanoes and with long-lived volcanoes with a history of significant explosive activity, such as Mount Cayley. Recent seismic imaging by Natural Resources Canada employees supported lithoprobe studies in the region of Mount Cayley that identified a large reflector interpreted to be a pool of molten rock roughly 15 km (9.3 mi) below the surface. It is estimated to be 3 km (1.9 mi) long and 1 km (0.62 mi) wide with a thickness of less than 1.6 km (0.99 mi). The reflector is understood to be a sill complex associated with the formation of Mount Cayley. However, the available data do not rule out the possibility of it being a body of molten rock created by dehydration of the subducted Juan de Fuca Plate. It is located just beneath weak lithosphere, like the bodies found under subduction zone volcanoes in Japan.
At least five hot springs exist in valleys near Mount Cayley, providing more evidence for magmatic activity. This includes springs found at Shovelnose Creek and Turbid Creek on the southern flank of Mount Cayley and Brandywine Creek on the eastern flank of the volcanic field. They are generally found in areas of volcanic activity that are geologically young. As the regional surface water percolates downward through rocks below the Mount Cayley field, it reaches areas of high temperatures surrounding an active or recently solidified magma reservoir. Here, the water is heated, becomes less dense and rises back to the surface along fissures or cracks. These features are sometimes referred to as dying volcanoes because they seem to represent the last stage of volcanic activity as the magma at depth cools and hardens.
Several volcanic features in the Mount Cayley field were illustrated by volcanologist Jack Souther in 1980, including Mount Cayley, Cauldron Dome, Slag Hill, Mount Fee, Ember Ridge and Ring Mountain, which was titled Crucible Dome at the time. This resulted in the creation of a geologic map that showed the regional terrain and locations of the volcanoes. The most detailed study of Mount Cayley took place during this period. Little Ring Mountain at the northernmost end of the field had not been studied at the time and was not included on Souther's 1980 map. Ember Ridge at the southern end of the field was originally mapped as a cluster of five lava domes. The sixth lava dome, Ember Ridge Northeast, was discovered by Ph.D. student Melanie Kelman during a period of research in 2001.
The hot springs adjacent to Mount Cayley have made the volcanic field a target for geothermal exploration. At least 16 geothermal sites have been identified in British Columbia, Mount Cayley being one of the six areas considered most suitable for commercial development. Others include Meager Creek and Pebble Creek near Pemberton, Lakelse Hot Springs near Terrace, Mount Edziza on the Tahltan Highland and the Lillooet Fault Zone between Harrison Lake and the community of Lillooet. Temperatures of 50 °C (122 °F) to more than 100 °C (212 °F) have been measured in shallow boreholes on the southwestern flank of Mount Cayley. However, its severe terrain makes it challenging to develop a proposed 100 megawatt power station in the area.
The line of volcanoes has been the subject of myths and legends of the First Nations. To the Squamish Nation, Mount Cayley is called tak'takmu'yin tl'a in7in'axa7en. In their language it means "Landing Place of the Thunderbird". The Thunderbird is a legendary creature in the history and culture of North American indigenous peoples. When the bird flaps its wings, thunder is created, and lightning originates from its eyes. The rocks that make up Mount Cayley were said to have been burnt black by the Thunderbird's lightning. This mountain, like others in the area, is considered sacred because it plays an important part in their history. The Black Tusk, a pinnacle of black volcanic rock on the north shore of Garibaldi Lake to the southeast, bears the same name. Cultural ceremonial use, hunting, trapping and plant gathering occur around the Mount Garibaldi area, but the most important resource was a lithic material called obsidian. Obsidian is a black volcanic glass used to make knives, chisels, adzes and other sharp tools in pre-contact times. Glassy rhyodacite was also collected from a number of minor outcrops on the flanks of Mount Fee, Mount Callaghan and Mount Cayley. This material appears in goat hunting sites and at the Elaho rockshelter, collectively dated from about 8,000 to 100 years old.
A number of volcanic peaks in the Mount Cayley field were named by mountaineers that explored the area in the early 20th century. Mount Fee was named in September 1928 by British mountaineer Tom Fyles after Charles Fee (1865–1927), who was a member of the British Columbia Mountaineering Club in Vancouver at the time. To the northwest, Mount Cayley was named in September 1928 by Tom Fyles after Beverley Cochrane Cayley during a climbing expedition with the Alpine Club of Canada. Cayley was a friend of those in the climbing expedition and had died in Vancouver on June 8, 1928 at the age of 29. Photographs of Mount Cayley were taken by Fyles during the 1928 expedition and were published in the 1931 Canadian Alpine Journal Vol XX.
Protection and monitoring
At least one feature in the Mount Cayley volcanic field is protected as a provincial park. Brandywine Falls Provincial Park at the southeastern end of the field was established to protect Brandywine Falls, a 70 m (230 ft) high waterfall on Brandywine Creek. It is composed of at least four lava flows of the Cheakamus Valley basalts. They are exposed in the cliffs surrounding the falls, with a narrow sequence of gravel lying above the oldest lava unit. These lava flows are interpreted to have been exposed by erosion during a period of catastrophic flooding, and the valley these lavas are located in is significantly larger than the river within it. The massive flooding that shaped the valley has been a subject of geological studies by Catherine Hickson and Andree Blais-Stevens. It has been proposed that there could have been significant floods during the waning stages of the last glacial period as drainage in a valley further north was blocked with remnants of glacial ice. Another possible explanation is that subglacial eruptions created large amounts of glacial meltwater that scoured the surface of the exposed lava flows.
Like other volcanic zones in the Garibaldi Belt, volcanoes in the Mount Cayley field are not monitored closely enough by the Geological Survey of Canada to ascertain how active their magma systems are. This is partly because the field is located in a remote region and no major eruptions have occurred in Canada in the past few hundred years. As a result, volcano monitoring is considered less important than dealing with other natural processes, including tsunamis, earthquakes and landslides. However, given the existence of earthquakes, further volcanism is expected and would probably have considerable effects, particularly in a region like southwestern British Columbia where the Garibaldi Belt is located in a highly populated area. Because of these concerns, significant support from Canadian university scientists has resulted in the construction of a baseline of knowledge on the state of the Garibaldi volcanoes. This work is ongoing and will support efforts to monitor volcanoes in the Mount Cayley field for future volcanism.
The Mount Cayley field is one of the largest volcanic zones in the Garibaldi Belt. Smaller zones include the Garibaldi Lake volcanic field surrounding Garibaldi Lake and the Bridge River Cones on the northern flank of the upper Bridge River. These areas are adjacent to Canada's populated southwest corner where the population of British Columbia is the greatest.
A large volcanic eruption from any volcanoes in the Mount Cayley field would have major effects on the Sea-to-Sky Highway and municipalities such as Squamish, Whistler, Pemberton and probably Vancouver. Because of these concerns, the Geological Survey of Canada is planning to create hazard maps and emergency plans for Mount Cayley, as well as Mount Meager north of the volcanic field, which experienced a major volcanic eruption 2,350 years ago similar to the 1980 eruption of Mount St. Helens.
Like many other volcanoes in the Garibaldi Volcanic Belt, Mount Cayley has been the source for several large landslides. To date, most geological studies of the Mount Cayley field have focused on landslide hazards along with geothermal potential. A major debris avalanche about 4,800 years ago dumped 8 km2 (3.1 sq mi) of volcanic material into the adjacent Squamish valley. This blocked the Squamish River for a long period of time. Evans (1990) has indicated that a number of landslides and debris flows at Mount Cayley in the past 10,000 years might have been caused by volcanic activity. Since the large debris avalanche 4,800 years ago, a number of more minor landslides have occurred at it, including one 1,100 years ago and another event 500 years ago. Both landslides ultimately blocked the Squamish River and created lakes upstream that lasted for a limited amount of time. In 1968 and 1983, a series of landslides took place that caused considerable damage to logging roads and forest stands, but did not result in any casualties. Future landslides from Mount Cayley and potential damming of the Squamish River are significant geological hazards to the general public, as well as to the economic development in the Squamish valley.
Eruptive activity in the Mount Cayley volcanic field is typical of past volcanism elsewhere in the Garibaldi Belt. Earthquakes would occur under the volcanic field weeks to years in advance as molten rock intrudes through the Earth's rocky lithosphere. The extent of the earthquakes recorded on local seismographs in this region would warn the Geological Survey of Canada and possibly prompt an upgrade in monitoring. As molten rock breaks through the crust, the part of the volcano vulnerable to an eruption would probably swell and the area would rupture, producing much more hydrothermal activity at the regional hot springs and forming new springs or fumaroles. Small but potentially significant rock avalanches could result and could dam the nearby Squamish River for a limited amount of time, as has happened in the past without seismic activity or deformation related to magmatic activity. At some point the subsurface magma would produce phreatic eruptions and lahars. At this stage Highway 99 would be out of service and the residents of Squamish would have to move away from the eruptive zone.
As molten rock comes closer to the surface, it would most likely cause more fragmentation, triggering an explosive eruption that could produce an eruption column with an elevation of 20 km (12 mi) and might be sustained for 12 hours. A well-documented explosive eruption of such force in the Garibaldi Belt is the eruption of Mount Meager 2,350 years ago, which deposited ash as far east as Alberta. Such an eruption would endanger air traffic, which would have to be rerouted away from the eruptive zone. Any airport buried under pyroclastic fall would be out of service, including those in Vancouver, Victoria, Kamloops, Prince George and Seattle. The tephra would destroy power transmission lines, satellite dishes, computers and other equipment that operates on electricity; telephones, radios and cell phones would therefore be cut off. Structures not built to hold heavy material would likely collapse under the weight of the tephra. Ash from the eruption plume would collapse above the vent area to create pyroclastic flows that would travel east and west down the nearby Cheakamus and Squamish river valleys. These would likely have significant impacts on salmon in the associated rivers and would cause considerable melting of glacial ice, producing debris flows that could extend into Daisy Lake and Squamish and cause significant damage. The eruption plume would then travel eastward and disrupt air travel throughout Canada from Alberta to Newfoundland and Labrador.
Explosive eruptions may decrease in intensity and be followed by the eruption of viscous lava, forming a lava dome in the newly formed crater. Precipitation would frequently trigger lahars, which would continuously create problems in the Squamish and Cheakamus river valleys. If the lava dome continued to grow, it would eventually rise above the crater rim. As the cooling lava expands, it could produce landslides, creating a massive zone of blocky talus in the Squamish river valley. While the lava dome grows, it would frequently collapse to create large pyroclastic flows that would again travel down the adjacent Squamish and Cheakamus river valleys. Ash swept up from the pyroclastic flows would create ash columns with elevations of at least 10 km (6.2 mi), repeatedly depositing tephra on the communities of Whistler and Pemberton and again disrupting regional air traffic. The unstable lava dome may occasionally produce minor pyroclastic flows, explosions and eruption columns. The community of Squamish would be abandoned, Highway 99 would be out of service and destroyed, and traffic between Vancouver, Pemberton and Whistler would remain forced onto a route to the east that is longer than Highway 99.
Eruptions would likely continue for a period of time, followed by years of decreasing secondary activity. Portions of the solidifying lava dome would occasionally collapse to create pyroclastic flows. Rubble on the flanks of the volcano and in valleys would occasionally be remobilized to form debris flows. Major construction would be needed to repair the community of Squamish and Highway 99.
- List of Cascade volcanoes
- List of volcanoes in Canada
- Geology of British Columbia
- Callaghan Valley
- Lillooet Ranges
- Squamish volcanic field
- Volcanism of Western Canada
- Kelman, M.C.; Russell, J.K.; Hickson, C.J. (2001). Preliminary petrography and chemistry of the Mount Cayley volcanic field, British Columbia. Geological Survey of Canada. 2001-A11 (Natural Resources Canada). pp. 2, 3, 4, 7, 8, 14. ISBN 0-662-29791-1. Retrieved 2014-07-27.
- "Cascadia Subduction Zone". Geodynamics. Natural Resources Canada. 2008-01-15. Retrieved 2010-03-06.
- "Pacific Mountain System – Cascades volcanoes". United States Geological Survey. 2000-10-10. Retrieved 2010-03-05.
- Dutch, Steven (2003-04-07). "Cascade Ranges Volcanoes Compared". University of Wisconsin. Retrieved 2010-05-21.
- "The M9 Cascadia Megathrust Earthquake of January 26, 1700". Natural Resources Canada. 2010-03-03. Retrieved 2010-03-06.
- "Slag Hill". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-04.
- "Slag Hill tuya". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-08.
- "Cauldron Dome". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-07.
- "Ring Mountain (Crucible Dome)". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-07.
- "Little Ring Mountain". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-08.
- "Ember Ridge North". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-28.
- "Ember Ridge Northeast". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-28.
- "Ember Ridge Northwest". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-28.
- "Ember Ridge Southeast". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-28.
- "Ember Ridge Southwest". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-28.
- "Ember Ridge West". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-28.
- "Mount Brew". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-04-16.
- Smellie, J.L.; Chapman, Mary G. (2002). Volcano-Ice Interaction on Earth and Mars. Geological Society of London. p. 201. ISBN 1-86239-121-1.
- "Garibaldi Volcanic Belt: Mount Cayley volcanic field". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-04-07. Retrieved 2010-04-12.[dead link]
- Wood, Charles A.; Kienle, Jürgen (2001). Volcanoes of North America: United States and Canada. Cambridge, England: Cambridge University Press. p. 142. ISBN 978-0-521-43811-7. OCLC 27910629.
- "Mount Fee". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-03.
- "Pali Dome East". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-07.
- "Pali Dome West". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-07.
- "Tricouni Southwest". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-05-16.
- "Tricouni Southeast flows". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-05-16.
- Stelling, Peter L.; Tucker, David Samuel (2007). "Floods, Faults, and Fire: Geological Field Trips in Washington State and Southwest British Columbia". Current Research, Part A (Geological Society of America): 12, 13, 14. ISBN 978-0-8137-0009-0.
- Etkin, David; Haque, C.E.; Brooks, Gregory R. (2003-04-30). An Assessment of Natural Hazards and Disasters in Canada. Springer Science+Business Media. pp. 579, 580, 582. ISBN 978-1-4020-1179-5. Retrieved 2014-07-27.
- "Volcanology in the Geological Survey of Canada". Volcanoes of Canada. Natural Resources Canada. 2007-10-10. Retrieved 2010-04-13.[dead link]
- Monger, J.W.H. (1994). "Character of volcanism, volcanic hazards, and risk, northern end of the Cascade magmatic arc, British Columbia and Washington State". Geology and Geological Hazards of the Vancouver Region, Southwestern British Columbia. Natural Resources Canada. pp. 232, 236, 241. ISBN 0-660-15784-5.
- P.T.C.; Clowes, R.M. (1996). "Seismic reflection investigations of the Mount Cayley bright spot: A midcrustal reflector beneath the Coast Mountains, British Columbia". Journal of Geophysical Research: Solid Earth (American Geophysical Union) 101 (B9): 20119–20131. ISSN 0148-0227.
- "Callaghan Lake Provincial Park: Background Report" (PDF). Terra Firma Environmental Consultants. 1998-03-15. p. 6. Retrieved 2010-04-27.
- "Geysers, Fumaroles, and Hot Springs". United States Geological Survey. 1997-01-31. Retrieved 2010-04-27.
- "BC Hydro Green & Alternative Energy Division" (PDF). BC Hydro. 2002. p. 20. Retrieved 2010-04-27.
- Yumks; Reimer, Rudy (April 2003). "Squamish Traditional Use Study: Squamish Traditional Use of Nch'kay Or the Mount Garibaldi and Brohm Ridge Area" (PDF). Draft. First Heritage Archaeological Consulting. p. 17. Retrieved 2010-03-30.
- Reimer/Yumks, Rudy. "Squamish Nation Cognitive Landscapes" (PDF). McMaster University. pp. 8, 9. Retrieved 2008-05-19.
- "Mount Fee". BC Geographical Names Information System. Government of British Columbia. Retrieved 2010-07-22.
- "Mount Cayley". BC Geographical Names Information System. Government of British Columbia. Retrieved 2010-07-22.
- "Monitoring volcanoes". Volcanoes of Canada. Natural Resources Canada. 2009-02-26. Retrieved 2010-03-24.
- "Garibaldi volcanic belt". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-04-02. Retrieved 2010-02-20.
- G. Evans, S.; Brooks, G. R. (1992). "Prehistoric debris avalanches from Mount Cayley volcano, British Columbia:1 Reply". Natural Resources Canada. p. 1346. Retrieved 2010-03-03.
- Monger, J.W.H. (1994). "Geology and geological hazards of the Vancouver region, southwestern British Columbia" (PDF). Natural Resources Canada. pp. 270, 272. Retrieved 2010-04-26.
- "Photo Collection". Landslides. Natural Resources Canada. 2007-02-05. Retrieved 2010-03-03.
- "Garabaldi volcano belt: Mount Meager volcanic field". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-04-01. Retrieved 2010-05-12.
Wikimedia Commons has media related to Mount Cayley volcanic field.
- "Garibaldi Volcanic Belt". Map of Canadian volcanoes. Natural Resources Canada. 2005-08-20. Retrieved 2010-07-30.
- "Garibaldi Volcanic Belt (Mount Cayley area)". Map of Canadian volcanoes. Natural Resources Canada. 2005-08-20. Retrieved 2010-07-30. |
Scientists working at Hewlett-Packard have announced that they have discovered a fourth basic type of electrical circuit – one that might create a computer that "remembers" where it was and doesn’t need to boot up.
Electronics theory recognizes three fundamental elements of a passive circuit: resistors, capacitors and inductors. Leon Chua, a scientist at U.C. Berkeley, proposed in the 1970s that there should also be a fourth fundamental element, known as a memory resistor or memristor, and he derived the mathematical equations for it. A team of scientists at HP has now demonstrated the existence of "memristance." The team, led by Stanley Williams, claims that the memristor's properties are very different from those of any other electrical device. Williams and his team have developed a mathematical model and a physical example of a memristor, which they describe in the journal Nature.
Williams compared memristor properties to water flowing through a garden hose. In a regular circuit, water can flow from either direction. But in a memory resistor, the hose "remembers" which direction the water/current is flowing from, and it expands in that direction to improve the flow. Likewise, if water or current flows from the other direction, the hose shrinks in that direction. According to Williams, "It remembers both the direction and the amount of charge that flows through it. ...That is the memory."
"It's very different from any other electrical device," Williams said of his memristor. "No combination of resistor, capacitor or inductor will give you that property." Williams and the HP team indicated that the memristor finding could lead to the development of a new kind of computer memory that would not need to boot up. The contents of the DRAM used in conventional computers are lost when power to the computer is turned off, and must be reloaded from the hard drive when the computer is turned back on. But with the use of the memristor in a memory circuit, the computer would not lose its place, even after the power was turned off and then back on.
According to Williams, "It's essential that people understand this to be able to go further into the world of nanoelectronics. It turns out that memristance, this property, gets more important as the device gets smaller. That is another major reason it took so long to find," Williams said.
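As a rough illustration of the behavior Williams describes (a resistance set by how much charge has passed through the device, and retained when the current stops), here is a minimal numerical sketch of the linear dopant-drift memristor model commonly used to describe such devices. All parameter values are illustrative assumptions, not figures from the article.

```python
# Minimal linear-drift memristor sketch: resistance depends on the charge that has
# flowed through the device and is retained when the drive current stops.
# Parameter values are illustrative assumptions, not figures from the HP announcement.

R_ON, R_OFF = 100.0, 16_000.0   # ohms: limiting low/high resistance states (assumed)
D = 10e-9                       # m: thickness of the resistive film (assumed)
MU_V = 1e-14                    # m^2/(V*s): dopant mobility (assumed)
K = MU_V * R_ON / D             # state-change rate per unit current

def memristance(w: float) -> float:
    """Resistance is a mix of the low- and high-resistance regions, weighted by the state w."""
    return R_ON * (w / D) + R_OFF * (1.0 - w / D)

def apply_current(w: float, amps: float, seconds: float, dt: float = 1e-6) -> float:
    """Integrate the state variable while a constant current flows (simple Euler steps)."""
    for _ in range(round(seconds / dt)):
        # The charge passed each step (amps * dt) moves the doped/undoped boundary w;
        # reversing the current moves it back, which is the "memory".
        w = min(D, max(0.0, w + K * amps * dt))
    return w

w = 0.5 * D                                      # start halfway between the two states
print(f"initial:            {memristance(w):6.0f} ohms")
w = apply_current(w, amps=+0.01, seconds=1e-3)   # forward current lowers the resistance
print(f"after +10 mA pulse: {memristance(w):6.0f} ohms")
w = apply_current(w, amps=0.0, seconds=1e-3)     # no current: the state, and the memory, persists
print(f"after rest:         {memristance(w):6.0f} ohms")
w = apply_current(w, amps=-0.01, seconds=1e-3)   # reverse current raises the resistance again
print(f"after -10 mA pulse: {memristance(w):6.0f} ohms")
```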
Hazardous substances are classified based only on health effects (whether they are immediate or long term), while dangerous goods are classified according to their immediate physical or chemical effects, such as fire, explosion, corrosion and poisoning, affecting property, the environment or people.
"Hazardous Substances" have the potential to harm human health. They may be solids, liquids or gases; they may be pure substances or mixtures. When used in the workplace, these substances often produce vapours, fume, dusts and mists. There are many industrial, laboratory and agricultural chemicals which are classified as hazardous. Hazardous substances may cause immediate or long-term health effects. Exposure could result in:
- chemical burns;
- birth defects; or
- diseases of certain organs such as the skin, lungs, liver, kidneys and nervous system.
Dangerous goods are substances that may be corrosive, flammable, explosive, spontaneously combustible, toxic, oxidising, or water-reactive. These goods can be deadly: they can seriously injure or kill people, and damage property and the environment. Many dangerous substances are covered by the Dangerous Goods Act 1985 and the Dangerous Goods (Storage and Handling) Regulations 2012, as well as other regulations covering the transport of these substances. For more information, go to the Dangerous Goods topic information page on the WorkSafe website.
Hazardous substances and dangerous goods are covered by separate legislation, each focussing on controlling the different risks associated with them. Many hazardous substances are also classified as dangerous goods, so both pieces of legislation apply to these.
Hazardous substances are defined in the Regulations as those either listed on the Hazardous Substances Consolidated Lists (alphabetical or by CAS number), or meeting the criteria for a hazardous substance set out in the Approved Criteria for Classifying Hazardous Substances [NOHSC:1008(2004)] 3rd Edition, and/or having national exposure standards declared under the Adopted National Exposure Standards for Atmospheric Contaminants in the Occupational Environment [NOHSC:1003(1995)].
The lists can be accessed from the Hazardous Substances Information System (HSIS) website.
To find out whether a substance is hazardous, check either the label or the Material Safety Data Sheet (MSDS).
- The full text of the Dangerous Goods Act and Regulations is accessible from the Victorian Government Legislation Repository website. For the regs, click "Statutory Rules" and then "D" and the full list will come up.
- A number of Fact Sheets on Dangerous Substances (available in a number of different languages) from the European Agency for Safety and Health at Work:
- Factsheet 33: An introduction to dangerous substances in the workplace
- Factsheet 34: Elimination and substitution of dangerous substances
- Factsheet 35: Communicating information about dangerous substances
- All factsheets can be accessed from this page.
Last amended June 2015 |
The History of Kwanzaa
A non-religious holiday, Kwanzaa celebrates African-American heritage, pride, community, family, and culture. The seven-day festival commences the day after Christmas and culminates on New Year's Day.
Inspired by the civil rights struggles of the 1960s and based on ancient African celebrations, Kwanzaa has become increasingly popular over the last decade. More than 20 million people celebrate in the United States, Canada, England, the Caribbean and Africa.
Kwanzaa's ancient roots lie in African first-fruit harvest celebrations, from which it takes its name. The word Kwanzaa is derived from the Swahili phrase "matunda ya kwanza," which means "first fruits."
Those roots are the foundation on which the modern holiday was built. Maulana Karenga, an African-American scholar and activist, conceived Kwanzaa in 1966 following the Watts riot. Currently, Karenga is chairman of the Department of Black Studies at California State University at Long Beach.
Karenga says Kwanzaa is organized around five fundamental activities common to other African first-fruit celebrations:1
- the ingathering of family, friends, and community;
- reverence for the creator and creation (including thanksgiving and recommitment to respect the environment and heal the world);
- commemoration of the past (honoring ancestors, learning lessons and emulating achievements of African history);
- recommitment to the highest cultural ideals of the African community (for example, truth, justice, respect for people and nature, care for the vulnerable, and respect for elders); and
- celebration of the "Good of Life" (for example, life, struggle, achievement, family, community, and culture).
Sources: Associated Press, Los Angeles Times, Chicago Tribune, United Press International, San Francisco Chronicle, Encarta 96 Encyclopedia
1From Karenga's contribution to Encarta 96 Encyclopedia
© 1996 Cable News Network, Inc.
All Rights Reserved. |
Expendable Launch Vehicle Operations
While development of the lunar program moved forward, unmanned launches continued unabated from Cape Canaveral. Part of the original Vanguard Naval Research Laboratory team became the Launch Operations Branch of the Goddard Space Flight Center after NASA was established in 1958. This team became a part of the Kennedy Space Center in October 1965.
The team completed its planned series of Vanguard launches while NASA was developing more powerful vehicles. Because they could be used only once and their components were not recoverable, these unmanned rockets also were referred to as expendable launch vehicles (ELVs). In April 1959, NASA awarded a contract to the Douglas Aircraft Co. for the design, fabrication, testing and launch of an improved version of the three-stage Thor-Able expendable booster, to be called Thor-Delta (later simply the Delta). The first stage was a Thor Intermediate Range Ballistic Missile and the second and third stages were modifications of the second and third stages of the Vanguard. The Goddard group was given the responsibility for supervising the checkout and launch of Delta vehicles. They were still performing these tasks when phased over into KSC.
The Delta became the "workhorse" of NASA's ELV family, undergoing a number of upgrades in power until it could place 2,800 pounds (1,270 kilograms) in geosynchronous transfer orbit, more than 20 times its original payload capability. Of the 182 Delta launches NASA conducted through 1988, 170 were successes.
While the Cape was the primary Delta launch site, many were also launched from the West Coast. NASA used launch pads at Vandenberg Air Force Base in California to achieve polar or other north-south orbits required for certain meteorological and Earth resources satellites, as well as cosmic explorer spacecraft.
The second vehicle added to the NASA unmanned medium launch vehicle category was the Thor-Agena. The Agena was a powerful upper stage developed for the Air Force by the Lockheed Propulsion Co. It used liquid propellants and had inflight shutdown and restart capabilities. A test flight on Jan. 15, 1962, achieved most of its objectives. After two test flights, a Thor-Agena placed the huge Echo 2 balloon into orbit on Jan. 25, 1964. This spherical balloon, 135 feet (41.1 meters) in diameter, remained in orbit for two years and was the object of numerous early communications experiments by scientists from the United States, the United Kingdom, and the Soviet Union. The Thor-Agena remained in service until April 1970, with a total of 12 operational missions. This was the first NASA vehicle to be launched from Vandenberg Air Force Base. It was also the first NASA vehicle to have solid propellant rockets strapped to its first stage for additional thrust, a technique that became standard with the Delta.
Almost concurrently with the Thor-Agena, NASA developed the Atlas-Agena, a much more powerful vehicle. The Atlas stage had been developed as an Intercontinental Ballistic Missile by General Dynamics/Convair for the Air Force. When mated with the Agena, the vehicle had the capability of placing spacecraft in lunar or interplanetary trajectories. The first operational launch was on Jan. 30, 1964, a Ranger mission to impact on the Moon and take photographs during the descent to the surface. The vehicle performed well, but the Ranger camera system failed. The second Ranger mission was a success, however, returning the first close-up photographs of the lunar surface. There were 19 Atlas-Agena missions in all, including four Rangers to the Moon; five Lunar Orbiters, the first spacecraft from the United States to enter orbit around and photograph another planetary body; the first Mariner spacecraft sent to Venus and Mars; the first three Applications Technology Satellites in Earth orbit; the first Orbiting Astronomical Observatory; and three Orbiting Geophysical Observatories. The last NASA Atlas-Agena was launched on March 4, 1968.
The Air Force also continued work on vehicle development programs, though its aims and capabilities were dissimilar to those of NASA. One early project was a hydrogen-burning stage called Centaur, which would be far more powerful than stages using less-volatile kerosene fuel. The service had inherited this project as an engine development effort from the National Advisory Committee for Aeronautics, NASA's predecessor. On July 1, 1959, the Air Force transferred the Centaur development program to NASA, in effect returning it to its originators. Marshall Space Flight Center was assigned management responsibility initially. The project was transferred to the Lewis Research Center between the first and second launches.
The Atlas-Centaur required new facilities, and Launch Complex 36, with two pads, was built on Cape Canaveral to accommodate it. The Atlas-Centaur development program was one of the most difficult in NASA history. The first launch on May 8, 1962, was a failure. Three of the next five missions, although trouble-plagued, were successes; one was a failure and the other only partially successful. After each flight, the analysis of system failures contributed to an overall understanding of vehicle performance. Engineering modifications and changes in the checkout procedures were instituted to correct the problems. Atlas-Centaur 8 failed when the Centaur engines did not ignite for a second burn after a coast period of several minutes in low Earth orbit. But the design engineers and launch operations people were so certain they could correct the problem that the next flight, Atlas-Centaur 10, was scheduled for an important mission. On May 30, 1966, this vehicle carried the first Surveyor spacecraft to a spectacular accomplishment: the first soft landing of an American spacecraft on the Moon.
After that tremendous success, the overall record of the new vehicle became quite good. Atlas-Centaur achieved seven successful launches of the Surveyors, of which five went on to land safely on the Moon. The vehicle sent Mariner spacecraft to Venus, Mercury and Mars and Pioneer spacecraft to Jupiter and Saturn, feats that greatly enriched our understanding of the solar system. It also carried into Earth orbit many spacecraft too heavy for the other available vehicles. These included the very heavy Orbiting Astronomical Observatories, INTELSAT communications satellites, the larger Applications Technology Satellites, and many more.
In 1970, NASA planners foresaw a need for an unmanned launch vehicle with greater capability than the Delta or Atlas-Centaur. Several planned future missions, specifically the Viking automated laboratories to explore the Martian surface and atmosphere and two Voyagers to observe Jupiter and Saturn, would require spacecraft too heavy for the Delta or Atlas-Centaur to carry. At that time the Space Shuttle was only a future possibility for NASA. After studying various alternatives, NASA decided that the fastest, most economical way to obtain the new heavy lift vehicle required would be to combine two existing systems. The planners decided to replace the small third stage on the Air Force's Titan IIIC vehicle with the far more powerful Centaur upper stage of the Atlas-Centaur. The new combination, called the Titan-Centaur, became the most powerful vehicle available in the United States' unmanned space program at that time.
In addition to the Vikings and Voyagers, Titan-Centaur launched two Helios spacecraft to study the Sun.
The launches of the two Voyager spacecraft to the outer planets in 1977 were the final assignments for Titan-Centaurs. Launch Complex 41, an Air Force launch site on Cape Canaveral, was borrowed by NASA from 1974 until 1977 for Titan-Centaur launches. Martin Marietta was the contractor for the Titan stages, and General Dynamics/Convair for the Centaur. Complex 41 later became the launch site for the most powerful unmanned U.S. rocket, the Titan IV, developed by Martin Marietta for the Air Force.
Launch of an expendable rocket did not carry the risk associated with placing humans in orbit. Nevertheless, it was a complex task involving many people and costing tens of millions of dollars. Integration and checkout of a NASA unmanned booster and its payload required that launch directorate personnel work closely with the spacecraft designers to prepare needed ground support systems. This was a process that began not just days or months, but sometimes years ahead of the actual launch date.
The careful advance preparation continued when the spacecraft was delivered to Cape Canaveral well ahead of the scheduled liftoff date, so it could be assembled and checked out by the manufacturer or owner. On scientific spacecraft, the scientists responsible for individual instruments and experiments often participated in the spacecraft checkout and launch activities.
Expendable launch operations personnel also had to determine radar and photographic requirements. Support provided by the Eastern and Western Test Ranges, operated by the U.S. Air Force, had to be coordinated.
Companies which built the launch vehicles were an integral part of the ELV launch team. The test conductor was always a contractor employee, while the launch director was always from NASA. Both NASA and contractor engineering staffs were on hand to resolve technical problems. Overall direction came from the launch director, while the test conductor manned the key console and provided detailed instructions to the launch team.
The ELV program also involved a close working relationship between KSC and other NASA centers. One of these was the Goddard Space Flight Center in Greenbelt, Md., which oversaw the design and development of the Delta for NASA. Goddard also supervised the design and packaging of many scientific and technological satellites, and continues to perform this activity today. The Lewis Research Center in Cleveland oversaw the design and development of the Atlas family of vehicles, as well as the Centaur upper stage.
The Jet Propulsion Laboratory at Pasadena, Calif., was and still is the control facility for the Deep Space Network used for tracking planetary exploration spacecraft such as the Voyagers, Vikings and Mariners. The Ames Research Center near San Francisco also plays an important part in planetary investigations. This has included designing and developing the Pioneer spacecraft which returned the first detailed information on the planets Venus, Jupiter and Saturn. The Langley Research Center in Hampton, Va., managed the design and construction of the Viking Landers, two spacecraft that sent back fantastic imagery and physical measurements from the surface of Mars, and the Lunar Orbiters whose photography of the Moon paved the way for the Apollo astronauts.
The original experienced launch team KSC inherited from Vanguard has gained and lost personnel through the years, and policy surrounding NASA expendable launch vehicle operations has undergone periodic redefinition. One noticeable change that occurred was the emergence of a large number of "reimbursable" launches-those undertaken for commercial customers, foreign governments or agencies, and other branches of the U.S. government.
These "customers" would usually buy or supply their own spacecraft, purchase the launch vehicle and service from NASA, and pay the expenses associated with launching their payload. Although NASA did not make a profit from these services, the numbers involved enabled the production lines to operate at a relatively high rate, resulting in mass production efficiencies which reduced the cost of each stage. Those which NASA purchased for its own use were then less expensive than they would be otherwise.
The owner assumed control of the satellite after it was placed in orbit. Most of the reimbursable missions involved applications satellites, derived from initial scientific investigations performed by NASA. The percentage of such missions peaked in 1980, when 83 percent of the spacecraft were launched on a reimbursable basis.
Manned space flight also had an impact on the unmanned payloads field. In January 1979, following a management reorganization, the unmanned launch operations directorate was assigned the added responsibility of processing payloads for the then nascent Space Shuttle program.
With the advent of the manned Shuttle, NASA envisioned that reliance on expendable launch vehicles would decline. The Shuttle concept offered a unique advantage over expendable launch vehicles: it was a reusable resource.
The first Shuttle flight occurred in April 1981. As one successful mission followed another, NASA implemented a corresponding phase-down in ELV launches.
An alternate role for expendable launch vehicles began to emerge in the early 1980s. In January 1983, the administration of President Ronald Reagan announced that the federal government would encourage private industry to build and operate ELVs to deliver commercial payloads -- such as communications satellites -- into orbit. Government facilities, including those at Cape Canaveral and Vandenberg, would be made available to private industry on a cost-reimbursable basis. The military also now regarded unmanned rockets as an invaluable backup to the Shuttle.
Further influencing government policy regarding ELVs was the Space Shuttle Challenger accident in January 1986. Six months later, NASA gave the commercial ELV industry a needed boost when the agency announced that commercial payloads such as communications satellites would no longer be deployed from the Shuttle.
About the same time, the Air Force announced plans to use unmanned vehicles for many of its future payloads. These decisions opened the door wide, not only to the big three American ELV manufacturers -- General Dynamics, builder of the Atlas family of vehicles, Martin Marietta, the Titan, and McDonnell Douglas, the Delta -- but other entrepreneurial firms eager to join the space race. These firms would not have to compete against the Space Shuttle to launch commercial payloads, although formidable competition was emerging from established or developing foreign firms and agencies.
NASA reassessed its own policy toward unmanned expendable rockets. The agency concluded that a mixed fleet of launch vehicles, rather than reliance on a single system, the Shuttle, was the best approach. In 1987, NASA announced its mixed fleet plan. "Expendable vehicles will help assure access to space, add flexibility to the space program, and free the Shuttle for manned scientific, Shuttle-unique, and important national security missions," NASA Administrator Dr. James Fletcher said.
The plan calls for procuring ELV launch service competitively whenever possible, except for a transitional first phase covering launches through 1991. During this transition, NASA will procure ELV launch service non-competitively to address a backlog of space science missions. The agency's role will be the same throughout both phases, however: oversight of vehicle manufacture, preparation and launch, rather than direct management of it, as was the case in the past.
Under the transitional first phase, NASA will go through either the Air Force or directly to the vehicle manufacturer to obtain the best match between vehicle and payload for the time frame in which each will be needed. Vehicles being produced under contracts for the military, such as the Titan II, Titan IV, Delta II, Atlas E and Atlas II, will be procured via the Air Force.
If NASA should buy a commercially available vehicle, the manufacturer will provide not only the vehicle, but launch service as well. Falling into this category are several variants of expendable rockets originally designed and built for either the Air Force or NASA and now being marketed commercially, such as the Titan III and Atlas I (formerly the Atlas-Centaur). Some vehicle types, like the Delta II and Atlas II, are simultaneously being produced under contracts to the Air Force and offered for commercial launch service.
An already strong legacy will gain greater luster in the next decade when a number of exciting and vital space science missions are launched for NASA on ELVs as part of the first phase. A Delta II booster procured through the Air Force will carry into space the Roentgen Satellite (ROSAT), an X-ray telescope that, once in orbit, will allow scientists to study such phenomena as the high X-ray luminosity of stars that otherwise appear to be identical to the sun. ROSAT will build on data provided by the High Energy Astronomy Observatory series, launched in the late 1970s on Atlas-Centaurs. West Germany is providing the telescope and the spacecraft, and the United Kingdom one of the focal instruments. The United States is procuring the launch vehicle and services, and also will provide one focal instrument.
NASA-sponsored exploration of the red planet, Mars, will resume with the Mars Observer mission in the early 1990s aboard a commercial Titan III rocket. Mars Observer will circle the planet for two years in a low, near-circular polar orbit, mapping the planetary surface as it changes with the seasons. Eight instruments will measure and investigate characteristics such as elements, minerals, cloud composition, and the nature of the Martian magnetic field.
Long-range plans call for NASA to purchase through the Air Force the most powerful American-made unmanned booster, the Titan IV-Centaur. The agency wants to launch two Titan IV-Centaurs in the mid-1990s as part of an ongoing program of solar system exploration.
The first planned mission is the Comet Rendezvous and Asteroid Flyby (CRAF), which will yield new insights into two types of smaller bodies in our solar system. En route to its rendezvous with the comet Kopff, CRAF will fly by the asteroid 449 Hamburga. It will take photographs and scientific measurements of the asteroid, which is only 55 miles (88.5 kilometers) in diameter. Once at Kopff, CRAF will spend three years flying alongside the comet. Scientists will be able to study a body containing what could be some of the original matter left behind when our solar system was formed nearly 5 billion years ago.
The second Titan IV-Centaur is slated for the Cassini mission, named after a French-Italian astronomer who discovered several of Saturn's moons. Cassini will carry out a four-year tour of Saturn and its moons. It will also send a probe through the dense atmosphere surrounding Titan, the largest of Saturn's satellites, to collect data and provide a preliminary map of its surface. Cassini will be a joint effort between NASA and the European Space Agency.
In the second phase of the mixed fleet plan, ELV launch manufacturers will compete to launch a class of payload in a particular weight category-small, medium, intermediate and large. As in the transitional phase, the winning company will provide complete launch service, from building the ELV to launching it.
General Dynamics was the first ELV builder to receive an order under the second phase. In October 1987, NASA announced that it had chosen the Atlas I over Martin Marietta's Titan booster to launch a series of meteorological satellites for the National Oceanic and Atmospheric Administration (NOAA). At least three of the Geostationary Operational Environmental Satellite (GOES) spacecraft will be launched on Atlas Is from Launch Complex 36, and the number could reach five. Also in 1987, General Dynamics completed negotiations with NASA for use of the Launch Complex 36 facilities for commercial launches.
The two NASA centers which oversaw design and development of the Delta and Atlas rockets will still be involved with ELVs, but with a different slant. Goddard Space Flight Center is managing both competitive and non-competitive procurement of ELV launch services in the small payload class, which includes the Scout (not launched from Cape Canaveral), and medium weight payload class, which includes the Delta. In July 1989, Goddard announced that Delta manufacturer McDonnell Douglas had been competitively selected to negotiate a contract for three firm and 12 optional missions in the latter category.
Lewis Research Center will manage competitive and non-competitive procurement of ELVs capable of launching intermediate class payloads, which includes the Atlas-Centaur, and large class, which includes the Titan IV.
At KSC and Cape Canaveral, control of civilian unmanned launches is gradually being shifted from NASA to the private sector. In October 1988, Martin Marietta and NASA announced an agreement under which Martin will use some KSC facilities to support its Titan III commercial launches. Earlier the same year, a 28-year era drew to a close when NASA launched a Delta rocket for the last time from Launch Complex 17 on the Cape. In November the following year, the last Delta in the NASA inventory was launched on the West Coast. It carried the Cosmic Background Explorer (COBE) into orbit from Vandenberg AFB. In July 1988, Launch Complex 17 was formally turned over to the Air Force, which will allow Delta manufacturer McDonnell Douglas to conduct commercial launches from the same complex.
The Air Force also will assume control of Launch Complex 36; one of the two pads will be used by General Dynamics for commercial launches. In September 1989, NASA conducted its final launch of an Atlas-Centaur from the Cape. Atlas-Centaur 68, carrying the last in a series of Fleet Satellite Communications (FltSatCom) spacecraft, was originally scheduled for launch in 1987. The mission was postponed when the Centaur's liquid hydrogen tank was accidentally punctured by a work platform.
KSC will continue to be involved with ELV launches of U.S. civil payloads, but, as is the case with Goddard and Lewis, the slant will be different. In early 1989, KSC was assigned oversight responsibility for all unmanned launches carrying NASA payloads from both the Cape and Vandenberg AFB.
The NASA unmanned launch operations team has completed many historic launches. These space flights far exceed in number the manned missions of the Mercury, Gemini, Apollo, Skylab and Apollo-Soyuz programs. More than 300 unmanned launches were conducted from 1958 through the end of 1989, with a high rate of success. Unmanned spacecraft have returned volumes of scientific data, much of it not otherwise obtainable. The effects on scientific knowledge as a whole are incalculable. Technological benefits from applications spacecraft have provided better weather forecasting, accurate storm tracking, a superb international communications system that permits live television coverage from almost anywhere in the world, highly accurate navigation for ships and planes, and inventories of the resources of land and ocean. The unmanned space program already has more than repaid the nation's investment in time, money and technical talent as it enters its fourth decade.
The following table provides launch dates and brief descriptions of some of the more significant missions.
Help Students Get Organized as School Year Begins
Too many high school students wait until well into the school year to get themselves organized to achieve to their best academic potential. It helps when the teacher is proactive and gives students tools to get the new year off to a strong start.
Ask Students to Identify Goals
Ask students to write one paragraph about their long-term career goal. If they are unsure about their career goal they may write paragraphs about several different job choices. Then ask students to make a list of ten things they will do this school year to start to prepare for their future career.
Ask student volunteers to share their lists with the class. Give feedback and remind students that each year in high school is crucial to building a strong overall G.P.A. required for college admissions and jobs.
Having students write down their goals and the specific steps they should undertake during the year to start to achieve their goals helps get them focused and goal oriented. Students who have written down their goals tend to take those goals more seriously.
Give Students an Outline of the Work to Be Covered in the Semester
Help students gain control of their time by giving them an outline for the semester’s work. List all due dates for major assignments and tests on the outline. Give students a chance to plan ahead to ensure that they get all work done by the due dates. Students like structure and they like to know when they will be busy with major work or study for a certain class. An outline of the semester helps students (and the teacher too!) stay focused and on track.
Provide Tips for Note Taking
Give students information about how to take notes effectively. Tell them to make it a priority, as soon as they enter the classroom, to copy any information on the blackboard into their notebooks. Tell them to be prepared to start taking notes as soon as class begins. Ask them to add to their notes information about any key topics discussed and to write down any key facts mentioned by the teacher.
It is crucial to be proactive in taking notes. If class is in session, students must always have a notebook open and be ready to write down important facts. Remind students that if a fact has been on the blackboard or discussed in class it is fair game to appear on a test.
Discuss "Multi-tasking" and "Strategies for Success"
As students proceed through high school one of the skills they must build is that of multi-tasking. As class work gets more complex and requires more homework time, students must learn to juggle multiple assignments and stay on top of a variety of assignment due dates.
Teach students to keep a calendar or notebook listing all due dates for assignments. Also tell them to break down large assignments into manageable chunks by working on them each night rather than waiting until right before the due date and then panicking.
Also, talk to students about approaching school work with a positive attitude. Encourage them to ignore negative thoughts and focus on achievements they have made and goals they have met. Give tips about how athletes use visualization and positive thinking to picture themselves doing well and then use that positive energy to focus on the task ahead and excel.
Here's a great resource I would recommend to hand out to your students:
20 Ideas to Help Students Get Organized: http://www.getorganizednow.com/art-students.html |
Halloween Storm Surge Shocks Earth
Earthlings were fortunate in 2003. Our planet's strong magnetic field shielded us from the full brunt of what scientists know as the Halloween Storms on the Sun. At the end of the year, seven major solar outbursts jolted Earth's upper atmosphere, setting records for extreme space weather.
Here's why things could have been much worse. The space-weather records set in the fall of 2003 included:
- The largest X-ray solar flare ever recorded
- The fastest-moving solar storm ever. It splashed over us at nearly 6 million mph.
- The hottest storm ever. It was tens of millions of degrees as it doused Earth.
The record solar storms that erupted in late October and early November 2003, particularly the largest X-ray flare ever recorded, threatened power systems on the planet's surface and communications and weather satellites in orbit above Earth.
In fact, the solar blasts caused power outages in Sweden, disturbed airplane routes around the world, and damaged 28 satellites, ending the service life of two.
Earth Satellites. Satellites are part of daily life. They are used for communications, weather forecasting, navigation, observing land, sea and air, other scientific research, and military reconnaissance. Men and women live and work aboard manned satellites – space shuttles and space stations.
Hundreds of satellites work for us in orbit above our planet, including 150 commercial communications satellites. Large doses of electrically charged solar particles can shock those satellites and even kill them.
Fortunately, power companies and satellite operators have begun to pay attention to space weather forecasts in recent years. Hearing warnings this time, they were able to protect their more vulnerable systems. The Halloween Storms of 2003 did not set a record for satellite damage.
Beyond Earth. The waves of solar energy blasted away from the Sun by the flares didn't stop at Earth. They went beyond to burn out the radiation monitor aboard the Global Surveyor spacecraft orbiting Mars. That instrument had been tracking the radiation future explorers might encounter on trips to the Red Planet. And beyond Mars near the planet Saturn, the Cassini spacecraft measured the intense energy from the Sun.
Months later, the energy from the storm reached beyond Pluto's orbit to the edge of the Solar System, washing over the Voyager spacecraft.
Solar Cycles. Scientists found the Halloween Storms intriguing because they came some three years after the peak of the most recent sunspot cycle.
Sunspot activity varies on the face of the Sun in eleven-year cycles. As the number of sunspots increases, so does the solar storm activity, which can sometimes leave the Earth's magnetic field shaking as though a giant hurricane were approaching.
A flare is a brilliant outbreak in the Sun's upper atmosphere at or near a sunspot. It is an explosive release of large amounts of electromagnetic radiation and huge quantities of charged particles.
Copyright 2004 Space Today Online |
Reasonable rules against harassment on the basis of sexual orientation do not violate the free speech rights of students and teachers. Here are some general guidelines for striking a balance between protecting free speech and protecting students against harassment.
Conduct by Students
Students do not abandon their rights to freedom of speech at the classroom door. A school should allow students to speak freely so long as their speech does not substantially interfere with the educational process or the rights of other students. However, certain conduct may be prohibited in a school setting even where it could not be punished in other settings.
For example, it does not violate freedom of speech when teachers require that students stop extraneous conversations and pay attention in class; this form of student speech interferes with the educational process. In the same way, harassment of students by other students interferes with the operation of the school and infringes on the rights of the harassed students to enjoy equal treatment under law and equal educational opportunity. Schools have a compelling interest in eliminating discrimination and harassment, and this goal may be pursued in a manner that does not abridge the right to freedom of expression.
Anti-harassment rules must be carefully written so that they do not punish speech, opinions, or beliefs in and of themselves, but instead punish impermissible conduct - conduct that targets a person for assault, threat, or vandalism on the basis of the victim's (actual or perceived) race, religion, national origin, disability, gender, or sexual orientation. They may also forbid harassing conduct, whether or not targeted to a particular person, that is so pervasive or intense as to create a hostile environment which hinders the ability of a person to get an education.
Well-written anti-harassment rules do not require anyone to change their beliefs about homosexuality, whatever those beliefs may be. However, the rules do require students, faculty, and staff to adhere to appropriate standards of conduct. Assaults, threats, vandalism, or use of derogatory epithets violate this minimum level of conduct. Expression of a person's deeply held beliefs does not.
An anti-harassment rule that is written too broadly can infringe on free speech. For example, a rule forbidding any statements that are critical of homosexuals or homosexuality is improper because it only allows a single government-approved viewpoint to be expressed. A rule forbidding any statement that offends another person is vague and unfair because the speaker could never know in advance when another person might be offended. However, schools may properly establish rules against deliberate statements, gestures, or physical contact which are intended to harass or interfere with another student's school performance or create an intimidating environment.
Conduct by Teachers
Like students, teachers have the right to form and express their own opinions. Nonetheless, a school district (like any employer) may legitimately expect its employees to fulfill their job responsibilities while they are at work. It is reasonable for a school district to include in its job descriptions a requirement that teachers treat all students with respect and without regard to sexual orientation. It does not violate teachers' free speech rights for the school to require them to abide by and enforce school rules of conduct, including rules against harassment.
For example, a teacher who tells anti-gay jokes in class is contributing to a hostile environment. A school could legitimately discipline this teacher for failure to perform job duties and for engaging in harassment or contributing to a discriminatory educational environment. On the other hand, a teacher could have a valid pedagogical purpose to discuss sexual orientation as part of the district-approved curriculum. It is not necessary for a teacher to avoid mention of homosexuality if the topic arises during class discussion.
Schools not only must have rules against harassment on paper; they also must be committed to enforcing their rules in practice. Even the best-written anti-harassment rules will not help a school unless the educational staff clearly understands them and is committed to enforcing them in a way that both upholds student free speech rights and protects students from harassment. |
People are often quick to link artificial intelligence with the future of every industry, including technology, medicine, and science. Many scientists believe that part of the answer to these environmental challenges lies in mining the information we have already generated online. Whereas humans cannot analyze such large volumes of data, AI can produce fast, accurate analyses around the clock when given a well-structured dataset and algorithm. Some environmental agencies have begun projects to gather data with the express purpose of finding trends that help them make predictions specifically about the health of the planet.
1. Autonomous Electric Vehicles
Autonomous vehicles are now on the road in some places around the world as companies explore expanding their transportation services. Being electric and driverless, they would save a great deal of fuel while also reducing deaths caused by driver carelessness. More importantly, drivers would no longer have to rely on oil to travel, reducing our current dependence on it. Climate change is driven in large part by pollution, and society's daily dependence on fossil fuels keeps emission levels high. Electric vehicles of any kind would disrupt the energy industry, which is one reason they face resistance today. As AI becomes more widespread, people will begin to invest in electric vehicles to save time and money.
2. Climate Change and Weather Pattern Prediction
Climate informatics uses AI to collect data from environmental sensors, weather satellites, and climate models. Information such as ocean or seismic measurements can change how weather forecasters predict shifting patterns. The Weather Channel estimates that about 70 people die annually in the U.S. from tornadoes and several hundred from flooding. Two out of three deaths occurred because people did not receive a warning or found themselves sitting in traffic for hours trying to get out before the storms hit. As more data scientists enter fields like bioinformatics and computer science, knowledge will grow about how machine learning can assist the scientific community in interpreting, analyzing, and predicting climate change and weather patterns.
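As a concrete illustration of the kind of workflow this paragraph gestures at, here is a minimal, hypothetical sketch of training a model to flag conditions associated with severe weather. The data is synthetic and the feature names (pressure drop, humidity, wind shear) are assumptions chosen for illustration; a real climate-informatics system would rely on far richer observations and physical models.

# A toy machine-learning sketch on synthetic "sensor" data (not a real dataset)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
pressure_drop = rng.normal(0, 4, n)    # hPa change over six hours (synthetic)
humidity = rng.uniform(20, 100, n)     # relative humidity, percent (synthetic)
wind_shear = rng.normal(10, 5, n)      # m/s (synthetic)

# Synthetic label: "severe" conditions are more likely with sharp pressure
# changes, high humidity and strong shear, plus random noise.
risk = 0.08 * pressure_drop**2 + 0.03 * humidity + 0.2 * wind_shear
severe = (risk + rng.normal(0, 3, n)) > np.percentile(risk, 90)

X = np.column_stack([pressure_drop, humidity, wind_shear])
X_train, X_test, y_train, y_test = train_test_split(X, severe, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))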
3. Renewable Energy
Energy is all about supply and demand, and data could significantly affect pricing, trading, storage, and marketing. Artificial intelligence could dramatically change renewable energy production and distribution because of smart data. AI can also monitor energy production from oil and gas sources, along with their processing and distribution. Collected data can further show, based on supply and demand, where and how best to use these resources.
4. Smart Agriculture
Artificial intelligence could revolutionize the agricultural industry, which is currently valued at more than $330 billion annually in the U.S. Issues that plague this industry include climate change, food security, and population expansion. Data continues to point to concerns over robots' ability to make decisions or correct problems. Their capabilities, however, can help predict water needs, nutritional value, supply and demand, pesticide levels, and water usage. Using robots to sample and test for diseases would also significantly reduce contamination from fertilizer or tainted water supplies.
5. Emergency Response and Management
In the last year, wildfires and hurricanes devastated millions of lives in Puerto Rico, Texas, and on the East and West coasts. Over the last decade, there have been 242 natural disasters, which have cost the government over $350 billion in emergency response. AI can analyze data in real time, drawing on satellites and social media trends, which would significantly increase the chance of successfully predicting disasters and providing the urgent warnings that save lives. Disaster simulations can also help people learn how to respond appropriately to emergencies, which would lower the death toll.
Today, AI can help to tackle global issues such as climate change and air pollution. Its ability to absorb and analyze large amounts of data can help protect the planet and improve our lives. |
Outline of Egyptian History:
Ancient Egyptian civilization was certainly one of the most long-lived and durable in all of world history. Among the factors contributing to its longevity are the Nile River, its naturally protected valley, and the stable weather conditions. The Nile valley is enclosed by the Mediterranean Sea on the north; the Arabian Desert and Red Sea on the east; the Libyan Desert on the west, and in ancient times danger seldom came from the south. By the Neolithic Period (ca. 5000 B.C.), the Egyptians already enjoyed a sedentary and stable existence. The annual inundation of the Nile induced them to construct dykes and dams to protect their settlements, and to dig canals to better irrigate and cultivate their fields. They began to store harvest crops against times of famine, and they learned how to gauge the rise and fall of the inundation waters. One might even say that the regular annual rhythm of the river was the primary catalyst underlying the organization and political unification of the country! In this sense, then, Herodotus, the "father of history", was surely correct when he wrote in 449 B.C. that "Egypt is the gift of the Nile".
Neolithic Period (5000 B.C.):
Egyptian civilization at this period is known as the "Nagada culture", which can be divided into three phases. The culture first arises in the Fifth Millennium B.C. in Upper Egypt between Abydos in the north and Armant in the south, and subsequently spread over the rest of Upper Egypt. The first - or Nagada I - phase achieved trade relations with the Kharga Oasis, reached the Red Sea to the east, and the First Cataract to the south. The process of consolidating the country, which resulted in historical times in a unified Egypt, may have begun under the Nagada II phase. Both trade relations and conflicts between Upper and Lower Egypt are attested at this time. Especially noteworthy during this period are the fascinating early mural paintings discovered in a tomb at Hierakonpolis (ca. 3500 B.C.), and the ceramic decorations displaying human and animal figures, as well as ships complete with oars and cabins. The third and most advanced Nagada III phase seems to reveal influence both from Lower Egypt and other cultures in the Near East. Autonomous provinces were established and consolidated until two separate kingdoms eventually came into being: one in Upper Egypt with its capital at Nekheb (El Kab, near Edfu); and the other in Lower Egypt, with its capital at Buto (Tell el-Fara'in, near Desouq).
The Historical Period (ca. 3000-332):
Was divided into thirty-one dynasties, or royal families, by the Egyptian priest Manetho, who lived between 323 and 245 B.C. Manetho wrote his history of Egypt beginning with Menes of the First Dynasty and ending with Alexander the Great in 332 B.C. We can divide his dynasties further into several discrete eras.
The Early Dynastic Era (ca. 3000-2750):
Consists of the first two dynasties, and derives its name from the town of origin of the earliest kings: Thinis. The first capital of the newly unified country to be established - by Hor-Aha (Menes), the fourth king of the First Dynasty - was at Memphis. Hieroglyphic writing also came into use at this time on a modest scale for simple economic and other types of documents. These early jottings mostly served to list names, places or objects. A few experiments with stone as a building material, instead of mud brick, were also undertaken. Royal tombs were constructed at both Sakkara and Abydos. Among the famous representational works from this period is the Narmer palette, which commemorates the defeat of the Lower Egyptians at the hands of the Upper Egyptians, and the unification of the two halves of the country.
The Old Kingdom (ca. 2705-2155 B.C.):
This period includes Dynasties 3—6. Memphis remained the political capital, but Heliopolis grew as the most important religious center. The pharaohs were buried in the great pyramid necropolises of Sakkara, Giza, Abusir and Dahshur (to the southwest of Cairo). The Old Kingdom was characterized by a highly bureaucratic and organized central administration. In the transition period from the Fifth to Sixth Dynasties, the corpus of religious mortuary literature known as the Pyramid Texts makes its first appearance inside the burial chambers of the pyramids. Members of the royal family and high officials were interred in mastabas or in rock-cut tombs. The officials' sepulchers were located either around the pyramids of the pharaoh they had served, or in their own administrative province. The walls were richly decorated with painted reliefs of scenes of daily life and religious mortuary cult activities. The most famous kings of this era include Djoser (Netjer-Khet) of Dynasty 3, owner of the Step Pyramid at Sakkara, which was constructed by the great architect Imhotep; King Sneferu of the Fourth Dynasty built one pyramid at Meidum and two at Dahshur.
His successors Khufu, Khafra, and Menkaura constructed theirs at Giza; these last three are considered one of the seven wonders of the ancient world. In the Fifth Dynasty, the cult of the falcon-headed sun god Re exerted tremendous influence over the country. Sun-temples were erected near the pyramids north of Sakkara and at Abusir.
The First Intermediate Period (ca. 2155-2134 B.C.):
Towards the end of the Old Kingdom, as central authority disintegrated, what contacts had existed between Egypt and Nubia, Phoenicia and Palestine were broken off. The officials in charge of the many Egyptian provinces struggled to gain their own independence, and political and economic chaos resulted. The period from Dynasties 7 to 10, also known as the Heracleopolitan Period, was one of civil war and starvation. Two weak ruling houses are attested: one at Thebes in the south, and the other at Heracleopolis in the north (Ehnasia near the Fayum). This was the classical period of the Egyptian language, and several descriptive accounts tell of the woes of the age, which lasted more than a century and a half.
The Middle Kingdom (ca.2134-1781 B.C.):
Dynasties 11-12 come under this heading. The country was finally reunited under the Theban princes whose capital in the south became the religious center for all of Egypt. It was here at Thebes that King Mentuhotep II built his famous mortuary temple of Deir el-Bahari. In the Twelfth Dynasty, however, the capital shifted to the north, near El-Lisht, and the pharaohs were buried in mud-brick pyramids (Dahshur, Fayum, and Beni Suef). The older Pyramid Texts evolved into the Coffin Texts, now no longer restricted to use by the king alone. They adorned the inside and outside of coffins, and are later attested in the tombs of certain high officials.
Provincial "monarchs" and other independent high officials were allowed to excavate or construct their tombs in their own districts. These were provided with beautiful mortuary equipment and decorated with vivid scenes of both daily life and life in the next world (Beni Hassan, El Bersheh, Thebes, and Aswan).
Great irrigation projects were undertaken during the Twelfth Dynasty. Attempts were made to irrigate the Fayum, and reservoirs and canals were constructed under Sesostris (Senusret) II, Sesostris (Senusret) III and Amenemhat III.
The Second Intermediate Period (ca. 1781-1550 B.C.) (Dynasties 13-17):
After a period of political and economic turmoil, most of the country was overrun for about a century by a Near Eastern people known as the Hyksos, or "rulers of foreign lands" (Dynasties 15-16). Composed of immigrant tribes of Syrians, Palestinians and Hurrians, the Hyksos found refuge in the fertile Nile valley. They introduced into Egypt the horse and horse-drawn chariot, as well as new types of daggers, swords and composite bows, all of which were to play a large role later on in Egyptian military history. In terms of artistic achievement or economic prosperity, the Hyksos domination was a relatively decadent and impoverished era.
The Hyksos worshipped the deity Seth (Sutekh), god of strength and confusion. Avaris in the eastern Delta between Tanis and Qantir served as their capital. During the Seventeenth Dynasty, however, the Theban princes had been consolidating their own power in the south, and eventually moved to oust the foreigners from their homeland. Finally, under the leadership of Seqenenre, Kamose and Ahmose, the Thebans expelled the Hyksos, reunited the country and initiated a new dynasty.
The New Kingdom (ca. 1550-1070 B.C.) (The Empire Period):
This period includes Dynasties 18—20, and is considered by many to be the golden age of Egyptian civilization. In the Eighteenth Dynasty, Thebes was both the political and religious center of the realm. Magnificent temples were erected there for the state god Amon-Re. The temple of Karnak functioned not only as the major religious center, but also the political, economic and diplomatic focus for everything, from the delivery of local taxes from across the river to foreign tribute from provinces such as Nubia, Syria-Palestine and Phoenicia, and from countries such as Punt (Somalia?), Libya, Crete, the Aegean islands and Mesopotamia. Famous rulers of Dynasty 18 include: Queen Hatshepsut (1488—1470 B.C.), the best-known queen-cum-pharaoh of Egypt. Her relatively peaceful reign, trade relations with Punt and building activities at Thebes (Deir el-Bahari and Karnak) are especially noteworthy.
Tuthmosis III (1490—1436 B.C.), whose military exploits in the north, northeast and south earned him the title of creator of the Egyptian empire. He also conducted an active building campaign, especially at Thebes (Karnak, Luxor).
Amenophis III (1403-1365 B.C.), with his prosperous and peaceful reign, and friendly diplomatic relations with many foreign countries in western Asia. Egyptian art and culture reached a zenith during his rule.
Amenophis IV (Akhenaten) (1365-1348 B.C.), the first to establish a form of monotheism in Egypt. Akhenaten's great religious revolution involved the replacement of the state god Amon-Re with the solar deity Aten. Artistic conventions and political traditions were also totally restructured. The king moved the capital to a completely new city in Middle Egypt (Akhetaten, now Tell el-Amarna). Many of the Egyptian holdings in Syria-Palestine which Tuthmosis III had secured were nearly lost under Akhenaten's reign. Tutankhamen (1347—1337 B.C.), a successor of Akhenaten, restored the cult of Amon-Re, and abandoned Tell el-Amarna in order to return to tradition. The discovery of his nearly intact tomb in 1922 revealed the wealth and prosperity of the Eighteenth Dynasty.
Horemheb (1332-1305 B.C.), who served as generalissimo and then king after the death of Tutankhamen, and protected the country from foreign intruders.
In Dynasty 19 (ca. 1305-1195 B.C.), the capital was moved once again, this time to Pi-Ramesses in the eastern Delta, the origin of the Ramesside family and a more strategic location vis-à-vis Syro-Palestinian affairs. The Hittites in Asia Minor were Egypt's chief rival at this period; both sides struggled for control of the Syro-Palestinian region (Battle of Kadesh).
In the reign of Ramesses III (ca. 1196 B.C.), Aegean tribes known as the Sea Peoples threatened to infiltrate the Egyptian Delta region. Economic and cultural decline, coupled with the threat of foreign invasion, contributed to the weakening of central authority; strikes and cases of corruption are documented in the ancient sources. At Thebes, the priesthood of Amon achieved ever greater political influence.
The Third Intermediate Period (1070-750 B.C.):
Dynasties 21-24 are generally ascribed to this era. In Dynasty 21, Egypt was divided once again into two regions. In the south, the theocratic state was ruled by the priesthood of Amon-Re at Karnak, while the north was controlled by the priests of Tanis. The Twenty-second to Twenty-fourth Dynasties were of Libyan origin.
The Late period (750-332 B.C.):
The ruling house of Nubia succeeded in founding the Twenty-fifth Dynasty. Egypt was reunited under King Shabaka, and the capital was moved to Napata near the Fourth Cataract in the Sudan. At the end of this period, the Assyrians conquered Egypt (671 B.C.).
The Twenty-sixth, or Saite, Dynasty achieved a renaissance of Egyptian civilization. Art, language and many other aspects of traditional Egyptian culture were resurrected from bygone classical ages. The dynastic capital was at Sais in the western Delta, until the Persians under Cambyses conquered Egypt in 525 B.C.
During Dynasties 27—30, Egypt remained under Persian rule, occasionally succeeding in placing native Egyptian rulers on the throne.
Graeco-Roman Period (332 B.C.-A.D. 395):
In 332 B.C. the country was again invaded, this time by Alexander the Great, who founded the city of Alexandria in the following year. After his death in 323 B.C., Egypt fell under Ptolemaic rule until the death of Antony and Cleopatra VII in 30 B.C. The country then became a Roman province until A.D. 395. Christianity then arose and Alexandria became a theological center of the new religion.
The Byzantine Period began in A.D. 395 in the time of Arcadius, the Emperor of the East.
In the year A.D. 640, Amr Ibn el-As, the Muslim general of Caliph Omer Ibn el-Khattab, conquered Pelusium (near Suez) and defeated the Byzantines at Heliopolis. His conquest was completed in 646 with the taking of Alexandria, and Egypt then became an Islamic province.
The Arab conquest of 641 by the military commander Amr ibn al As was perhaps the next most important event in Egyptian history, because it resulted in the Islamization and Arabization of the country, which endure to this day. Even those who clung to the Coptic religion, a substantial minority of the population in 1990, were Arabized; that is, they adopted the Arabic language and were assimilated into Arab culture.
Although Egypt was formally under Arab rule beginning in the ninth century, hereditary autonomous dynasties arose that allowed local rulers to maintain a great deal of control over the country's destiny. During this period Cairo was established as the capital of the country and became a center of religion, learning, art, and architecture. In 1260 the Egyptian ruler Qutuz and his forces stopped the Mongol advance across the Arab world at the battle of Ayn Jalut in Palestine. Because of this victory, Islamic civilization could continue to flourish when Baghdad, the capital of the Abbasid caliphate, fell to the Mongols. Qutuz's successor Baybars I inaugurated the reign of the Mamluks, a dynasty of slave-soldiers of Turkish and Circassian origin that lasted for almost three centuries.
In 1517 Egypt was conquered by Sultan Selim I and absorbed into the Ottoman Empire. Since the Turks were Muslims, however, and the sultans regarded themselves as the preservers of Sunni Islam, this period saw institutional continuity, particularly in religion, education, and the religious law courts. In addition, after only a century of Ottoman rule, the Mamluk system reasserted itself and Ottoman governors became at times virtual prisoners in the Citadel, the ancient seat of Egypt's rulers.
The modern history of Egypt is marked by Egyptian attempts to achieve political independence, first from the Ottoman Empire and then from the British. In the first half of the nineteenth century, Muhammad Ali, an Albanian and the Ottoman viceroy in Egypt, attempted to create an Egyptian empire that extended to Syria and to remove Egypt from Turkish control. Ultimately he was unsuccessful, and true independence from foreign powers would not be achieved until midway through the next century.
Foreign (including British) investment in Egypt and Britain's need to maintain control over the Suez Canal resulted in the British occupation of Egypt in 1882. Although Egypt was granted independence in 1922, British troops were allowed to remain in the country to safeguard the Suez Canal. In 1952 the Free Officers, led by Lieutenant Colonel Gamal Abdul Nasser, took control of the government and removed King Faruk from power. In 1956 Nasser, as Egyptian president, announced the nationalization of the Suez Canal, an action that resulted in the tripartite invasion by Britain, France, and Israel. Ultimately, however, Egypt prevailed, and the last British troops were withdrawn from the country by the end of the year.
No history of Egypt would be complete without mentioning the Arab-Israeli conflict, which has cost Egypt so much in lives, territory, and property. Armed conflict between Egypt and Israel ended in 1979 when the two countries signed the Camp David Accords. The accords, however, constituted a separate peace between Egypt and Israel and did not lead to a comprehensive settlement that would have satisfied Palestinian demands for a homeland or brought about peace between Israel and its Arab neighbors. Thus Egypt remained embroiled in the conflict on the diplomatic level and continued to press for an international conference to achieve a comprehensive agreement.
Mubarak is the current president of Egypt. He served actively in the army and was the commander of the air force during the 1973 war (also called the Yom Kippur War). The successful performance of the air force in that war is largely credited to him.
He was promoted to Air Marshal in 1974. In 1975, President Sadat chose him as his vice-president, and he remained in that post until Sadat's assassination in 1981. He was also made secretary-general of Sadat's National Democratic Party.
Mubarak was elected president on 13 October 1981. He soon declared his commitment to Sadat's peace path. He also released the political detainees who were imprisoned by Sadat.
In the early years of his rule, Mubarak worked hard to restore severed relations with Arab states and maintain good relations with the United States and the Soviet Union, later Russia.
Domestically, he introduced economic reforms and granted more political and press freedoms to the society. In recent years he also encouraged a privatization scheme planned by the government to reactivate the economy.
Since the beginning of the 1990s, Mubarak was challenged by terrorist attacks launched by fundamentalist groups.
In 1995, Mubarak escaped an assassination attempt in Addis Ababa, Ethiopia, while he was attending an African summit. In the aftermath of the attack, Mubarak adopted a hard-line position against extremists until he had largely uprooted terrorism.
He also supported and took part in the US-led Gulf War against Iraq in 1990-91, which ended with the successful liberation of Kuwait.
Also under his rule, Egypt supported and sponsored peace talks between Palestinians and Israelis.
Mubarak also showed moral support for the US anti-terrorism efforts following the terrorist attacks on New York and Washington on 11 September 2001.
Mubarak was reelected three times, by referenda in 1987, 1993 and 1999, each time with a landslide vote in his favor.
Kids love guessing games and ours are no exception. I remember teachers holding their fingers up behind their backs and the child who guessed the closest to the number of fingers they were holding up got a privilege, such as walking out first to recess. The game never got old because there was always a new opportunity to guess. This continued into higher grades where teachers would write down a number and have students guess. In kindergarten and preschool, guessing the number is exciting enough and needs no further reward.
How many gems am I holding? Pre-K and Kinder Guessing Game From 0 to 5
You will need:
- 5 glass gems or other small objects
- That’s it!
This game is so simple that it requires no set-up except gathering five gems or other small items. If you use something other than gems, it should be easy for the children to grasp without dropping. Tell them you are going to put between zero and five gems into their hands, which they will then hide behind their backs. They will not know how many gems they are holding. The other students take turns guessing how many gems there are, and then the student with the gems (or no gems) takes his hands out from behind his back and opens them for all to see and determine who was closest. The game continues until each child has had a turn. This is a great game for rainy days and can be done spontaneously, as it requires no prep time! You can also change it up into a guessing game about colors by placing a colored bear or other object into their hands and having everyone guess the color.
MORE Hands-on, Playful ways to teach the numbers 0-5 with these activities from the Early Childhood Education Team. Follow the hashtag #TeachECE to find all of the FREE learning activities from the team.
A computer virus is a man-made program designed to damage computer programs or the computer itself. A virus can destroy data, useful applications, programs, and even the operating system. Computer viruses hide themselves in other host files and are not visible to us. They can corrupt or delete files and programs.
The first computer virus, "C-Brain", was developed in 1987 to stop the illegal reproduction of software from Alvi's Brain Computer shop.
But nowadays, viruses are developed for the following purposes:
Computer Viruses affect the computer in many ways. It can destroy data, files or programs and cause the system malfunction. The destructive effects of viruses are different according to their types.
Some of the effects of virus are as follows:
When a host file or program is used, the virus becomes active and tries to infect other files.
Computer viruses can spread from one computer to another in the following ways:
All computer viruses have different characteristics, but every virus affects the computer in some way.
If the computer is affected by the viruses, we can find the following symptoms:
A computer virus is a destructive program that disturbs the normal functioning of a computer. It is designed intentionally to harm the data, information, and programs of the computer. When a virus enters a file or program, it becomes active and performs its destructive tasks.
Any four reasons for creating computer viruses are as follows:
Any four destructive effects of computer viruses are as follows:
Computer viruses can spread from one computer to another in the following ways:
Any four symptoms of computer viruses are as follows:
System infector virus infects the various parts of the operating system or master control program software.
Some reasons for creating viruses are:
Viruses spread from one computer to other computers in the following ways:
Before we can talk about hybrid engine power, let's first explain how hybrid engines work. Hybrid engines combine two different sources of power to move the vehicle. The first source is the traditional internal combustion engine, which produces power by burning fuel, usually gasoline. The second source is usually an electric motor that gets its power from a battery pack within the vehicle. The engine and the electric motor work together to produce the power the vehicle needs to operate. However, the internal combustion engine in a hybrid car is typically much smaller than usual for efficiency and to accommodate the electric motor. This dichotomy raises a big question for green driving enthusiasts wanting to combine engine performance and fuel economy: How can a hybrid engine create more power?
One way to boost hybrid power is to update the batteries. For instance, the battery pack used in the third-generation Toyota Prius is smaller and more efficient [source: Garrett] than those in previous versions of the car, which gives it a slightly higher power output [Source: Voelcker]. The second generation's battery pack was rated for 28 horsepower, compared to the third generation's 36 horsepower [source: Toyota]. Although it's only a slight power improvement over earlier generations, battery technology is moving towards lithium-ion batteries that can potentially produce even more power in the near future.
These new batteries have plenty of upside: Lithium-ion batteries can produce more power in the same amount of space, because lithium has a greater energy density than the nickel metal hydride batteries used in most hybrids -- and it weighs less as well. Due to these advantages, you may start to see lithium-ion batteries used in hybrid vehicles relatively soon.
In addition to installing more powerful batteries, the electric motors can increase their power by upping the voltage. The 2010 Prius increases its power from a 500-volt system in the previous version, to 650 volts in the redesigned model [Source: Toyota].
It's easy to see that an increase in electric motor power, combined with a powerful combustion engine, is just the recipe needed to create a hybrid that can rival traditional vehicles.
2.2) Draw a simple structure to illustrate a nucleotide and clearly indicate the 3´, 5´ ends on the sugar where the phosphodiester linkage would form.
DNA is a double helix in which two strands are linked together by hydrogen bonds between complementary bases. The two strands run in opposite directions. Two nucleotides are linked through a 3'-5' phosphodiester linkage to form a dinucleotide. In this linkage, the phosphodiester bond forms between the 3' carbon atom of the sugar of one nucleotide and the 5' carbon atom of the sugar of the adjacent nucleotide.
Presidential elections always come with a bit of apprehension, but this year's upcoming election carries significantly more heated rhetoric and tension. In recent years, psychologists and child experts have repeatedly documented how political discourse has had a marked impact on youth in America. It can manifest as general anxiety and worry about their families, and it can create rifts in friend groups, much as politics sometimes does among adults. An element that adds stress to this dynamic for youth in particular is that they have no say in the electoral process and often find it difficult or confusing to understand. We've compiled some resources here to help you discuss the electoral process with your mentee, so that you may ease some of that stress and help them better understand how elections work.
The easiest entry point for a discussion like this is starting with values that are important to your mentee. Explore with them what is important to them and why. From there, you can collectively think about how those values play out in local and national politics. Are those values being addressed? How and why? Are people from their community being heard? If not, what can be done about that? This is a good place to learn about the political process. If your mentee has questions about your personal political beliefs and you are unsure how to respond, you can always turn the question back to them and ask how they feel about that topic; remember to stay age- and developmentally appropriate in your conversations. If you still have questions about the best way to move forward with that discussion, when you have a moment and some privacy, check in with your mentor director on the best ways to navigate that conversation.
Luckily for us, a lot of youth-serving organizations have already put together all kinds of fantastic resources for students to learn about elections and politics. Scholastic has a wonderful website with some introductory lessons on how government works, democracy in action, and even some stories about laws that kids helped pass! This is a great resource for grades 4 – 6. Working with an even younger student? Here is a list of books to read, including for early readers, that help explain elections. Older students have covered some of the basic workings of elections and government in school, and are often more interested in ways that they can have a voice. PBS has an outstanding collection of resources for middle and high school students that delves into the deeper understandings of issues such as the party system, a history of voter rights, current party platforms, and even media literacy — a crucial tool for being able to navigate the abundance of information we encounter every day. Take some time to read through some of these articles together and discuss them with your mentee.
Remember, kids pick up a lot from the adults around them. They may have strong feelings about particular candidates or policies mostly because they’ve heard their adults speaking about them. If they voice a strong opinion, focus on their feelings and help them understand what it is that’s making them have a strong reaction to a policy or politician. They may not be able to vote, but there could be something the two of you could do together to take action on an issue that is important to them (maybe learning more about the topic together, learning how to be a better ally to someone who is being discriminated against, forming a club at school). Focusing on actions that you can take is beneficial for a lot of reasons: it builds leadership skills, it creates a sense of self-confidence, and it is practice for being an engaged community member as an adult. |
Any equation that relates the first power of x to the first power of y produces a straight line on an x-y graph. The standard form of such an equation is Ax + By + C = 0 or Ax + By = C. When you rearrange this equation to get y by itself on the left side, it takes the form y = mx + b. This is called slope intercept form because m is equal to the slope of the line, and b is the value of y when x = 0, which makes it the y-intercept. Converting from slope intercept form to standard form takes little more than basic arithmetic.
TL;DR (Too Long; Didn't Read)
To convert from slope intercept form y = mx + b to standard form Ax + By + C = 0, write the slope m as a fraction, move all the terms to one side of the equation, and multiply through by the denominator of the fraction to get rid of it. If you prefer a positive x coefficient, multiply both sides by -1.
The General Procedure
An equation in slope intercept form has the basic structure y = mx + b.
If m is an integer, then B will equal 1.
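If you want to automate the procedure, here is a short Python sketch (not part of the original article) that clears the fraction and flips the signs exactly as described above. It is shown running on the slope -3/7 and y-intercept 10 used in example (2) below.

from fractions import Fraction
from math import lcm

def slope_intercept_to_standard(m, b):
    # Convert y = m*x + b into integers A, B, C with A*x + B*y = C.
    m, b = Fraction(m), Fraction(b)
    factor = lcm(m.denominator, b.denominator)   # multiply through to clear fractions
    A, B, C = -m * factor, factor, b * factor    # from -m*x + y = b
    if A < 0:                                    # make the x coefficient positive
        A, B, C = -A, -B, -C
    return int(A), int(B), int(C)

print(slope_intercept_to_standard(Fraction(-3, 7), 10))   # (3, 7, 70), i.e. 3x + 7y = 70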
(1) - The equation of a line in slope intercept form is:
What is the equation in standard form?
You can leave the equation like this, but if you prefer to make x positive, multiply both sides by -1:
(2) - The slope of a line is -3/7 and the y-intercept is 10. What is the equation of the line in standard form?
The slope intercept form of the line is
Following the procedure outlined above:
About the Author
Chris Deziel holds a Bachelor's degree in physics and a Master's degree in Humanities. He has taught science, math and English at the university level, both in his native Canada and in Japan. He began writing online in 2010, offering information on scientific, cultural and practical topics. His writing covers science, math and home improvement and design, as well as religion and the oriental healing arts.
The Deaf community uses the lowercase deaf when referring to the audiological condition and the uppercase Deaf to refer to a group of deaf people who share a culture and a language: American Sign Language (ASL). The members of the Deaf community have inherited sign language as a result of a distinct culture throughout time. They use it as their primary method of communication and also hold a set of beliefs about their connection to society and their culture. Those who are deaf may choose to identify with the Deaf community or not. Those who are hearing may also identify as Deaf, for example, the hearing children of deaf parents.
Hard of hearing can describe a person with mild to moderate hearing loss. They may choose to identify with the Deaf community or the hearing community or be in between.
Archaic and/or offensive ways to refer to those who are deaf or hard of hearing include: hearing-impaired, deaf and dumb, and deaf-mute.
(Information taken from the National Association of the Deaf, "Community and Culture-Frequently Asked Questions")
According to the National Institute on Deafness and Other Communication Disorders, "Quick Statistics About Hearing:"
31 July 2022
Mammals can produce their own body heat and control their body temperatures. This process is known as endothermy or warm-bloodedness.
Scientists believe that it may be the reason why mammals likely rule almost every ecosystem. Warm-blooded mammals are more active than cold-blooded animals. They can live in different environments, from the frozen arctic to the boiling desert. And they reproduce faster.
The soft tissues that would give information about warm- or cold-bloodedness are rarely preserved in fossils. So, paleontologists, or experts in the study of fossils, do not know exactly when mammals developed and changed into warm-blooded creatures.
A group of scientists tried to answer that question in a study recently published in Nature.
Ricardo Araújo is a paleontologist at the University of Lisbon. Araújo and a group of researchers proposed that the shape and size of the inner ear structures called canals could be used to study body temperature.
The movement of fluid through the ear canals helps the body to preserve balance and movement. This fluid in cold-blooded animals is cooler and thicker, meaning wider canals are needed. Warm-blooded animals have less ear fluid and smaller canals.
The research team suggested that as body temperature increased and the animals became more active, the shape and size of ear canals changed to preserve balance and movement.
The researchers compared ear canals in 341 animals. They said the ear canals showed that warm-bloodedness, or endothermy, appeared around 233 million years ago, millions of years later than some previous estimates.
Araújo said, "Endothermy is a defining feature of mammals, including us humans. Having a ... high body temperature regulates all our actions and behaviors."
But the first creatures that showed warm-bloodedness are not officially considered to be mammals. These ancient animals known as mammaliamorph synapsids had traits linked with mammals. The first true mammals, the researchers said, appeared roughly 30 million years later.
Importance of being warm-blooded
Ken Angielczyk of the Field Museum in Chicago is a co-leader of the study. He said, "Given how central endothermy is to so many aspects of the body plan, physiology and lifestyle of modern mammals, when it evolved in our ancient ancestors has been a really important unsolved question..."
Endothermy evolved at a time when important elements of the mammal body plan were falling into place, including changes to the backbone, breathing system, and hearing system.
Having warm-bloodedness also helped mammals at an important evolutionary moment when dinosaurs and flying reptiles first appeared on Earth. And mammals took over after the dinosaur mass extinction event 66 million years ago. Among today's animals, mammals and birds are warm-blooded.
"It is maybe too far-fetched, but interesting, to think that the onset of endothermy in our ancestors may have ultimately led to the construction of the Giza pyramids or the development of the smartphone," Araújo said.
"If our ancestors would have not become independent of environmental temperatures, these human achievements would probably not be possible."
I'm John Russell.
John Russell adapted this story for VOA Learning English based on Nature, Scientific American and Reuters news reports.
Words in This Story
ecosystem – n. everything that exists in a particular environment
preserve – v. to keep (something) in its original state or in good condition
fossil – n. something (such as a leaf, skeleton, or footprint) that is from a plant or animal which lived in ancient times and that you can see in some rocks
feature – n. an interesting or important part, quality, ability, etc.
regulate – v. to set or adjust the amount, degree, or rate of (something)
trait – n. a quality that makes one person or thing different from another
aspect – n. a part of something
evolve – v. to change or develop slowly often into a better, more complex, or more advanced state : to develop by a process of evolution
far-fetched – adj. not likely to happen or be true |
Study Guide for Essentials of Anatomy & Physiology
The all-new Study Guide for Essentials of Anatomy & Physiology offers valuable insights and guidance that will help you quickly master anatomy and physiology. This study guide features detailed advice on achieving good grades, getting the most out of the textbook, and using visual memory as a learning tool. It also contains learning objectives, unique study tips, and approximately 4,000 study questions with an answer key – all the tools to help you arrive at a complete understanding of human anatomy.
- Study guide chapters mirror the chapters in the textbook making it easy to jump back and forth between the two during your reading.
- Approximately 4,000 study questions in a variety of formats – including multiple choice, matching, fill-in-the-blank, short answer, and labeling – reinforce your understanding of key concepts and content.
- Chapters that are divided by the major topic headings found in the textbook help you target your studies.
- Learning objectives let you know what knowledge you should take away from each chapter.
- Detailed illustrations allow you to label the areas you need to know.
- Study tips offering fun mnemonics and other learning devices make even the most difficult topics easy to remember.
- Flashcard icons highlight topics that can be easily made into flashcards.
- Answer key lists the answers to every study question in the back of the guide.
Unit 1: Constituents of the Human Body
1. Organization of the Human Body
2. The Chemistry of Life
3. Anatomy of Cells
4. Physiology of Cells
5. Cell Growth and Reproduction
6. Tissues and Their Functions
Unit 2: Support and Movement
7. Skin and Its Appendages
8. Skeletal Tissues
9. Bones and Joints
10. Muscular System
Unit 3: Communication, Control, and Integration
11. Cells of the Nervous System
12. Central Nervous System
13. Peripheral Nervous System
14. Sense Organs
15. Endocrine System
Unit 4: Transportation and Defense
17. Anatomy of the Cardiovascular System
18. Physiology of the Cardiovascular System
19. Lymphatic and Immune Systems
Unit 5: Respiration, Nutrition, and Excretion
20. Respiratory System
21. Digestive System
22. Nutrition and Metabolism
23. Urinary System and Fluid Balance
Unit 6: Reproduction and Development
24. Male Reproductive System
25. Female Reproductive System
26. Growth and Development
27. Human Genetics and Heredity
Andrew Case, RN, APRN, MSN, LT, USNR |
This article is about Machine Learning
How to Evaluate Machine Learning Algorithms
By NIIT Editorial
Published on 28/06/2021
Machine learning (ML) is the study of computer algorithms that improve automatically through experience and the use of data. It is a component of artificial intelligence. Machine learning algorithms are programs (math and logic) that adjust themselves to perform better as they are exposed to more data. The "learning" part of machine learning means that the programs change how they process data over time, much as humans change how they process data by learning. Machine learning algorithms are used in a wide variety of applications, such as medicine, email filtering, speech recognition, and computer vision, where it is difficult to develop traditional algorithms to perform the needed tasks.
Different learning styles in machine learning algorithms
1. Supervised Learning - This type of algorithm uses a target/outcome variable (or dependent variable) that is to be predicted from a given set of predictors (independent variables). Using this set of variables, we produce a function that maps inputs to the desired outputs. The training process continues until the model achieves the desired level of accuracy on the training data.
Examples of Supervised algorithms include: Logistic Regression and the Back Propagation Neural Network.
2. Unsupervised Learning - In this type of algorithm, there is no target or outcome variable to predict or estimate. It is used for clustering a population into different groups, which is widely applied for segmenting customers into groups for specific interventions.
Example algorithms include: The Apriori algorithm and K-Means.
3. Reinforcement Learning - In this type of algorithm, the machine is trained to make specific decisions. The machine is exposed to an environment where it trains itself repeatedly by trial and error. In this process, the machine learns from experience and tries to capture the best possible knowledge to make accurate business decisions.
Example of Reinforcement Learning: Markov Decision Process.
Commonly Used Machine Learning Algorithms
1. Linear Regression
It is used to estimate real values (cost of products, number of calls, total sales, etc.) based on continuous variable(s). Here, we establish a relationship between the independent and dependent variables by fitting a best-fit line. This best-fit line is known as the regression line and is represented by a linear equation.
Y = a*X + b
In this equation:
Y – Dependent Variable
a – Slope
X – Independent variable
b – Intercept
The coefficients a and b are derived by minimizing the sum of the squared differences between the data points and the regression line.
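As a quick illustration of this calculation, here is a minimal Python/NumPy sketch (not from the article, using made-up data) that computes a and b directly from the least-squares formulas and then predicts Y for a new X.

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # independent variable
Y = np.array([2.1, 4.3, 6.2, 7.9, 10.1])   # dependent variable

a = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)  # slope
b = Y.mean() - a * X.mean()                                                # intercept

print(a, b)        # coefficients of the best-fit line Y = a*X + b
print(a * 6 + b)   # predicted Y for X = 6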
2. Logistic Regression
It is used to estimate discrete values (binary values like 0/1, yes/no, true/false) based on a given set of independent variable(s). In other words, it predicts the probability of an event by fitting data to a logit function. Hence, it is also known as logit regression. Since it predicts a probability, its output values lie between 0 and 1 (as expected).
Coming to the math, the log odds of the outcome are modeled as a linear combination of the predictor variables: ln(p / (1 - p)) = b0 + b1*X1 + b2*X2 + ...
Here, p represents the probability of the presence of the characteristic of interest. The coefficients are chosen to maximize the likelihood of observing the sample values, rather than to minimize the sum of squared errors (as in ordinary regression).
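The following minimal scikit-learn sketch (the article does not prescribe a library, and the pass/fail data is invented) fits a logit model and returns probabilities between 0 and 1 as described above.

import numpy as np
from sklearn.linear_model import LogisticRegression

hours = np.array([[0.5], [1.0], [1.5], [2.0], [2.5], [3.0], [3.5], [4.0]])  # hours studied
passed = np.array([0, 0, 0, 0, 1, 1, 1, 1])                                 # fail = 0, pass = 1

model = LogisticRegression().fit(hours, passed)
print(model.predict_proba([[2.2]]))   # probability of each class for 2.2 hours
print(model.predict([[2.2]]))         # predicted class label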
3. Decision Tree
This is a type of supervised learning algorithm that is mostly used for classification problems. The method works for both categorical and continuous dependent variables. The algorithm splits the population into two or more homogeneous sets based on the most significant attributes/independent variables, making the groups as distinct as possible.
To decide how to split the population into distinct groups, it uses various measures such as Gini, Information Gain, Chi-square, and Entropy.
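Here is a small, hedged example of the idea using scikit-learn's DecisionTreeClassifier on invented [age, income] data; the criterion argument corresponds to the splitting measures named above.

from sklearn.tree import DecisionTreeClassifier

X = [[25, 30], [35, 40], [45, 80], [20, 20], [50, 90], [30, 60]]   # [age, income in thousands]
y = [0, 0, 1, 0, 1, 1]                                             # 0 = does not buy, 1 = buys

tree = DecisionTreeClassifier(criterion="gini", max_depth=2).fit(X, y)
print(tree.predict([[40, 70]]))   # predicted class for a new person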
4. kNN (k- Nearest Neighbors)
K-nearest neighbors is a simple algorithm that stores all available cases and classifies new cases by a majority vote of their k nearest neighbours. It can be used for both classification and regression; in both cases, the input consists of the k closest training examples in the data set. The outcome depends on whether k-NN is used for classification or regression.
In k-NN classification, the outcome is a class membership. K is a positive integer, typically small. If k = 1, then the object is simply assigned to the class of its single nearest neighbour.
In k-NN regression, the outcome is the property value for the object, computed as the average of the values of its k nearest neighbours.
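A minimal sketch with invented 2-D points (again using scikit-learn, purely as an illustration) shows a new case being assigned by the majority vote of its k = 3 nearest neighbours.

from sklearn.neighbors import KNeighborsClassifier

X = [[1, 1], [1, 2], [2, 1], [6, 6], [7, 6], [6, 7]]   # two clusters of points
y = [0, 0, 0, 1, 1, 1]                                 # their class labels

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[2, 2]]))   # majority of the 3 nearest neighbours is class 0
print(knn.predict([[6, 5]]))   # majority of the 3 nearest neighbours is class 1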
5. Naive Bayes
This classification technique is based on Bayes' theorem with an assumption of independence between predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
The Naive Bayesian model is simple to build and specifically useful for very large data sets. Along with simplicity, Naive Bayes is known to outshine even highly sophisticated classification methods.
Bayes' theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x) and P(x|c): P(c|x) = P(x|c) * P(c) / P(x). In this equation:
- P(c|x) is the posterior probability of class (target) given predictor (attribute).
- P(c) is the prior probability of class.
- P(x|c) is the likelihood which is the probability of the predictor given class.
- P(x) is the prior probability of the predictor.
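As a rough illustration (not part of the original article), scikit-learn's GaussianNB applies this posterior calculation under the independence assumption; the numeric features below are made up.

import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[1.0, 2.1], [1.2, 1.9], [0.9, 2.2], [3.0, 3.9], [3.2, 4.1], [2.9, 4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

nb = GaussianNB().fit(X, y)
print(nb.predict([[1.1, 2.0]]))         # most probable class for the new point
print(nb.predict_proba([[1.1, 2.0]]))   # posterior probability P(c|x) for each class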
6. SVM (Support Vector Machine)
In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features), with the value of each feature being the value of a particular coordinate. The algorithm then finds the boundary that best separates the classes. It is a classification method.
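A minimal sketch of the idea with scikit-learn's SVC on invented 2-D points: the classifier learns a boundary that separates the two groups of points plotted in feature space.

from sklearn.svm import SVC

X = [[1, 1], [2, 1], [1, 2], [5, 5], [6, 5], [5, 6]]   # points in 2-dimensional feature space
y = [0, 0, 0, 1, 1, 1]                                 # two classes

svm = SVC(kernel="linear").fit(X, y)
print(svm.predict([[2, 2], [6, 6]]))   # new points fall on opposite sides of the boundary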
In today's world, where many manual tasks are being automated, the definition of "manual" keeps evolving. Machine learning algorithms enable computers to perform tasks such as playing chess or assisting in surgery in a smarter way. Choosing the right machine learning algorithm depends on several factors, including the size, quality, and diversity of the data, as well as what you want to derive from that data. Ultimately, choosing the right algorithm is a combination of business needs, specification, experimentation, and, of course, the time available.
The ocean has always been a huge and, for the most part, still unexplored world full of mysterious species, despite the thousands of studies that have discovered new species, learned how they live underwater, and examined their impact on the world's biodiversity. Comparatively, only a few studies have focused on marine microbes, the tiny organisms that live in water environments and can only be seen under a microscope.
Marine microbes are invisible to the naked eye, and these include bacteria, viruses, archaea, protists, and fungi. Previous studies showed that 90 percent of the weight of all living organisms in the ocean comes from microbes. However, just because they aren’t easily seen doesn’t mean they are less significant. These microorganisms are often the engines of ecosystems that otherwise would not have access to the food and nutrients they need. They also clean the oceans and defend ecosystems against harmful viruses.
Microbes are essential for a thriving ocean ecosystem. Without them, the world we know would not exist. Experts reported that the microbial world accounted for almost 50 to 90 percent of Earth’s history, with life itself likely beginning in the ocean. Not only do they account for the majority of ocean biomass and constitute a hidden majority of life that flourishes in the sea, but microbes are also the major primary producers in the ocean.
These marine microorganisms dictate much of the flow of marine energy and nutrients, influence our climate, and provide us with a source of medicines and natural products. They exist everywhere, helping to shape the features of our planet, past and present. For instance, bacteria and unicellular algae play a particularly important role in the general economy of our planet. Not only do they form the basis of the food chains, but they also recycle almost all the organic matter of the planet.
The Role of Rhodopsins in Regulating Climate
It has been established that almost all sunlight in the ocean is captured by chlorophyll in algae. But a recent study conducted by researchers from the University of Southern California discovered that rhodopsins, a sunshine-grabbing pigment, are the ones responsible for trapping the sunlight. This kind of marine microbes was discovered by scientists at USC about 20 years ago. They found out that rhodopsins have light-sensitive protein systems in their cell membranes that trap sunlight.
Science Daily, an American website that aggregates and publishes lightly edited press releases about science, reported that the researchers explored the eastern Atlantic Ocean and the Mediterranean Sea in 2014. They sampled microorganisms in the water column down to 200 meters, hoping to determine how widespread rhodopsins are and in what conditions they are favored. The team found that rhodopsins are more abundant than once thought and that they outperform algae at capturing light in oligotrophic zones.
The study published in Science Advances discovered that unlike algae that use sunlight and CO2 to produce organic material and oxygen, these microbes use light to make adenosine triphosphate, the basic energy currency that drives many cellular processes. Laura Gómez-Consarnau, assistant professor (research) of biology at the USC Dana and David Dornsife College of Letters, Arts, and Sciences, stated that the ocean will be more nutrient-poor due to the abundance of rhodopsins and the increasing global temperature.
"So, with fewer nutrients near the surface, algae will have limited photosynthesis, and the rhodopsin process will be more abundant. We may have a shift in the future, which means the ocean won't be able to absorb as much carbon as it does today. So more CO2 gas may remain in the atmosphere, and the planet may warm faster,” Gómez-Consarnau said.
Phytoplankton and Global Warming
A recent study by researchers from the Tara Oceans expedition, an international, interdisciplinary enterprise that collected 35,000 samples from all the world's oceans between 2009 and 2013, emphasized plankton as a major contributor to marine ecosystems in terms of biomass, abundance, and diversity. Unfortunately, planktonic species are being distributed unevenly across the ocean due to global warming.
Lucie Zinger, the co-senior author of the Institut de Biologie de l'Ecole Normale Superieure (IBENS) in Paris, stated that higher oceanic temperatures are likely to cause a "tropicalization" of the temperate and polar oceanic regions. It is expected that there will be an increased diversity of planktonic species with higher water temperatures. This could alter the associated ecosystems and have serious consequences worldwide.
"This analysis allowed us to study not only what ocean microbes are capable of doing, but also what they actually do at a global scale,” author Shinichi Sunagawa at the ETH Zürich, Switzerland said.
Phytoplankton, microscopic marine algae that provide food for a wide range of sea creatures, are also affected by global warming. A lot of people are unaware that they supply half the oxygen we breathe and they are a cornerstone of the ocean food web. According to The Conversation, an online site that offers informed commentary and debate on the issues affecting our world, phytoplankton use carbon dioxide to manufacture their own food.
However, global warming causes the ocean surface to become less dense. Phytoplankton in the warm top layer would starve without replenishment of the nutrient fertilizer from below. This leads to reduced primary production and a corresponding decrease in carbon pumping to the deep sea.
Marine microbes provide a helpful insight not only to prove how these microorganisms shape our ecosystems but also show the impacts of climate change on them. These studies provide an opportunity for researchers to learn more about ocean microscopic diversity and determine how they help in mitigating the impacts of climate change. |
To train a deep learning network, use trainNetwork.
This topic presents part of a typical multilayer shallow network workflow. For more information and other steps, see Multilayer Shallow Neural Networks and Backpropagation Training.
When the network weights and biases are initialized, the network is ready for training.
The multilayer feedforward network can be trained for function approximation (nonlinear regression) or pattern recognition. The training process requires a set of examples of proper network behavior: network inputs p and target outputs t.
The process of training a neural network involves tuning the values of the weights and biases of the network to optimize network performance, as defined by the network performance function net.performFcn. The default performance function for feedforward networks is mean square error (mse), the average squared error between the network outputs a and the target outputs t. It is defined as follows: mse = (1/N) * sum((t_i - a_i)^2), where the sum runs over the N training examples.
(Individual squared errors can also be weighted. See Train Neural Networks with Error Weights.) There are two different ways in which training can be implemented: incremental mode and batch mode. In incremental mode, the gradient is computed and the weights are updated after each input is applied to the network. In batch mode, all the inputs in the training set are applied to the network before the weights are updated. This topic describes batch mode training with the train command. Incremental training with the adapt command is discussed in Incremental Training with adapt. For most problems, when using the Deep Learning Toolbox™ software, batch training is significantly faster and produces smaller errors than incremental training.
For training multilayer feedforward networks, any standard numerical optimization algorithm can be used to optimize the performance function, but there are a few key ones that have shown excellent performance for neural network training. These optimization methods use either the gradient of the network performance with respect to the network weights, or the Jacobian of the network errors with respect to the weights.
The gradient and the Jacobian are calculated using a technique called the backpropagation algorithm, which involves performing computations backward through the network. The backpropagation computation is derived using the chain rule of calculus and is described in Chapters 11 (for the gradient) and 12 (for the Jacobian) of [HDB96].
As an illustration of how the training works, consider the simplest optimization algorithm, gradient descent. It updates the network weights and biases in the direction in which the performance function decreases most rapidly, the negative of the gradient. One iteration of this algorithm can be written as xk+1 = xk - αk*gk, where xk is a vector of current weights and biases, gk is the current gradient, and αk is the learning rate. This equation is iterated until the network converges.
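To make the update rule concrete, here is a small Python/NumPy sketch (purely illustrative; it is not the Toolbox's traingd implementation) that applies xk+1 = xk - αk*gk to fit a one-weight, one-bias linear model under mean squared error.

import numpy as np

p = np.array([1.0, 2.0, 3.0, 4.0])   # network inputs
t = 2.0 * p + 1.0                    # target outputs
x = np.array([0.0, 0.0])             # current weight and bias
alpha = 0.05                         # learning rate

for k in range(1000):
    a = x[0] * p + x[1]                                  # network outputs
    e = a - t                                            # errors
    g = np.array([np.mean(2 * e * p), np.mean(2 * e)])   # gradient of the mean squared error
    x = x - alpha * g                                    # gradient descent step

print(x)   # approaches [2.0, 1.0]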
A list of the training algorithms that are available in the Deep Learning Toolbox software and that use gradient- or Jacobian-based methods, is shown in the following table.
For a detailed description of several of these techniques, see also Hagan, M.T., H.B. Demuth, and M.H. Beale, Neural Network Design, Boston, MA: PWS Publishing, 1996, Chapters 11 and 12.
Scaled Conjugate Gradient (trainscg)
Conjugate Gradient with Powell/Beale Restarts (traincgb)
Fletcher-Powell Conjugate Gradient (traincgf)
Polak-Ribiére Conjugate Gradient (traincgp)
One Step Secant (trainoss)
Variable Learning Rate Gradient Descent (traingdx)
Gradient Descent with Momentum (traingdm)
The fastest training function is generally trainlm, and it is the default training function for feedforwardnet. The quasi-Newton method, trainbfg, is also quite fast. Both of these methods tend to be less efficient for large networks (with thousands of weights), since they require more memory and more computation time for these cases. Also, trainlm performs better on function fitting (nonlinear regression) problems than on pattern recognition problems.
When training large networks, and when training pattern recognition networks, trainscg and trainrp are good choices. Their memory requirements are relatively small, and yet they are much faster than standard gradient descent algorithms.
See Choose a Multilayer Neural Network Training Function for a full comparison of the performances of the training algorithms shown in the table above.
As a note on terminology, the term “backpropagation” is sometimes used to refer specifically to the gradient descent algorithm, when applied to neural network training. That terminology is not used here, since the process of computing the gradient and Jacobian by performing calculations backward through the network is applied in all of the training functions listed above. It is clearer to use the name of the specific optimization algorithm that is being used, rather than to use the term backpropagation alone.
Also, the multilayer network is sometimes referred to as a backpropagation network. However, the backpropagation technique that is used to compute gradients and Jacobians in a multilayer network can also be applied to many different network architectures. In fact, the gradients and Jacobians for any network that has differentiable transfer functions, weight functions and net input functions can be computed using the Deep Learning Toolbox software through a backpropagation process. You can even create your own custom networks and then train them using any of the training functions in the table above. The gradients and Jacobians will be automatically computed for you.
To illustrate the training process, execute the following commands:
load bodyfat_dataset
net = feedforwardnet(20);
[net,tr] = train(net,bodyfatInputs,bodyfatTargets);
Notice that you did not need to issue the configure command, because the configuration is done automatically by the train function. The training window will appear during training, as shown in the following figure. (If you do not want to have this window displayed during training, you can set the parameter net.trainParam.showWindow to false. If you want training information displayed in the command line, you can set the parameter net.trainParam.showCommandLine to true.)
This window shows that the data has been divided using the dividerand function, and the Levenberg-Marquardt (trainlm) training method has been used with the mean square error performance function. Recall that these are the default settings for feedforwardnet.
During training, the progress is constantly updated in the training window. Of most interest are the performance, the magnitude of the gradient of performance, and the number of validation checks. The magnitude of the gradient and the number of validation checks are used to terminate the training. The gradient will become very small as the training reaches a minimum of the performance. If the magnitude of the gradient is less than 1e-5, the training will stop. This limit can be adjusted by setting the parameter net.trainParam.min_grad. The number of validation checks represents the number of successive iterations that the validation performance fails to decrease. If this number reaches 6 (the default value), the training will stop. In this run, you can see that the training did stop because of the number of validation checks. You can change this criterion by setting the parameter net.trainParam.max_fail. (Note that your results may be different than those shown in the following figure, because of the random setting of the initial weights and biases.)
There are other criteria that can be used to stop network training. They are listed in the following table.
Minimum Gradient Magnitude (net.trainParam.min_grad)
Maximum Number of Validation Increases (net.trainParam.max_fail)
Maximum Training Time (net.trainParam.time)
Minimum Performance Value (net.trainParam.goal)
Maximum Number of Training Epochs (Iterations) (net.trainParam.epochs)
The training will also stop if you click the Stop Training button in the training window. You might want to do this if the performance function fails to decrease significantly over many iterations. It is always possible to continue the training by reissuing the train command shown above. It will continue to train the network from the completion of the previous run.
From the training window, you can access four plots: performance, training state, error histogram, and regression. The performance plot shows the value of the performance function versus the iteration number. It plots training, validation, and test performances. The training state plot shows the progress of other training variables, such as the gradient magnitude, the number of validation checks, etc. The error histogram plot shows the distribution of the network errors. The regression plot shows a regression between network outputs and network targets. You can use the histogram and regression plots to validate network performance, as is discussed in Analyze Shallow Neural Network Performance After Training.
After the network is trained and validated, the network object can be used to calculate the network response to any input. For example, if you want to find the network response to the fifth input vector in the body fat data set, you can use the following
a = net(bodyfatInputs(:,5))
a = 27.3740
If you try this command, your output might be different, depending on the state of your random number generator when the network was initialized. Below, the network object is called to calculate the outputs for a concurrent set of all the input vectors in the body fat data set. This is the batch mode form of simulation, in which all the input vectors are placed in one matrix. This is much more efficient than presenting the vectors one at a time.
a = net(bodyfatInputs);
Each time a neural network is trained, the result can be a different solution due to different initial weight and bias values and different divisions of data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain several times.
There are several other techniques for improving upon initial solutions if higher accuracy is desired. For more information, see Improve Shallow Neural Network Generalization and Avoid Overfitting. |
ADHD - Condition and Symptoms
Attention-Deficit Hyperactivity Disorder (ADHD) is a chronic neuropsychiatric condition that affects both children and adults characterized by an inability to focus attention and complete actions. ADHD symptoms range from moderate to extreme and often impact a person’s ability to complete educational goals, retain a job, and sustain interpersonal relationships. The coping mechanisms of individuals suffering from ADHD are easily overwhelmed, and their actions often seem chaotic and disorganized to others. Adults affected by ADHD often struggle with associated conditions such as depression, anxiety, and substance abuse.
Up to 60% of children diagnosed with ADHD continue to suffer from this condition as adults. Adult ADHD is also referred to as adult ADD or AADD. While many of the symptoms of ADHD can be exhibited by people not suffering from the condition, particularly during periods of fatigue or high stress, individuals affected by ADHD will have exhibited since childhood multiple combinations of symptoms so severe that they continually interfere with their lives.
Is ADHD A Disability?
ADHD is a disability in the United States under the Rehabilitation Act of 1973, section 504, and the Americans with Disabilities Act (ADA). For example, if ADHD is severe and interferes with a person's capacity to work or engage in the public sector, it is deemed a protected disability.
There are three types of ADHD: the inattentive type, the hyperactive-impulsive type, and a combined type. Type classification of the disorder simply indicates the preponderance of similar symptoms (more of a tendency toward the inability to pay attention, more of a tendency to restless behavior and mental activity, or a combination of both).
There is no specific test that diagnoses ADHD. Instead, the physician pieces together evidence of symptoms and determines how long these symptoms have been a limiting factor in a patient’s life. In general, to be diagnosed as having ADHD, symptoms must have been present since early childhood (prior to age 7) and must be severe enough to have interfered with at least two areas of an individual’s life.
ADHD always starts in childhood, although it may not have been diagnosed or treated. Often, a genetic link may be inferred if there is a family history of ADHD, ADHD-type symptoms, learning disabilities, mood disorders, or substance abuse.
In order to substantiate a diagnosis, a physician may supplement a patient’s medical and behavioral history with a neuropsychiatric evaluation, which may include WAIS, BADDS, and/or WURS tests. These tests are used to create some objective evidence of ADHD and to rule out other conditions such as depression, anxiety, and substance abuse. A physical evaluation may be used to rule out diseases such as hyperthyroidism which can result in symptoms similar to ADHD.
Individuals suffering from Attention-Deficit Hyperactivity Disorder will often have a history of frequent behavior problems, and reports from school and work situations often state that a person has not lived up to their potential. A common indicator of ADHD is a history of bedwetting past the age of 5. Other common symptoms of the condition include short attention span, inattention to detail, being easily distracted or bored, physical restlessness, inability to listen to and follow directions, anxiety, impulsive actions and speech, low tolerance for frustration, poor organizational skills, being easily overwhelmed by ordinary tasks, procrastination, inability to finish a task, a chronic sense of underachievement and poor self esteem, mood swings, trouble sustaining friendships or intimate relationships, a need for high stimulation (doing many things at once or thrill seeking), a tendency to worry needlessly and endlessly, poor writing and fine motor skills, poor coordination, performance anxiety, difficulty falling asleep and difficulty coming awake, low energy, and hypersensitivity to noise and touch.
The causes of ADHD are not known for certain. Among the possible causes are genetic factors, brain injury, prenatal smoking and alcohol use by the mother, exposure to high levels of lead, sugar, and food additives such as artificial colors and preservatives.
ADHD is treated with a combination of medication, behavior therapy, cognitive therapy, and skills training.
Filing for Social Security Disability with an ADHD Diagnosis
The fact that ADHD always begins in early childhood is important because while ADHD is listed by the Social Security Administration (SSA) under Section 112.11 of the Blue Book, Attention Deficit Hyperactivity Disorder, the listing applies to children. There is no similar section for adults. If you are able to prove that you have had ADHD since childhood, and if you can show that this condition has impaired your ability to do schoolwork as a child and to be gainfully employed as an adult, your condition may be considered severe enough to get disability benefits.
An ADHD diagnosis, in and of itself, is not enough to qualify for disability benefits. As a child, you must have had measurable functional impairments (which show up as recurring poor performance in school) and as an adult, you must have measurable functional impairments that keep you from working. You must also meet the requirements of both Paragraph A and Paragraph B below. (Although both paragraphs apply to childhood ADHD, it is advisable to be sure you meet the same requirements in order to be eligible for benefits as an adult.)
You must possess acceptable medical documentation which finds that you have all three of the following symptoms:
- Marked inattention; and
- Marked impulsiveness; and
- Marked hyperactivity.
You must possess acceptable supporting documentation that shows you have at least two of the three following conditions, resulting from ADHD:
- Marked impairment in age-appropriate cognitive/communication function; and/or
- Marked impairment in age-appropriate social functioning; and/or
- Marked impairment in age-appropriate personal functioning.
Acceptable documentation means medical findings (a physician’s or psychiatrist’s treatment notes), historical information (discussed above), and standardized test results (IQ, achievement, etc.).
Because determination of an ADHD diagnosis is quite subjective, it can be difficult to win disability benefits based solely on this condition. The determination of disability relies to a great extent on the opinions of those who have contributed to your historical documentation, such as teachers and employers. As individual opinions, based on personal observations made long before the time of the disability case review, can often vary greatly and are always open to interpretation, they provide a far weaker foundation for an SSDI claim than objective physical and medical evidence.
Your ADHD Disability Case
If you are disabled because of severe ADHD symptoms that prevent you from working, and if you have sufficient supporting documentation, you may well be entitled to Social Security Disability (SSDI) benefits. Although total disability based on an ADHD diagnosis can be difficult to prove compared to other disabling conditions, working closely with medical professionals and a qualified Social Security Disability attorney or advocate to collect and present the appropriate documentation to support your claim in front of the Disability Determination Services (DDS) can help to ensure that your ADHD disability case will have the highest possible chance of success. |
Grade 5 | Science | Global warming and natural calamities, Environment, pollution and calamities, Olympiad, CBSE, ICSE, SOF, ITO
Global warming is a phenomenon that is causing a natural imbalance due to the increase in greenhouse gases.
The primary greenhouse gases in Earth's atmosphere are water vapor, carbon dioxide, methane, nitrous oxide, and ozone.
This is causing the temperature of the earth to rise. Glaciers are melting and sea level is rising.
Global warming is resulting in climate changes and unpredictable climate conditions like droughts and floods.
Causes of global warming:
A) Smoke emissions from cars and industries
C) Burning of fossil fuels
Ways to fight global warming:
A) We should use natural sources of energy like solar energy, wind energy and tidal energy as much as possible.
B) We should plant more trees.
C) We should use geothermal energy.
A flood is a common natural calamity. It occurs when an area of land gets submerged in water due to the accumulation of water or a heavy inflow of water. The strong force of a flood can carry away anything on the affected land: humans, animals, vehicles such as cars, and even trees.
Floods are caused by:
A) Heavy rains
B) Tropical storms and cyclones
E) High tides
People in flood-prone areas build houses on elevated platforms. These houses are known as stilt houses.
Flood water exerts a high force, and a strong water current can drag anything away, so even if you are a trained swimmer you should not go into flood water.
Drought is caused by scanty or no rainfall for a long duration.
Frequent droughts can result in famine. We should use rainwater wisely.
In India, we depend on Monsoon rains.
Poor monsoon rains can result in drought.
Earthquake is a natural calamity. It causes the land to shake. Its intensity is measured on the Richter scale. Earthquakes of high intensity can cause severe damage. Earthquakes around oceans and seas can cause tsunamis.
Tsunamis are large ocean waves caused by shaking of ocean floor due to an underwater earthquake or a volcanic explosion.
Tsunamis are different from tidal waves.
Tidal waves are not as large as tsunamis.
Tidal waves are caused by the gravitational pull of the moon, sun, and planets upon the tides.
Landslide is the movement of a mass of earth or rocks down the slope from a mountain or cliff.
A landslide can occur because of :
1) Heavy rain |
Lab 7: Transcription and Translation
• To build a DNA molecule
• To simulate DNA replication
• To simulate transcription by building an mRNA molecule from DNA
• To simulate translation by building a polypeptide chain from the mRNA transcript
• Pen and paper or computer.
Description of the problem
You will build a DNA and RNA molecule and simulate the processes of DNA replication, transcription, and translation. You may find it helpful to refer to the figures in Ch. 22 from your book.
Building a DNA molecule
DNA is composed of monomers of nucleotides. Each nucleotide contains a 5-carbon sugar (= deoxyribose), a phosphate, and a nitrogenous base (adenine, cytosine, guanine, or thymine). Since DNA is a double-stranded molecule, adenine (A) on one strand always pairs with thymine (T) on the other strand, whereas cytosine (C) always pairs with guanine (G).
- What molecules make up the backbone of DNA? What four molecules make up the “rungs” of the DNA “ladder” (be specific)? Build a DNA molecule that is 6 base pairs long by choosing 6 bases for one strand and the complementary bases for the second strand.
- For the molecule you just built, write the sequence of bases (A, C, G, T) you used on one strand and the complementary bases on the other strand.
Simulating DNA replication
DNA undergoes semi-conservative replication.
Before this begins, DNA must first “unzip” (the picture at right is a simulation of this occurring).
- What bonds are broken in order to unzip the DNA?
Unzip your DNA molecule. Attach new complementary nucleotides to each strand using complementary base pairing.
- What is meant by semi-conservative replication?
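As an optional illustration of complementary base pairing (not part of the original lab handout), a short Python sketch can generate the complementary strand for any sequence you build; the 6-base strand used here is just an example:

# DNA base-pairing rules: A pairs with T, C pairs with G
DNA_COMPLEMENT = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

def complement_strand(strand):
    """Return the complementary DNA strand for a given sequence."""
    return ''.join(DNA_COMPLEMENT[base] for base in strand)

strand_1 = 'ATGCCA'                     # example 6-base strand
strand_2 = complement_strand(strand_1)  # 'TACGGT'
print(strand_1)
print(strand_2)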
Like DNA, RNA is composed of monomers of nucleotides. Each nucleotide contains a 5-carbon sugar (= ribose), a phosphate, and a nitrogenous base. Unlike DNA, RNA is a single-stranded molecule.
- What are the 4 bases in RNA? DNA serves as a template to make mRNA. Adenine (A) in DNA always pairs with what base in mRNA?
One strand of DNA serves as the template to make mRNA in the process known as transcription.
Transcription is the first step in expressing a gene.
- Extend your original DNA molecule by making it 12 (instead of 6) base pairs long. What is the base sequence of your two DNA strands (they should be complementary with one another)?
- Choose one strand of DNA from #6 to serve as the template to make mRNA. What is the sequence of bases in your mRNA transcript?
Once mRNA is made, it will eventually leave the nucleus and go to the ribosomes in the cytoplasm.
A sequence of three bases in mRNA codes for a particular amino acid. The process of converting the sequence of bases in RNA into a sequence of amino acids in a protein is called translation. Translation occurs on the ribosomes in the cytoplasm of the cell.
- What is the name of each three-base sequence in mRNA?
In cells, tRNA with an anticodon complementary to the codon in mRNA will transport its attached amino acid to the ribosome. The next mRNA codon is “read” by the ribosome, whereby another tRNA with the correct anticodon and its associated amino acid will attach. The two amino acids are joined together with a peptide bond. This process is repeated until the last codon in the mRNA is read. The end result will be a polypeptide chain composed of many amino acids.
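As another optional illustration (not part of the original lab), transcription and the first step of translation can be simulated in Python; the template strand below and the deliberately partial codon table are examples only (a real codon table has 64 entries):

# Transcription: DNA template base -> mRNA base (note that U replaces T in RNA)
TRANSCRIBE = {'A': 'U', 'T': 'A', 'C': 'G', 'G': 'C'}

# Partial codon table, for illustration only
CODON_TABLE = {'AUG': 'Met', 'UUU': 'Phe', 'GGC': 'Gly', 'GCA': 'Ala', 'UAA': 'STOP'}

template_dna = 'TACAAACCG'                                  # example 9-base template strand
mrna = ''.join(TRANSCRIBE[base] for base in template_dna)   # 'AUGUUUGGC'
codons = [mrna[i:i+3] for i in range(0, len(mrna), 3)]      # ['AUG', 'UUU', 'GGC']
amino_acids = [CODON_TABLE.get(codon, '?') for codon in codons]
print(amino_acids)                                          # ['Met', 'Phe', 'Gly']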
- Using the mRNA transcript you generated in #7, what are the first three amino acids of your polypeptide chain? (See Fig. 22.6 in your book.)
- What happens to the mRNA transcript after translation is complete?
ALL QUESTIONS IN THE BOXES MUST BE ANSWERED. Please include the questions
with your responses and upload the assignment to the appropriate assignment submission folder. |
Chicken pox is a highly contagious skin infection. It is caused by a virus called varicella-zoster, a member of the herpes family of viruses.
It mostly infects young children under 12, but it may also infect adults who have not been exposed to the virus or vaccinated against it. Infection is very common; most individuals catch the chickenpox virus at some point.
It is transmitted through the fluid from the blisters formed due to chicken pox, or when the infected person coughs or sneezes.
A person with chickenpox becomes contagious 1 to 2 days before their blisters appear. They remain contagious until all the blisters have dried.
The disease is usually mild, although serious complications sometimes occur. Adults and older children usually get sicker than younger children.
Children whose mothers have had chickenpox or have received the chickenpox vaccine are not very likely to catch it before they are 1 year old. If they do catch chickenpox, they often have mild cases. This is because antibodies from their mothers’ blood help protect them. Children under 1 year old whose mothers have not had chickenpox or the vaccine can get severe chickenpox.
Severe chickenpox symptoms are more common in children whose immune system does not work well because of an illness or medicines such as chemotherapy and steroids.
Symptoms of Chicken Pox
Most children with chickenpox have the following symptoms before the rash appears:
- Stomach ache
- Chickenpox rash occurs about 10 to 21 days after the exposure to the virus. The average child develops 250 to 500 small, itchy, fluid-filled blisters over red spots on the skin.
- The blisters are usually first seen on the face, middle of the body, or scalp.
- After a day or two, the blisters become cloudy and then scab. Meanwhile, new blisters form in groups. They often appear in the mouth, in the vagina, and on the eyelids.
- Children with skin problems, such as eczema, may get thousands of blisters.
Exams and Tests for Chicken Pox
Chickenpox is usually diagnosed by looking at the rash and asking questions about the person’s medical history. Small blisters on the scalp usually confirm the diagnosis.
Chicken Pox Treatment
- Avoid scratching or rubbing the itchy areas. Keep fingernails short to avoid damaging the skin from scratching.
- Wear cool, light, loose bedclothes. Avoid wearing rough clothing, particularly wool, over an itchy area.
- Take lukewarm baths using little soap and rinse thoroughly.
- Apply a soothing moisturizer after bathing to soften and cool the skin.
- Avoid prolonged exposure to excessive heat and humidity.
- Use skin care medicated creams as advised by your doctor.
- The individual should be isolated from other children so that the infection does not spread to them.
Complications of Chicken Pox
Rarely, serious infections such as encephalitis have occurred. Other complications may include:
- Reye’s syndrome
- Transient arthritis
Chicken Pox Prevention
Because chickenpox is airborne and very contagious before the rash even appears, it is difficult to avoid.
A vaccine to prevent chickenpox is part of a child’s routine immunization schedule. The vaccine usually prevents the chickenpox disease completely or makes the illness very mild.
Talk to your doctor if you think your child might be at high risk for complications and might have been exposed. Immediate preventive measures may be important. Giving the vaccine early after exposure may still reduce the severity of the disease.
The above are the details about Chicken Pox Infection and Treatment |
Nagoya delegates need to plan how the world achieves food security, before ecosystems reach critical tipping points.
This piece originally appeared on the Guardian website.
Governments from around the world will arrive in Nagoya, Japan next week for the high level ministerial segment of the Convention on Biological Diversity (CBD) meeting. Their task is daunting. Even the modest target set in 2002 of reducing the rate of biodiversity loss by 2010 has proved beyond reach using current strategies. But rather than wringing their hands over the tide of species losses that has swept the planet, delegates should turn their attention to the root cause of the problem: the ways in which we meet our need for food.
What does food supply have to do with conserving species? Everything. It is a leading factor in the five principal pressures causing biodiversity loss: habitat change, overexploitation, invasive species, pollution, and climate change (see Box).
Food Production: Key Culprit in Biodiversity Loss
Habitat conversion: Approximately 43% of tropical and subtropical forests and 45% of temperate forests have been converted to croplands.
Over exploitation: 70% of global freshwater use is by agriculture.
Invasive species: The introduction of aquatic alien fish species has led to the extinction of native species in many parts of the world.
Pollution: Only a fraction of nitrogen applied as a fertilizer is typically used by plants, the rest ends up in inland waters and coastal systems, creating eutrophication and dead zones.
Climate Change: Agriculture directly contributed to around 14% of global greenhouse gas emissions in 2005 and drives additional emissions through its role in deforestation.
Ironically, while producing food relies on harvesting nature’s bounty, food production often degrades the very ecosystems it depends on. The Brazilian Amazon, for example, provides critical water and climate regulation services that the region's agricultural sector depends upon for its survival. Yet one fifth of the Brazilian Amazon has been deforested, primarily by farmers and ranchers.
The Policymaker's Paradox
Delegates at the conference face a paradox. Dramatic increases in food production over the past 50 years have supported significant improvements in human wellbeing, but at the same time have diminished Earth's diversity and capacity to provide ecosystem services (including fish, food, freshwater, pollination, and water regulation). Scientists worry that this results from a time lag between the degradation of ecosystems and the resulting effects on human well-being.
The Amazon, for example, could reach a tipping point due to deforestation beyond which it experiences widespread dieback and transitions into savanna-like vegetation. The reductions in rainfall would devastate efforts to raise crops and cattle in the region.
Upping the challenge, population growth and rising per capita incomes are expected to double the demand for food in the next 40 years, according to the UN's food and agriculture chief, Jacques Diouf. To devise a successful new strategy to preserve the diversity of life on Earth, the CBD needs to take a quantum leap in its partnership with food producers, to change how the world achieves food security, before ecosystems reach critical tipping points in the face of ever growing demands for food and climate change.
Implications for the 2020 Global Biodiversity Strategy
The new 2020 global biodiversity strategy under discussion at Nagoya must focus first and foremost on reducing the pressure of food production on biodiversity and ecosystems. This will also help maintain the resilience of ecosystems and prevent dangerous tipping points from occurring. To achieve this food security experts need to work alongside agriculturists and biologists to maximize use of existing land for food and minimize further ecosystem loss. Three key strategies can help meet this goal:
Restore degraded lands
Globally, over one billion hectares of land is believed to have restoration potential. Restoring even a small part of this for food production would help reduce pressure on natural ecosystems. In Indonesia, for example, the World Resources Institute is seeking to develop a scalable model for diverting new oil palm plantations that would otherwise replace virgin forests on to degraded land. Similar opportunities exist to divert the expansion of cattle ranches from the Amazon’s forests to degraded lands.
Increase productivity on existing farmland
While intensification doesn't immediately come to mind when thinking about conservation, it is nevertheless a key strategy to reduce stress on natural ecosystems. The challenge is to find ways to get more food out of land without the unwanted consequences such as ecosystem services trade-offs that have dogged current intensive production systems. We need to deploy proven technologies that use ecosystem services much more efficiently such as new varieties of seeds, drip irrigation, integrated pest management and conservation agriculture. At the same time, we must make major investments in further innovation and a new generation of technologies.
Manage demand for food
Opportunities for managing demand for food include promoting the use of vegetable protein over meat, reducing food waste - estimated to be around 40% of food produced in the United States - and advancing certification programs and other types of incentives for sustainable food production. Fairtrade is paying Afghan farmers, for example, almost double the going rate for providing raisins that meet environmental criteria such as the sustainable use of water – and making a viable business of it.
The proposed 2011-2020 strategic plan that ministers will be discussing in Nagoya does include some targets to address the destructive impacts of food production such as reducing pollution from nutrient runoff and promoting sustainable farm management. But a much greater and more holistic effort is needed. Too much of the strategy takes a “remove-the-impacts” approach, a sure recipe for repeating the disappointment of not meeting the 2010 targets to reduce biodiversity loss.
If, by 2050, the world celebrates success in providing food security and in navigating ecological tipping points, it will be because of the ingenuity of farmers and conservationists, agricultural experts and ecologists in finding ways of learning and acting together. COP10 delegates can play a role in stimulating that action by ensuring the new 2020 targets tackle the hungry elephant in the conservation room - how to double food production while protecting ecosystems.
Frances Irwin is a former Fellow in the Institutions and Governance Program at the World Resources Institute. |
Print out this pretty Year of the Pig jigsaw for the kids to enjoy! All you have to do is slice along the vertical lines. Then the kids can use the numbers to help them put it back together.
Year Of The Pig Counting Jigsaw
Learn ordinal numbers with the help of this lovely Year of the Pig jigsaw. Choose from colour or black and white. If choosing the black and white version, ask the kids to colour it in first. Now slice up the jigsaw, including the numbers.
Here's a fun activity to keep the kids busy and test counting skills. Print out the colour version of our pig counting jigsaw onto some card, or colour-in our black and white version. Cut into strips and ask the children to complete the jigsaw using the numbers 1-10 along the bottom. |
In this post, you will learn about K-Means clustering concepts with the help of fitting a K-Means model using Python Sklearn KMeans clustering implementation. Before getting into details, let’s briefly understand the concept of clustering.
Clustering represents a set of unsupervised machine learning algorithms belonging to different categories such as prototype-based clustering, hierarchical clustering, density-based clustering etc. K-means is one of the most popular clustering algorithms and belongs to the prototype-based clustering category. The idea is to create K clusters of data where data in each of the K clusters have greater similarity with other data in the same cluster. The different clustering algorithms set out rules based on how the data needs to be clustered together. Here is a diagram representing creation of clusters using K-means algorithms.
In the above diagram, pay attention to some of the following:
- There are three different clusters represented by green, orange and blue color.
- Each cluster is created around a central point called as cluster centroid or cluster center.
The following topics are covered in this post:
- What is K-Means Clustering?
- K-Means clustering Python example
What is K-Means Clustering?
K-means clustering algorithm partitions data into K clusters (hence the name, K-means). K-means algorithm belongs to the category, prototype-based clustering. Prototype-based clustering algorithms are based on one of the following:
- Centroid-based clusters: Each cluster built around a point which is termed as the centroid (average) of similar points with continuous features. K-means algorithm results in creation of centroid-based clusters.
- Medoid-based clusters: Each cluster built around a point which is termed as the medoid which represents the point that minimises the distance to all other points that belong to a particular cluster, in the case of categorical features.
Here are some of the points covered in relation to K-means clustering:
- What are key steps of K-means clustering algorithm?
- What is the objective function in K-means which get optimised?
- What are the key features of K-means clustering algorithm?
- How to find the most optimal value of K?
What are key steps of K-Means clustering algorithm?
The following represents the key steps of the K-means clustering algorithm (a minimal from-scratch sketch follows the list):
- Define number of clusters, K, which need to be found out. Randomly select K cluster data points (cluster centers) or cluster centroids. The goal is to optimise the position of the K centroids.
- For each observation, find out the Euclidean distance between the observation and all the K cluster centers. Of all distances, find the nearest distance between the observation and one of the K cluster centroids (cluster centers) and assign the observation to that cluster.
- Move the K-centroids to the center of the points assigned to it.
- Repeat the above two steps until there is no change in the cluster centroids or maximum number of iterations or user-defined tolerance is reached.
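To make these steps concrete, here is a minimal from-scratch sketch in Python/NumPy. It is an illustration only, not the Sklearn implementation used later in this post; the function name and defaults are chosen for the example, and empty clusters are not handled.

import numpy as np

def kmeans(X, k, max_iter=100, tol=1e-4, seed=0):
    """Minimal K-means sketch: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    # Step 1: randomly pick k observations as the initial cluster centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 2: assign each observation to its nearest centroid (Euclidean distance)
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 3: move each centroid to the mean of the points assigned to it
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: stop once the centroids no longer move (within the tolerance)
        if np.allclose(new_centroids, centroids, atol=tol):
            break
        centroids = new_centroids
    return centroids, labels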
What is the objective function in K-means which get optimized?
K-means clustering algorithm is an optimization problem where the goal is to minimise the within-cluster sum of squared errors (SSE). At times, SSE is also termed as cluster inertia. SSE is the sum of the squared differences between each observation and the cluster centroid. At each stage of cluster analysis the total SSE is minimised with SSEtotal = SSE1 + SSE2 + SSE3 + SSE4 …. + SSEn.
The below represents the objective function which needs to be minimized:
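A common way to write the within-cluster SSE (cluster inertia) objective that K-means minimises is:

$$SSE = \sum_{i=1}^{n} \sum_{j=1}^{K} w^{(i,j)} \, \lVert x^{(i)} - \mu^{(j)} \rVert_2^2$$

where $\mu^{(j)}$ is the centroid of cluster $j$, and $w^{(i,j)} = 1$ if observation $x^{(i)}$ is assigned to cluster $j$ and 0 otherwise.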
What are key features of K-means algorithm?
The following are some of the key features of K-means clustering algorithm:
- One needs to define the number of clusters (K) beforehand. This is unlike other clustering algorithms related to hierarchical clustering or density-based clustering algorithms. The need to define the number of clusters, K, a priori can be considered as a disadvantage because for the real-world applications, it may not always be evident as to how many clusters can the data be partitioned into.
- K-means clusters do not overlap and are not hierarchical.
How to find most optimal value of K?
The technique used to find the most optimal value of K is to draw a reduction in variation vs number of clusters (K) plot. Alternatively, one could draw the squared sum of error (SSE) vs number of clusters (K) plot. Here is the diagram representing the plot of SSE vs K (no. of clusters). In the diagram below, the point representing the optimal number of clusters can also be called the elbow point. The elbow point can be seen as the point after which the distortion/cluster inertia/SSE starts decreasing in a linear fashion.
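As a hedged illustration (the exact code is not given in the original text), the SSE-vs-K curve can be produced with Sklearn's inertia_ attribute; the variable names here are made up:

import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.cluster import KMeans

X = datasets.load_iris().data
k_values = range(1, 11)
sse = []
for k in k_values:
    km = KMeans(n_clusters=k, init='k-means++', n_init=10, random_state=0).fit(X)
    sse.append(km.inertia_)   # within-cluster sum of squared errors for this K

plt.plot(k_values, sse, marker='o')
plt.xlabel('Number of clusters (K)')
plt.ylabel('SSE (cluster inertia)')
plt.show()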
K-Means Clustering Python Example
In this section, we will see how to create K-Means clusters using Sklearn IRIS dataset.
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.cluster import KMeans
#
# Load Sklearn IRIS dataset
#
iris = datasets.load_iris()
X = iris.data
y = iris.target
#
# Do the scatter plot and see that clusters are evident
#
plt.scatter(X[:,1], X[:,3], color='white', marker='o', edgecolor='red', s=50)
plt.grid()
plt.tight_layout()
plt.show()
Here is how the plot would look like:
Now, let's fit a K-Means cluster model. Pay attention to some of the following in relation to instantiation of K-means:
- Number of clusters defined upfront via n_clusters = 3
- init (default as k-means++): Represents method for initialisation. The default value of k-means++ represents the selection of the initial cluster centers (centroids) in a smart manner (place the initial centroids far away from each other ) to speed up the convergence. The other values of init can be random, which represents the selection of n_clusters observations at random from data for the initial centroids.
- n_init (default as 10): Represents the number of time the k-means algorithm will be run independently, with different random centroids in order to choose the final model as the one with the lowest SSE.
- max_iter (default as 300): Represents the maximum number of iterations for each run. The iteration stops after the maximum number of iterations is reached even if the convergence criterion is not satisfied. This number must be between 1 and 999. In this paper (Scalable K-Means by ranked retrieval), the authors stated that K-means converges after 20-50 iterations in all practical situations, even on high dimensional datasets as they tested.
- tol (default as 1e-04): Tolerance value used for the convergence check. The algorithm keeps iterating until the change between consecutive iterations falls below this value, at which point the algorithm is considered to have converged.
#
# Create an instance of K-Means
#
kmc = KMeans(n_clusters=3, init='random', n_init=10, max_iter=300, tol=1e-04, random_state=0)
#
# Fit and make predictions
#
y_kmc = kmc.fit_predict(X)
#
# Create the K-means cluster plot
#
plt.scatter(X[y_kmc == 0, 1], X[y_kmc == 0, 3],
            s=50, c='lightgreen', marker='s', edgecolor='black', label='Cluster 1')
plt.scatter(X[y_kmc == 1, 1], X[y_kmc == 1, 3],
            s=50, c='orange', marker='o', edgecolor='black', label='Cluster 2')
plt.scatter(X[y_kmc == 2, 1], X[y_kmc == 2, 3],
            s=50, c='blue', marker='P', edgecolor='black', label='Cluster 3')
plt.scatter(kmc.cluster_centers_[:, 1], kmc.cluster_centers_[:, 3],
            s=250, marker='*', c='red', edgecolor='black', label='Centroids')
plt.legend(scatterpoints=1)
plt.grid()
plt.tight_layout()
plt.show()
Here is how the K-means clusters will look like drawn using Matplotlib.pyplot. Pay attention to some of the following in the plot given below:
- There are three clusters represented using green, orange and blue colors.
- Red stars represent Centroids and three clusters created are centered around these star points.
Here is a great tutorial video on K-means clustering by StatQuest youtube channel.
Here is the summary of what you learned about K-Means clustering in general:
- There are different categories of clustering such as prototype-based clustering, hierarchical clustering, density-based clustering
- K-means clustering belongs to prototype-based clustering
- K-means clustering algorithm results in creation of clusters around centroid (average) of similar points with continuous features.
- K-means is part of sklearn.cluster package.
- K-means requires that one defines the number of clusters (K) beforehand.
- K-means clusters do not overlap and are not hierarchical.
- The objective function of K-means is the within-cluster sum of squared errors (SSE). SSE is the sum of the squared differences between each observation and its cluster centroid.
- The optimal number of clusters, K, can be found by drawing sum of squared errors vs number of clusters point. |
The U.S. Fish and Wildlife Service removed the eastern cougar subspecies, scientifically known as Puma concolor cougar, from the endangered species list and officially declared it extinct in the U.S. on January 22. The eastern cougar, also referred to as eastern puma, had not been seen for 80 years.
This animal had previously been seen roaming the forests, mountains, and grasslands east of the Mississippi. However, it has not been sighted in the last eight decades. It is theorized that the last eastern cougar was shot by hunters in Maine in 1938.
The status of the eastern cougar was reviewed in 2011. In 2015, the U.S. Fish and Wildlife Service found that there was no evidence, such as photos and DNA, of the existence of this subspecies, so they declared it extinct, and its removal from the endangered species list will be officially effective on February 22, according to Newsweek.
These big cats are described as mysterious. They often travel alone, usually at night. Although they were difficult to find, they were still susceptible to human hunting. It is thought that hunting and trapping triggered the extinction of the eastern cougar.
The big cats were perceived as a threat to people, livestock, and pets. They are often blamed for annihilating livestock. As a result, people hunted and killed them as pest control, particularly in the 1700s and 1800s. They were also trapped and killed for their fur in the 1800s.
However, experts say that the eastern cougars play a role in the ecosystem. Michael Robinson, a conservation advocate at the Center for Biological Diversity, said that large carnivores such as cougars could keep wild food web healthy. He further said that these cougars would curb deer overpopulation and tick-borne diseases that threaten human health.
This means that they could reduce the population of ticks by hunting and killing deer. Additionally, they could also save lives by lessening deer-car collisions. It is projected that if these cougars were reintroduced in the U.S, deer-car collisions could be reduced by 22 percent. This could save 115 lives and prevent more than 21,000 accidents, according to IFL Science. |
Math Werkz Area and Perimeter 1 is a fun, interactive courseware that teaches kids about area and perimeter.
It includes word problems and formulae which can serve as a guide in teaching the kids on how to calculate the area and perimeter of squares, rectangles, and other shapes. Among the activities included are counting and naming boundary lines, and finding the perimeter of given figures and squares. Teachers can also use this as additional exercises or classroom-based activities for their students to work on.
Math Werkz Area and Perimeter 1 is one of four Math Werkz Area and Perimeter interactive eBooks.
Math Werkz Series is a Primary Mathematics Teacher Resource consisting of 90+ activity-based worksheets. Aligned with the Common Core State Standards Initiative (CCSI) for third-graders, these worksheets have teacher-enabled tools that can be used to teach and engage students using any available devices like tablets, iPads, smartphones, or classroom devices like whiteboards, SMART Boards or PCs.
Overall, there is a total of 98 Math Werkz interactive eBooks available, with topics that include Factors & Multiples, Fraction, Money, Time, Division, Geometry, Measurement, Graphs and Charts, Multiplication, Percentage, Decimal, Area and Perimeter, Assessment Book, Addition, Subtraction and Number System.
Algebraic linear equations are mathematical functions that, when graphed on a Cartesian coordinate plane, produce x and y values in the pattern of a straight line. The standard form of the linear equation can be derived from the graph or from given values. Linear equations are fundamental to algebra, and thus fundamental to all higher mathematics.
Factor negative signs into the linear equation carefully. If b = -8 and m = 5, the algebraic linear equation would be written y = 5x + (- 8), or simplified, y = 5x - 8.
When in doubt, check your work.
Note that the standard form of a linear equation is:
y = mx + b
Where m = slope and b = y-intercept.
Calculate the slope of the line. The slope can be found by selecting two points on the line, determining the vertical rise and the horizontal run between the points, and dividing the rise by the run. For example, if (3,4) and (5,6) are on the line, the slope between them would be (6 - 4) / (5 - 3), simplified to (2) / (2), simplified to 1. Include negative values, since slopes can be positive or negative.
Determine or calculate the y-intercept of the line. The y-intercept is the y-coordinate of the point where the line passes through the y-axis of the coordinate plane. For example, if the point of intersection with the y-axis is (0,5), the y-intercept would be 5. The y-intercept can be found by physically locating it on the graph or by locating the given point on the line that has an x-coordinate of 0. That point is the point of intersection. The y-intercept will be positive if it intersects the y-axis above the x-axis or negative if it intersects below the x-axis.
Write the equation y = mx + b, substituting the values for m and b you calculated or determined. The m will be your slope, and the b will be your y-intercept. Leave the y and x variables in the equation as letter variables. Include the sign of the numbers you plug in. For example, if I discovered my slope to be -3 and my y-intercept to be 5, my linear equation would be y = -3x + 5. The linear equation is complete and correctly written when the (m) and (b) are properly incorporated into the equation.
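As an optional illustration (not part of the original article), the same calculation can be scripted in Python; the helper function below is hypothetical and assumes the two points have different x-coordinates:

def linear_equation_from_points(p1, p2):
    """Return (m, b) for the line y = mx + b passing through points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # slope = rise / run
    b = y1 - m * x1             # y-intercept: solve y1 = m*x1 + b for b
    return m, b

# The points (3,4) and (5,6) from the slope example give y = 1.0x + 1.0
m, b = linear_equation_from_points((3, 4), (5, 6))
print(f"y = {m}x + {b}")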
Marooned on a desert island with a limited food supply, children investigate the properties and make-up of common foods. Investigations of water content, fat content and the roles of starch and gluten in flour all contribute to the overall question of what constitutes a "balanced" diet. Instructional material.
Explore It! is a series of exploratory science experiences for out-of-school programs and elementary school students (ages 8-12), funded by the National Science Foundation and developed by Education Development Center, Inc. Explore It! includes a guide for each project and a special guide for implementing the program. These extended explorations provide an experiential foundation for science concepts associated with basic phenomena and helps develops essential skills for inquiry in the formal context.
Instructional materials only. |
One of the most common definitions of pain comes from the International Association for the Study of Pain (IASP), which describes pain as “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.” When we are exposed to something that causes pain, if able, we quickly or reflexively withdraw. The sensory feeling of pain is called nociception.
In the absence of an objective measure, pain is a subjective individual experience. How a person responds to pain is related to his or her genetic features and cognitive, motivational, emotional, and psychological state. Pain response is also related to gender, experiences and memories of pain, cultural and social influences, and general health (Sessle, 2012).
Factors Influencing the Experience of Pain
Acute pain comes on quickly and can be severe, but lasts a relatively short time (IOM, 2011). It can be intensely uncomfortable. It usually has a well-defined location and an identifiable painful or noxious stimulus from an injury, brief disease process, surgical procedure, or dysfunction of muscle or viscera. Acute pain alerts us to possible injury, inflammation, or disease and can arise from somatic or visceral structures.
Acute pain is often successfully treated with patient education, mild pain medications, environmental changes, and stress reduction, physical therapy, chiropractic, massage therapy, acupuncture, or active movement programs. Acute pain is usually easier to treat than chronic pain.
The Institute of Medicine (IOM) has targeted improved treatment of acute pain as an area of significant healthcare savings. The IOM states that better treatment of acute pain, through education about self-management and better clinical treatment, can avoid its progression to chronic pain, which is more difficult and more expensive to treat (IOM, 2011).
Chronic pain generally refers to pain that exists for three or more months and does not resolve with treatment. The three-month time frame is not absolute and some conditions may become chronic in as little as a month.
Chronic pain is common; it affects 1 in 5 adults, is more prevalent among women and elders, and is associated with physically demanding work and lower level of education (King & Fraser, 2013). Chronic pain is a silent epidemic that reduces quality of life, negatively impacts relationships and jobs, and increases rates of depression (Sessle, 2012).
Chronic pain is a symptom of many diseases. Up to 70% of cancer patients suffer from chronic pain and, among individuals living with HIV/AIDS, pain has been reported at all stages of infection (Lohman et al., 2010).
Chronic pain is also costly. A 2011 IOM report places this cost at more than $500 billion per year in the United States, creating an economic burden that is higher than the healthcare costs for heart disease, cancer, and diabetes combined. These economic costs stem from the cost of healthcare services, insurance, welfare benefits, lost productivity, and lost tax revenues (Sessle, 2012).
Chronic pain is a multidimensional process that must be considered as a chronic degenerative disease not only affecting sensory and emotional processing, but also producing an altered brain state (Borsook et al., 2007). Chronic pain persists over time and is resistant to treatments and medications that may be effective in the treatment of acute pain.
Aspects of Chronic Pain
When pain becomes chronic, sensory pathways continue to transmit the sensation of pain even though the underlying condition or injury that originally caused the pain has healed. In such situations, the pain itself may need to be managed separately from the underlying condition. Other aspects of chronic pain include:
- Chronic pain may express itself as a consequence of other conditions. For example, chronic pain may arise after the onset of depression, even in patients without a prior pain history.
- Chronic pain patients are often defined as “difficult patients” in that they often have neuropsychologic changes that include changes in affect and motivation or changes in cognition, all of which rarely predate their pain condition.
- In some conditions such as complex regional pain syndrome (CRPS), manifestations of dysautonomia, movement disorders, and spreading pain (ipsilateral and contralateral) are all indicative of complex secondary changes in the CNS that can follow a relatively trivial peripheral nerve injury.
- Chronic opioid therapy results in a hyperalgesic state (an increased sensitivity to pain) in both experimental and clinical pain scenarios, implying changes in central processing.
- Opioids fail to produce pain relief in all individuals, even at high doses. This implies the development of “analgesic resistance,” a consequence of complex changes in neural systems in chronic pain that complicates the utility of opioids for long-term therapy. (Borsook et al., 2007)
Chronic pain can be difficult to distinguish from acute pain and, not surprisingly, clinicians have less success treating chronic pain than treating acute pain. Chronic pain does not resolve quickly and opioids or sedatives are often needed for treatment, which complicates the clinician-patient relationship. Because medical practitioners often approach chronic pain management from a medication perspective, other modalities are sometimes overlooked.
Chronic pain can affect every aspect of life. It is associated with reduced activity, impaired sleep, depression, and feelings of helplessness and hopelessness, and about one-fourth of people with chronic pain will experience physical, emotional, and social deterioration over time.
Acute pain can progress to chronic pain. Whether this occurs can depend on a number of factors, including the availability of treatment during the acute phase. Factors from birth, childhood, adolescence, and adulthood can also affect whether pain becomes chronic.
Lifespan Factors Affecting the Development of Chronic Pain
Musculoskeletal pain, especially joint and back pain, is the most common type of chronic pain (IOM, 2011). Although musculoskeletal pain may not correspond exactly to the area of injury, it is nevertheless commonly classified according to pain location. However, most people with chronic pain have pain at multiple sites (Lillie et al., 2013).
Describing Chronic Pain According to Pathophysiology
When chronic pain is classified according to pathophysiology, three types have been described by the International Association for the Study of Pain (IASP):
- Nociceptive pain: caused by stimulation of pain receptors
- Neuropathic pain: caused by damage to the peripheral or central nervous system
- Psychogenic pain: caused or exacerbated by psychiatric disorders
Nociceptive pain is caused by activation or sensitization of peripheral nociceptors in the skin, cornea, mucosa, muscles, joints, bladder, gut, digestive tract, and a variety of internal organs. Nociceptors differ from mechanoreceptors, which sense touch and pressure, in that they are responsible for signaling potential damage to the body. Nociceptors have a high threshold for activation and increase their output as the stimulus increases.
Nociceptors respond to physical stimulation or chemical stimulation. In the physical response, the free nerve endings of the nociceptor (cell A in the diagram below) will become deformed by a sufficiently strong and deep stimulus, and in response send a pain signal. An injury triggers a complex set of chemical reactions as damaged cells release certain chemicals while immune cells reacting to the damage release additional chemicals. The nociceptor is exposed to this chemical “soup,” and continues to send pain signals until the levels of these pain-generating chemicals are lowered over time as the wound heals. A very similar mechanism is at work with itching sensations (Wikiversity, 2015).
Schematic of several representative sensory pathways leading from the skin to brain. Source: Shigeru23, Wikimedia Commons.
Neuropathic pain is “pain arising as a direct consequence of a lesion or disease affecting the somatosensory system” (IASP, 2012). It is usually described as a poorly localized, electric shock-like, lancinating, shooting sensation originating from injury to a peripheral nerve, the spinal cord, or the brain. It can cause a sensation of burning, pins and needles, electricity, and numbness. Neuropathic pain can be associated with diabetic neuropathy, radiculopathy, post herpetic neuralgia, phantom limb pain, tumor-related nerve compression, neuroma, or spinal nerve compression.
Neuropathic pain is classified as central or peripheral. Central pain originates from damage to the brain or spinal cord. Peripheral pain originates from damage to the peripheral nerves or nerve plexuses, dorsal root ganglion, or nerve roots (IASP, 2012).
Neuropathic pain tends to be long-lasting and difficult to treat. Opioids can be effective, although non-opioid medications such as tricyclic antidepressants, SNRI antidepressants, and several anticonvulsant drugs are commonly used as first-line therapies. Use of SNRIs is common and frequently results in improvement, but treatment of neuropathic pain is considered an off-label use for these agents. Simple analgesics have not been shown to be effective for this type of pain (IASP, 2012).
Psychogenic pain is defined as pain that persists despite the lack of any identified underlying physical cause. Although still commonly used, the term psychogenic pain is no longer considered an official diagnostic term. A more correct diagnostic term is persistent somatoform pain disorder (PSPD). PSPD is defined in the ICD-10 Version 2016 as
the predominant complaint is of persistent, severe, and distressing pain, which cannot be explained fully by a physiological process or a physical disorder, and which occurs in association with emotional conflict or psychosocial problems that are sufficient to allow the conclusion that they are the main causative influences. The result is usually a marked increase in support and attention, either personal or medical (ICD-10, Version 2016).
Persistent somatoform pain disorder patients suffer from persistent, severe, and distressing pain without sufficient explanatory pathology. It is believed the pain originates from emotional conflicts or psychosocial problems. This type of pain is usually nonresponsive to a variety of therapies because of its unclear pathology. The absence of effective treatment can result in excessive consumption of medical resources, in addition to social problems. Persistent somatoform pain disorder seriously impacts the quality of life of patients and brings a great burden to society (Huang et al, 2016).
The identification of PSPD as having no physiologic cause has done a great deal of damage to individuals with chronic pain. Many healthcare professionals fail to recognize the complexity of pain and believe it can be explained based on the presence or absence of physical findings, secondary gain, or prior emotional problems. As a result, countless individuals have been informed that “the pain is all in your head.” And if these same individuals react with anger and hurt, clinicians sometimes compound the problem by labeling the individual as hostile, demanding, or aggressive (VHA, 2015).
The correspondence between physical findings and pain complaints is fairly low (generally, 40%–60%). Individuals may have abnormal tests (eg, MRI shows a bulging disk) with no pain, or substantial pain with negative results. This is because chronic pain can develop in the absence of the gross skeletal changes we are able to detect with current technology (VHA, 2015).
Muscle strain and inflammation are common causes of chronic pain, yet may be extremely difficult to detect. Other conditions may be due to systemic problems, trauma to nerves, circulatory difficulties, or CNS dysfunction. Yet in each of these cases we may be unable to “see” the cause of the problem. Instead, we have to rely on the person’s report of their pain, coupled with behavioral observations and indirect medical data. This does not mean that the pain is psychogenic or has no underlying physical cause. Instead it means that we are unable to detect or understand its cause (VHA, 2015).
Actually, how often healthy individuals feign pain for secondary gain is unknowable. In addition, the presence of secondary gain does not at all indicate that an individual’s pain is less “real.” In this country most individuals with chronic pain receive at least some type of benefit (not necessarily monetary) for pain complaints. Therefore, exaggeration of pain or related problems is to be expected. Unfortunately, practitioners may use the presence of secondary gain or pain amplification as an indication that the person’s pain is not real (VHA, 2015).
Chronic Pain Syndromes
In deciding how to treat chronic pain, it is important to distinguish between chronic pain and a chronic pain syndrome. A chronic pain syndrome differs from chronic pain in that people with a syndrome over time develop a number of related life problems beyond the sensation of pain itself. It is important to distinguish between the two because they respond to different types of treatment (VHA, 2015).
Most individuals with chronic pain do not develop the more complicated and distressful chronic pain syndrome. Although they may experience the pain for the remainder of their lives, little change in their daily regimen of activities, family relationships, work, or other life components occurs. Many of these individuals never seek treatment for pain and those who do often require less intensive, single-modality interventions (VHA, 2015).
Symptoms of Chronic Pain Syndrome
Those who develop chronic pain syndrome tend to experience increasing physical, emotional, and social deterioration over time. They may abuse pain medications, and typically require more intensive, multimodal treatment to stop the cycle of increasing dysfunction (VHA, 2015).
Symptoms of chronic pain syndrome include, among others, loss of employment (Source: VHA, 2015).
Complex Regional Pain Syndrome
Complex regional pain syndrome (CRPS) is a general term for a severe chronic neuropathic pain condition that, in the past, was referred to by several other names. Causalgia, from the Greek meaning heat and pain, was the founding term for the syndrome. Causalgia was first used to represent the burning nature of the pain, as seen within American Civil War casualties suffering traumatic bullet wounds (Dutton & Littlejohn, 2015). In the 1940s the term reflex sympathetic dystrophy was introduced when it was thought the sympathetic nervous system played a role in the disease.
In 1994 a working group for the IASP held a conference to develop a more neutral term, to address the widespread inconsistency in terminology, and to avoid unsubstantiated theory on causation and etiology. From this meeting came the officially endorsed term complex regional pain syndrome, intended to be descriptive, general, and not imply etiology. The term was further divided into “CRPS 1” and “CRPS 2.” The current terminology represents a compromise and remains a work in progress. It will likely undergo modifications in the future as specific mechanisms of causation are better defined (Dutton & Littlejohn, 2015).
CRPS I is characterized by intractable pain that is out of proportion to the trauma, while CRPS II is characterized by unrelenting pain that occurs subsequent to a nerve injury. The criteria for diagnosing CRPS is difficult because of the vast spectrum of disease presentations. They can include:
- Intractable pain out of proportion to an injury
- Intense burning pain
- Pain from non-injurious stimulation
- An exaggerated feeling of pain
- Temperature changes in the affected body part
- Motor/trophic disturbances, or
- Changes in skin, hair, and nails; and abnormal skin color.
The pain in CRPS is regional, not in a specific nerve territory or dermatome, and it usually affects the hands or feet, with pain that is disproportionate in severity to any known trauma or underlying injury. It involves a variety of sensory and motor symptoms including swelling and edema, discoloration, joint stiffness, weakness, tremor, dystonia, sensory disturbances, abnormal patterns of sweating, and changes to the skin (O’Connell et al., 2013).
The acute phase of the condition is often characterized by edema and warmth and is thought to be supported by neurogenic inflammation. Alterations in CNS structure and function may be more important to the sustained pain and neurocognitive features of the chronic phase of the CRPS (Gallagher et al., 2013).
CRPS usually develops following trauma and is thought to involve both central and peripheral components. Continuous pain is the most devastating symptom of CRPS and has been reported to spread and worsen over time. Pain is usually disproportionate to the severity and duration of the inciting event (Alexander et al., 2013).
CRPS has high impact in terms of individual, healthcare, and economic burden, yet continues to lack a clear biologic explanation and predictable, effective treatment. Despite its sizable disease burden, its long history of identification, and a concentrated research effort, many significant challenges remain. These relate to issues of terminology, diagnostic criteria, predisposing factors, triggers, pathophysiology, and the ideal treatment path. There is also the challenge of the well-guarded notion that CRPS is not a true disease state, but rather a condition with psychological foundations (Dutton & Littlejohn, 2015).
While acute CRPS sometimes improves with early and aggressive physical therapy, CRPS present for a period of one year or more seldom spontaneously resolves. The syndrome encompasses a disparate collection of signs and symptoms involving the sensory, motor, and autonomic nervous systems, cognitive deficits, bone demineralization, skin growth changes, and vascular dysfunction (Gallagher et al., 2013).
Although numerous drugs and interventions have been tried in attempts to treat CRPS, relieve pain, and restore function, a cure remains elusive. Two analyses have attempted to develop evidence-based guidelines for the treatment of CRPS. One covers trials in the period between 1980 and June 2005; the second covers the period from June 2000 to February 2012. The earlier review identified the following treatments that had varying degrees of positive therapeutic effect:
- Sub-anesthetic ketamine intravenous infusion
- Dimethyl sulphoxide cream
- Oral corticosteroids
- Bisphosphonates (eg, alendronate)
- Spinal cord stimulation (in selected patients)
- Various physical therapy regimens (Inchiosa, 2013)
Positive findings to varying degrees in the more recent analysis were as follows:
- Low-dose ketamine infusions
- Oral tadalafil
- Intravenous regional block with a mixture of parecoxib, lidocaine, and clonidine
- Intravenous immunoglobulin
- Memantine 40 mg per day (with morphine)
- Physical therapy
Spinal cord stimulation and transcranial magnetic stimulation improved symptoms, but only transiently (Inchiosa, 2013).
In the past it was common to explain the etiology of complex regional pain syndrome using the psychogenic model. Now however, neurocognitive deficits, neuroanatomic abnormalities, and distortions in cognitive mapping are known to be features of CRPS pathology. More important, many people who have developed CRPS have no history of mental illness. With increased education about CRPS through a biopsychosocial perspective, both physicians and mental health practitioners can better diagnose, treat, and manage CRPS symptomatology (Hill et al., 2012). |
CT-Scan (Computed Tomography Scan) Overview
How CT scans Work
The fundamentals of CT are similar to those of X-rays: X-ray radiation passes through the body, producing an image on the opposite side. However, a CT scan uses a series of radiographic images taken around a single axis of rotation. A computer then reconstructs a cross section of the organ, bone, or tissue being studied.
Medical Uses of CT Scans
CT scans are performed to get a more complete picture of a particular organ or region. Commonly they are used to examine the chest, abdomen, pelvis, and brain. CT heart scans are also common. They are effective at removing the layering that can obscure X-ray images, but also can be used rapidly with little preparation. CT scans are useful because they are a relatively quick and painless procedure. They are often ideal for quickly diagnosing trauma for emergency surgery.
Potential Hazards of Using CT Scans
There is a very low risk of cancer from the ionizing radiation used in CT scans. CT scans use more radiation than basic X-rays, though still very low levels. The increased risk of cancer is more than offset by the medical benefits such as early diagnosis and less invasive surgery. Of course, patients who receive a large number of scans would have a higher risk, and doctors should minimize the use of this technique if not medically required. As with X-ray tests, CT scans are more dangerous to very young children and in pre-natal situations.
Crown Valley Imaging located in Orange County has two imaging centers in Mission Viejo and Newport Beach where you can have CT exams performed by qualified technologists at the Mission Viejo office. Images are read and interpreted by Board Certified Neuroradiologists and Board Certified Musculoskeletal Radiologists and reports are generated within 24-48 business hours. |
What is Raynaud's phenomenon?
Raynaud's phenomenon or, simply, Raynaud's, is a disorder characterized by decreased blood flow to the fingers, and less frequently to the ears, toes, nipples, knees, or nose. Vascular spasms usually occur as attacks in response to cold exposure, stress, or emotional upset.
Raynaud's can occur alone (primary form) or may occur with other diseases (secondary form). The diseases most frequently associated with Raynaud's are autoimmune or connective tissue diseases, among others, such as:
Systemic lupus erythematosus (lupus)
CREST syndrome (a form of scleroderma involving calcium skin deposits, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasias)
Occlusive vascular disease, such as atherosclerosis
Blood disorders, such as Cryoglobulinemia
What causes Raynaud's phenomenon?
The exact cause of Raynaud's is unknown. One theory links it to blood disorders characterized by increased platelets or red blood cells, which may increase blood thickness. Another theory involves special receptors that control the constriction of the blood vessels, which appear to be more sensitive in individuals with Raynaud's.
What are the risk factors for Raynaud's phenomenon?
There are certain diseases or lifestyle choices that can increase a person's risk for developing Raynaud's. These risk factors include:
Existing connective tissue or autoimmune disease
Repetitive actions, such as typing or use of tools that vibrate (like a jack hammer)
Injury or trauma
Side effects from certain medications
What are the symptoms of Raynaud's phenomenon?
The following are the most common symptoms of Raynaud's phenomenon. However, each individual may experience symptoms differently. Symptoms may include:
A pattern of color changes in the fingers as follows: pale or white followed by blue then red when the hands are warmed; color changes are usually preceded by exposure to cold or emotional upset
Hands may become swollen and painful when warmed
Ulcerations of the finger pads develop (in severe cases)
Gangrene may develop in the fingers and, in rare cases, lead to infection or amputation.
How is Raynaud's phenomenon diagnosed?
There are no specific laboratory tests that can confirm a diagnosis of Raynaud's phenomenon. Instead, diagnosis is usually based on reported symptoms. Your doctor may perform a cold challenge test to bring out color changes in the hands or a nailfold capillaroscopy where your fingernail is examined under a microscope.
Tests to determine which form—primary or secondary—of Raynaud's phenomenon a patient may have include a medical exam, blood tests, and a complete medical history.
What is the treatment for Raynaud's phenomenon?
Specific treatment for Raynaud's phenomenon will be determined by your doctor based on:
Your age, overall health, and medical history
Extent of the disease
Your tolerance for specific medications, procedures, and therapies
Expectation for the course of the disease
Your opinion or preference
Although there is no cure for Raynaud's phenomenon, the disorder can often be successfully managed with proper treatment. Treatment may include:
Preventive measures, such as avoiding cold exposure and wearing extra layers to keep warm, including warm gloves, socks, scarf, and a hat
Wearing finger guards over ulcerated fingers
Avoiding trauma or vibration to the hand (such as vibrating tools)
Medications that are usually used to treat high blood pressure (antihypertensive medications) may be given during the winter months (to help reduce constriction of the blood vessels)
Individuals who first experience Raynaud's phenomenon after ages 35 to 40 may be tested for an underlying disease. The primary form of Raynaud's is the most common type, and usually begins between ages 15 and 25. It's less severe, and few people with this form develop another related condition. |
Prime factorization of any given number is to break the number down into its factors until all of its factors are prime numbers. This can be achieved by dividing the given number by the smallest prime numbers and continuing until all of its factors are prime.
Example: Prime factorization of number 1729
Check whether the number is divisible by the smallest prime numbers in turn. 1729 isn't divisible by 2, so move to the next smallest prime, 3; the remainder is still non-zero, so try the next primes, 5, 7, and so on. Dividing by 7 gives a zero remainder.
1729 = 7*247. Next, check whether 247 is prime; if it is not, continue the same prime factorization steps for 247.
1729 = 7*247 = 7*13*19. Notice that all the factors of 1729 are now prime and no further factors are possible except 1, so the factorization stops here.
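The same trial-division procedure can be written as a short routine. A minimal Python sketch (adequate for small numbers such as 1729):

```python
def prime_factors(n):
    """Return the prime factorization of n as a list, using trial division."""
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:   # divide out each prime factor completely
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:                     # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(1729))        # [7, 13, 19]
```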
Black children are falling behind in Math between the grades of 3rd to 5th
What is happening to our children?
They lose confidence in themselves
They start to believe they aren’t good at Math.
This leads to less meaningful Math later in school.
As a result, they miss out on Advanced Math classes and opportunities.
And it doesn’t end there, in college, they aren’t eligible for certain majors.
This means they miss out on important professional and career opportunities.
THIS ALL FEEDS THE WEALTH GAP
Because we are underrepresented in high-paying jobs that depend on strong Mathematical skills, the gap between our communities and others increases...
How to understand this chart?
- The dotted line represents the Black population by percentage in the United States.
- The colored lines represent the percentage of degrees Black people earn in STEM fields.
- Notice we are underrepresented in every field.
- This feeds the wealth gap.
Black Math Genius Assessment
Based on a cohort of students from CA who participated in the Black Math Genius program, students experienced an 84% increase in their understanding of coding, a 22% increase in their enjoyment of math, and a 20% increase in their confidence in their abilities to complete the activities in the program.
Black Math Genius?
With the Black Math Genius Course, we teach your child and family Advanced Mathematical concepts, the true origins of Mathematics, and programming for the future. Did you know that the concept behind telling time – Modular Arithmetic – is the basis for cryptography? Cryptography is a high-level of Mathematics that can be taught to young students if they have a solid foundation. We’re still graduating high school students that don’t know their multiplication tables or how to convert between decimals and fractions. That’s criminal!
Once your child completes this course they will have covered the following:
- Exponents, Roots, Logarithms
- Egyptian Mathematics
- Black Contributions to Mathematics
- Philosophies and Principles from KMT
- Python Programming
- Counting in Five Languages
- Algebraic Properties of Addition and Multiplication
- Area and Perimeter
- Solving Equations
- Physics Concepts
- Modular Arithmetic
- Introduction to Calculus
This curriculum is designed for students between the ages of 5 and 12 years old (but we believe even 3-year-olds would benefit with your guidance). Our children don’t have a learning gap. The adults have a belief gap – they don’t believe in our children the way that they believe in White children or Asian children doing Mathematics. It’s EXPECTED that Asian children will be “good” at Mathematics. Black people gave the world Mathematics – our children stand on the shoulders of giants. If they are given the same exposure, they will be great at STEM fields as well, specifically Mathematics. The Black Math Genius Course does that.
Does Black Math Genius have a Tutoring Service?
Sometimes our children are struggling in specific subjects or concepts. We have a solution for those who need more specific help as well.
We have introduced our Black Math Genius Plus Subscription service.
Black Math Genius PLUS?
Black Math Genius PLUS is a LIVE weekly tutoring program aimed to meet your children where they are in Math. Each week, your children will have online access to multiple live group Math tutoring sessions led by our gifted instructors. There will be two sessions per week, per group. As of today, we have 4 different groups; 3rd-5th Grades, 6th-8th Grades, Algebra 1, and High School Math group.
Our tutoring services are unique in that we:
- Focus on students developing a deep conceptual understanding of the content
- Are extremely patient with students as they learn and think through math
- Aim to build students’ mathematical confidence
Official Fall Schedule
2023-2024 SCHOOL YEAR
This Schedule is Presented in EST
Session Duration: 1 hour
What people are saying!
They were great!
My son is 13 and we are in the process of deschooling. I asked him to try the sessions one time and he asked to attend each week. He said that if he had a teacher like Ms. Assata, he would have liked school more.
It was phenomenally enriching.
My daughter felt encouraged and this has been the highlight of her Sankofa membership. She even logged back in last week hoping they would start another session.
Ms. Moore was EXTREMELY patient, respectful, and knowledgeable.
We used the time as family time and enjoyed it. It was nice to see other children from different places and see them “get it” and understand what she was teaching. It was nice that Assata was well versed and able to handle a wide range of topics, including sparking interest in NFTs.
Marsha Barrow Smith
Exceptional. Patient and helpful. Showed love and respect for each child.
My youngest son with dyslexia says she really knows her stuff and helped him with the problem and broke it down, and made it simple without struggle. He says her behavior was good. Didn’t yell at kids if they got it wrong. But she kept the same nice personality and even when the time was up, she kept her heart open to continue to help us. Thank you! |
The thought of solving theorems or postulates leaves some students quivering in their boots. . . but not anymore! This must-have guide takes the pain out of learning geometry once and for all. The author demonstrates how solving geometric problems amounts to fitting parts together to solve interesting puzzles. Students discover relationships that exist between parallel and perpendicular lines; analyze the characteristics of distinct shapes such as circles, quadrilaterals, and triangles; and learn how geometric principles can solve real-world problems.
Like all titles in Barron's Painless Series, this book presents informal, student-friendly approaches to learning geometry, emphasizing interesting details, outlining potential pitfalls step by step, offering "Brain Tickler" quizzes, and more. |
Many amateurs use directional antennas because they are said to have “gain.” When this term is used, it means that a directional antenna will output more power in a particular direction than an antenna that is not directional. This only makes sense; you can’t get more power out of an antenna than you put in. Assuming each is driven by the same amount of power, the total amount of radiation emitted by a directional gain antenna compared with the total amount of radiation emitted from an isotropic antenna is the same. (E9B07)
To evaluate the performance of directional antennas, manufacturers will measure the field strength at various points in a circle around the antenna and plot those field strengths, creating a chart called the antenna radiation pattern. Figure E9-1 is a typical antenna radiation pattern.
The antenna radiation pattern shows the relative strength of the signal generated by an antenna in its “far field.” The far-field of an antenna is the region where the shape of the antenna pattern is independent of distance. (E9B12)
From the antenna radiation pattern, we can tell a bunch of things about the antenna. One of them is beamwidth. Beamwidth is a measure of the width of the main lobe of the radiation pattern. To determine the approximate beamwidth in a given plane of a directional antenna, note the two points where the signal strength of the antenna is 3 dB less than maximum and compute the angular difference. (E9B08) In the antenna radiation pattern shown in Figure E9-1, 50 degrees is the 3-dB beamwidth. (E9B01)
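To make the 3-dB beamwidth procedure concrete, here is a minimal Python sketch. The sampled pattern below is invented for illustration (it is not the Figure E9-1 data); the method is the one described above: find the two angles where the gain drops 3 dB below the maximum and take the angular difference.

```python
import numpy as np

# Hypothetical sampled pattern: relative gain (dB) versus azimuth (degrees).
# 0 dB marks the peak of the main lobe; real values would come from measurements.
angles_deg = np.arange(-90, 91, 5)
gain_db = -3.0 * (angles_deg / 25.0) ** 2     # simple parabolic main lobe for illustration

half_power_level = gain_db.max() - 3.0        # the -3 dB level
inside = angles_deg[gain_db >= half_power_level]
beamwidth = inside.max() - inside.min()
print(f"approximate 3-dB beamwidth: {beamwidth} degrees")   # 50 degrees for this made-up lobe
```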
Another parameter that’s important for a directional antenna is the front-to-back ratio. In a sense, this is a measure of how directional an antenna really is. The higher this ratio, the more directional the antenna. In the antenna radiation pattern shown in Figure E9-1, 18 dB is the front-to-back ratio. (E9B02)
A similar parameter is the front-to-side ratio. In the antenna radiation pattern shown in Figure E9-1, the front-to-side ratio is 14 dB. (E9B03)
When reviewing an antenna radiation pattern, you need to remember that the field strength measurements were taken at a particular frequency. When a directional antenna is operated at different frequencies within the band for which it was designed, the gain may change depending on frequency. (E9B04)
Many different design factors affect these antenna parameters. For example, if the boom of a Yagi antenna is lengthened and the elements are properly retuned, what usually occurs is that the gain increases. (E9B06) Gain isn’t everything, however. What usually occurs if a Yagi antenna is designed solely for maximum forward gain is that the front-to-back ratio decreases. (E9B05)
To help design antennas, many amateurs use antenna modeling programs. All of these choices are correct when talking about the information obtained by submitting the details of a proposed new antenna to a modeling program (E9B14):
- SWR vs. frequency charts
- Polar plots of the far-field elevation and azimuth patterns
- Antenna gain
The type of computer program technique commonly used for modeling antennas is method of moments. (E9B09) The principle behind a method of moments analysis is that a wire is modeled as a series of segments, each having a uniform value of current. (E9B10)
The more segments your simulation uses, the more accurate the results. The problem with using too many segments, though, is that the program will take a very long time to run. You don’t want to use too few segments, though. A disadvantage of decreasing the number of wire segments in an antenna model below the guideline of 10 segments per half-wavelength is that the computed feed point impedance may be incorrect. (E9B11)
The abbreviation NEC stands for Numerical Electromagnetics Code when applied to antenna modeling programs. (E9B13) This is different from the more common definition of NEC, which is the National Electrical Code. |
Upon its dedication in 1885, the Washington Monument was the tallest structure in the world. Begun in 1848 to honor George Washington, the structure wasn't completed for over 36 years. Construction and financing problems slowed progress and the Civil War halted it completely.
In 1876, after construction had resumed, chief engineer Lt. Colonel Thomas Lincoln Casey determined that the base was too shallow and narrow to support the weight of the planned monument. (At that point, the in-progress monument already stood over 176 feet.) Under Casey's direction, the earth below the foundation was replaced with concrete, as was the original rubblestone base.
The monument is topped by a 55-foot pyramidion that begins at the 500-foot level. It weighs 336 tons and is covered with 262 marble slabs, each seven inches thick. Special machinery, platforms, and derricks were designed to build and assemble the massive top, which took six months of labor to complete.
- The monument is 555 feet tall and is modeled after an Egyptian obelisk. An obelisk is a tapering, four-sided stone structure where the height is 10 times the width of the base.
- Beginning at the 452-foot level, walls are entirely marble.
- The slim silhouette is just over 55 feet square at the base and narrows to just under 40 feet at the top of the shaft, 500 feet above the ground.
- The weight of the completed obelisk was so well distributed that it can withstand winds up to 145 miles per hour. A 30-mph wind causes a sway of just 0.125 inch at the peak.
- Thomas B. Allen, The Washington Monument: It Stands for All , New York: Discovery Books, 2000.
- U.S. Senate proceedings on the McMillan Plan for the monument grounds, 1902 |
- Assess the relative strengths of acids and bases according to their ionization constants
- Rationalize trends in acid–base strength in relation to molecular structure
- Carry out equilibrium calculations for weak acid–base systems
Acid and Base Ionization Constants
The relative strength of an acid or base is the extent to which it ionizes when dissolved in water. If the ionization reaction is essentially complete, the acid or base is termed strong; if relatively little ionization occurs, the acid or base is weak. As will be evident throughout the remainder of this chapter, there are many more weak acids and bases than strong ones. The most common strong acids and bases are listed in Figure 14.6.
The relative strengths of acids may be quantified by measuring their equilibrium constants in aqueous solutions. In solutions of the same concentration, stronger acids ionize to a greater extent, and so yield higher concentrations of hydronium ions than do weaker acids. The equilibrium constant for an acid is called the acid-ionization constant, Ka. For the reaction of an acid HA:
the acid ionization constant is written
where the concentrations are those at equilibrium. Although water is a reactant in the reaction, it is the solvent as well, so we do not include [H2O] in the equation. The larger the Ka of an acid, the larger the concentration of H3O+ and A− relative to the concentration of the nonionized acid, HA, in an equilibrium mixture, and the stronger the acid. An acid is classified as “strong” when it undergoes complete ionization, in which case the concentration of HA is zero and the acid ionization constant is immeasurably large (Ka ≈ ∞). Acids that are partially ionized are called “weak,” and their acid ionization constants may be experimentally measured. A table of ionization constants for weak acids is provided in Appendix H.
To illustrate this idea, three acid ionization equations and Ka values are shown below. The ionization constants increase from first to last of the listed equations, indicating the relative acid strength increases in the order CH3CO2H < HNO2 < HSO4−.
Another measure of the strength of an acid is its percent ionization. The percent ionization of a weak acid is defined in terms of the composition of an equilibrium mixture:
where the numerator is equivalent to the concentration of the acid's conjugate base (per stoichiometry, [A−] = [H3O+]). Unlike the Ka value, the percent ionization of a weak acid varies with the initial concentration of acid, typically decreasing as concentration increases. Equilibrium calculations of the sort described later in this chapter can be used to confirm this behavior.
Calculation of Percent Ionization from pH Calculate the percent ionization of a 0.125-M solution of nitrous acid (a weak acid), with a pH of 2.09.
Solution The percent ionization for an acid is:
Converting the provided pH to hydronium ion molarity yields
Substituting this value and the provided initial acid concentration into the percent ionization equation gives
(Recall the provided pH value of 2.09 is logarithmic, and so it contains just two significant digits, limiting the certainty of the computed percent ionization.)
Check Your Learning Calculate the percent ionization of a 0.10-M solution of acetic acid with a pH of 2.89.
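For readers who like to check this sort of arithmetic numerically, here is a minimal Python sketch of the percent-ionization calculation; the function name is ours, and the inputs are the values given in the example and the exercise above.

```python
def percent_ionization(ph, initial_conc):
    """Percent ionization of a weak acid from the solution pH and the formal (initial) concentration."""
    hydronium = 10 ** (-ph)                 # [H3O+] at equilibrium, equal to [A-] per stoichiometry
    return 100 * hydronium / initial_conc

print(round(percent_ionization(2.09, 0.125), 1))   # nitrous acid example: ~6.5 %
print(round(percent_ionization(2.89, 0.10), 1))    # acetic acid exercise: ~1.3 %
```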
Just as for acids, the relative strength of a base is reflected in the magnitude of its base-ionization constant (Kb) in aqueous solutions. In solutions of the same concentration, stronger bases ionize to a greater extent, and so yield higher hydroxide ion concentrations than do weaker bases. A stronger base has a larger ionization constant than does a weaker base. For the reaction of a base, B:
the ionization constant is written as
Inspection of the data for three weak bases presented below shows the base strength increases in the order
A table of ionization constants for weak bases appears in Appendix I. As for acids, the relative strength of a base is also reflected in its percent ionization, computed as
but will vary depending on the base ionization constant and the initial concentration of the solution.
Relative Strengths of Conjugate Acid-Base Pairs
Brønsted-Lowry acid-base chemistry is the transfer of protons; thus, logic suggests a relation between the relative strengths of conjugate acid-base pairs. The strength of an acid or base is quantified in its ionization constant, Ka or Kb, which represents the extent of the acid or base ionization reaction. For the conjugate acid-base pair HA / A−, ionization equilibrium equations and ionization constant expressions are
Adding these two chemical equations yields the equation for the autoionization for water:
As discussed in another chapter on equilibrium, the equilibrium constant for a summed reaction is equal to the mathematical product of the equilibrium constants for the added reactions, and so
This equation states the relation between ionization constants for any conjugate acid-base pair, namely, their mathematical product is equal to the ion product of water, Kw. By rearranging this equation, a reciprocal relation between the strengths of a conjugate acid-base pair becomes evident:
The inverse proportional relation between Ka and Kb means the stronger the acid or base, the weaker its conjugate partner. Figure 14.7 illustrates this relation for several conjugate acid-base pairs.
The listing of conjugate acid–base pairs shown in Figure 14.8 is arranged to show the relative strength of each species as compared with water, whose entries are highlighted in each of the table’s columns. In the acid column, those species listed below water are weaker acids than water. These species do not undergo acid ionization in water; they are not Brønsted-Lowry acids. All the species listed above water are stronger acids, transferring protons to water to some extent when dissolved in an aqueous solution to generate hydronium ions. Species above water but below hydronium ion are weak acids, undergoing partial acid ionization, whereas those above hydronium ion are strong acids that are completely ionized in aqueous solution.
If all these strong acids are completely ionized in water, why does the column indicate they vary in strength, with nitric acid being the weakest and perchloric acid the strongest? Notice that the sole acid species present in an aqueous solution of any strong acid is H3O+(aq), meaning that hydronium ion is the strongest acid that may exist in water; any stronger acid will react completely with water to generate hydronium ions. This limit on the acid strength of solutes in a solution is called a leveling effect. To measure the differences in acid strength for “strong” acids, the acids must be dissolved in a solvent that is less basic than water. In such solvents, the acids will be “weak,” and so any differences in the extent of their ionization can be determined. For example, the binary hydrogen halides HCl, HBr, and HI are strong acids in water but weak acids in ethanol (strength increasing HCl < HBr < HI).
The right column of Figure 14.8 lists a number of substances in order of increasing base strength from top to bottom. Following the same logic as for the left column, species listed above water are weaker bases and so they don’t undergo base ionization when dissolved in water. Species listed between water and its conjugate base, hydroxide ion, are weak bases that partially ionize. Species listed below hydroxide ion are strong bases that completely ionize in water to yield hydroxide ions (i.e., they are leveled to hydroxide). A comparison of the acid and base columns in this table supports the reciprocal relation between the strengths of conjugate acid-base pairs. For example, the conjugate bases of the strong acids (top of table) are all of negligible strength. A strong acid exhibits an immeasurably large Ka, and so its conjugate base will exhibit a Kb that is essentially zero:
A similar approach can be used to support the observation that conjugate acids of strong bases (Kb ≈ ∞) are of negligible strength (Ka ≈ 0).
Calculating Ionization Constants for Conjugate Acid-Base Pairs Use the Kb for the nitrite ion, NO2−, to calculate the Ka for its conjugate acid.
Solution Kb for NO2− is given in this section as 2.17 × 10−11. The conjugate acid of NO2− is HNO2; Ka for HNO2 can be calculated using the relationship:
Solving for Ka yields
This answer can be verified by finding the Ka for HNO2 in Appendix H.
Check Your Learning Determine the relative acid strengths of NH4+ and HCN by comparing their ionization constants. The ionization constant of HCN is given in Appendix H as 4.9 × 10−10. The ionization constant of NH4+ is not listed, but the ionization constant of its conjugate base, NH3, is listed as 1.8 × 10−5.
NH4+ is the slightly stronger acid (Ka for NH4+ = 5.6 × 10−10).
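The Ka·Kb = Kw relation itself is easy to verify numerically. A minimal Python sketch, assuming Kw = 1.0 × 10−14 (25 °C):

```python
KW = 1.0e-14   # ion product of water at 25 degrees C

def ka_from_kb(kb):
    """Ka of the conjugate acid, given Kb of the base (Ka * Kb = Kw)."""
    return KW / kb

print(ka_from_kb(2.17e-11))   # nitrite ion -> Ka of HNO2, ~4.6e-4
print(ka_from_kb(1.8e-5))     # ammonia    -> Ka of the ammonium ion, ~5.6e-10
```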
Acid-Base Equilibrium Calculations
The chapter on chemical equilibria introduced several types of equilibrium calculations and the various mathematical strategies that are helpful in performing them. These strategies are generally useful for equilibrium systems regardless of chemical reaction class, and so they may be effectively applied to acid-base equilibrium problems. This section presents several example exercises involving equilibrium calculations for acid-base systems.
Determination of Ka from Equilibrium Concentrations Acetic acid is the principal ingredient in vinegar (Figure 14.9) that provides its sour taste. At equilibrium, a solution contains [CH3CO2H] = 0.0787 M and What is the value of Ka for acetic acid?
Solution The relevant equilibrium equation and its equilibrium constant expression are shown below. Substitution of the provided equilibrium concentrations permits a straightforward calculation of the Ka for acetic acid.
Check Your Learning The HSO4− ion, a weak acid used in some household cleansers:
What is the acid ionization constant for this weak acid if an equilibrium mixture has the following composition: = 0.027 M; and
Ka for HSO4− = 1.2 × 10−2
Determination of Kb from Equilibrium Concentrations Caffeine, C8H10N4O2, is a weak base. What is the value of Kb for caffeine if a solution at equilibrium has [C8H10N4O2] = 0.050 M, [C8H10N4O2H+] = 5.0 × 10−3 M, and [OH−] = 2.5 × 10−3 M?
Solution The relevant equilibrium equation and its equilibrium constant expression are shown below. Substitution of the provided equilibrium concentrations permits a straightforward calculation of the Kb for caffeine.
Check Your Learning What is the equilibrium constant for the ionization of the ion, a weak base
if the composition of an equilibrium mixture is as follows: [OH−] = 1.3 10−6 M; and
Determination of Ka or Kb from pH The pH of a 0.0516-M solution of nitrous acid, HNO2, is 2.34. What is its Ka?
Solution The nitrous acid concentration provided is a formal concentration, one that does not account for any chemical equilibria that may be established in solution. Such concentrations are treated as “initial” values for equilibrium calculations using the ICE table approach. Notice the initial value of hydronium ion is listed as approximately zero because a small concentration of H3O+ is present (1 × 10−7 M) due to the autoprotolysis of water. In many cases, such as all the ones presented in this chapter, this concentration is much less than that generated by ionization of the acid (or base) in question and may be neglected.
The pH provided is a logarithmic measure of the hydronium ion concentration resulting from the acid ionization of the nitrous acid, and so it represents an “equilibrium” value for the ICE table:
The ICE table for this system is then
Finally, calculate the value of the equilibrium constant using the data in the table:
Check Your Learning The pH of a solution of household ammonia, a 0.950-M solution of NH3, is 11.612. What is Kb for NH3?
Kb = 1.8 × 10−5
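The Ka-from-pH arithmetic can also be scripted. A minimal Python sketch for the nitrous acid example above (0.0516 M, pH 2.34), treating the [H3O+] computed from the pH as the equilibrium change x:

```python
def ka_from_ph(ph, initial_conc):
    """Ka of a weak acid from the measured pH of a solution of known formal concentration."""
    x = 10 ** (-ph)                        # equilibrium [H3O+] = [A-]
    return x * x / (initial_conc - x)      # Ka = [H3O+][A-] / [HA]

print(ka_from_ph(2.34, 0.0516))            # ~4.4e-4, close to the tabulated Ka of HNO2
```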
Calculating Equilibrium Concentrations in a Weak Acid Solution Formic acid, HCO2H, is one irritant that causes the body’s reaction to some ant bites and stings (Figure 14.10).
What is the concentration of hydronium ion and the pH of a 0.534-M solution of formic acid?
The ICE table for this system is
Substituting the equilibrium concentration terms into the Ka expression gives
The relatively large initial concentration and small equilibrium constant permit the simplifying assumption that x will be much less than 0.534, and so the equation becomes
Solving the equation for x yields
To check the assumption that x is small compared to 0.534, its relative magnitude can be estimated:
Because x is less than 5% of the initial concentration, the assumption is valid.
As defined in the ICE table, x is equal to the equilibrium concentration of hydronium ion:
Finally, the pH is calculated to be
Check Your Learning Only a small fraction of a weak acid ionizes in aqueous solution. What is the percent ionization of a 0.100-M solution of acetic acid, CH3CO2H?
percent ionization = 1.3%
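The simplified route (x ≈ √(Ka·C), followed by the 5% check) is sketched below in Python. The Ka value of 1.8 × 10−4 for formic acid is an assumed, typical tabulated value, since the appendix itself is not reproduced here.

```python
import math

def weak_acid_ph(ka, conc):
    """pH of a weak acid solution using the x << C simplification, with the 5% validity check."""
    x = math.sqrt(ka * conc)               # [H3O+] assuming x is negligible compared to conc
    if x / conc > 0.05:
        raise ValueError("assumption x << C is not valid; solve the quadratic instead")
    return -math.log10(x)

# 0.534-M formic acid with an assumed Ka of ~1.8e-4
print(round(weak_acid_ph(1.8e-4, 0.534), 2))   # ~2.01
```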
Calculating Equilibrium Concentrations in a Weak Base Solution Find the concentration of hydroxide ion, the pOH, and the pH of a 0.25-M solution of trimethylamine, a weak base:
The ICE table for this system is
Substituting the equilibrium concentration terms into the Kb expression gives
Assuming x << 0.25 and solving for x yields
This value is less than 5% of the initial concentration (0.25), so the assumption is justified.
As defined in the ICE table, x is equal to the equilibrium concentration of hydroxide ion:
The pOH is calculated to be
Using the relation introduced in the previous section of this chapter:
permits the computation of pH:
Check Your Learning Calculate the hydroxide ion concentration and the percent ionization of a 0.0325-M solution of ammonia, a weak base with a Kb of 1.76 × 10−5.
7.56 × 10−4 M, 2.33%
In some cases, the strength of the weak acid or base and its formal (initial) concentration result in an appreciable ionization. Though the ICE strategy remains effective for these systems, the algebra is a bit more involved because the simplifying assumption that x is negligible cannot be made. Calculations of this sort are demonstrated in Example 14.14 below.
Calculating Equilibrium Concentrations without Simplifying Assumptions Sodium bisulfate, NaHSO4, is used in some household cleansers as a source of the HSO4− ion, a weak acid. What is the pH of a 0.50-M solution of HSO4−?
The ICE table for this system is
Substituting the equilibrium concentration terms into the Ka expression gives
If the assumption that x << 0.5 is made, simplifying and solving the above equation yields
This value of x is clearly not significantly less than 0.50 M; rather, it is approximately 15% of the initial concentration:
When we check the assumption, we calculate:
Because the simplifying assumption is not valid for this system, the equilibrium constant expression is solved as follows:
Rearranging this equation yields
Writing the equation in quadratic form gives
Solving for the two roots of this quadratic equation results in a negative value that may be discarded as physically irrelevant and a positive value equal to x. As defined in the ICE table, x is equal to the hydronium concentration.
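The quadratic route can be checked with a few lines of Python, using the values given for this example (Ka = 1.2 × 10−2, C = 0.50 M); the simplified estimate √(Ka·C) ≈ 0.077 M is the roughly 15% value mentioned above, which is why the quadratic is needed.

```python
import math

def hydronium_exact(ka, conc):
    """Solve Ka = x^2 / (conc - x) exactly, i.e. x^2 + Ka*x - Ka*conc = 0, keeping the positive root."""
    return (-ka + math.sqrt(ka * ka + 4 * ka * conc)) / 2

x = hydronium_exact(1.2e-2, 0.50)
print(x)                          # ~7.2e-2 M hydronium ion
print(round(-math.log10(x), 2))   # pH ~ 1.14
```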
Check Your Learning Calculate the pH in a 0.010-M solution of caffeine, a weak base:
Effect of Molecular Structure on Acid-Base Strength
Binary Acids and Bases
In the absence of any leveling effect, the acid strength of binary compounds of hydrogen with nonmetals (A) increases as the H-A bond strength decreases down a group in the periodic table. For group 17, the order of increasing acidity is HF < HCl < HBr < HI. Likewise, for group 16, the order of increasing acid strength is H2O < H2S < H2Se < H2Te.
Across a row in the periodic table, the acid strength of binary hydrogen compounds increases with increasing electronegativity of the nonmetal atom because the polarity of the H-A bond increases. Thus, the order of increasing acidity (for removal of one proton) across the second row is CH4 < NH3 < H2O < HF; across the third row, it is SiH4 < PH3 < H2S < HCl (see Figure 14.11).
Ternary Acids and Bases
Ternary compounds composed of hydrogen, oxygen, and some third element (“E”) may be structured as depicted in the image below. In these compounds, the central E atom is bonded to one or more O atoms, and at least one of the O atoms is also bonded to an H atom, corresponding to the general molecular formula OmE(OH)n. These compounds may be acidic, basic, or amphoteric depending on the properties of the central E atom. Examples of such compounds include sulfuric acid, O2S(OH)2, sulfurous acid, OS(OH)2, nitric acid, O2NOH, perchloric acid, O3ClOH, aluminum hydroxide, Al(OH)3, calcium hydroxide, Ca(OH)2, and potassium hydroxide, KOH:
If the central atom, E, has a low electronegativity, its attraction for electrons is low. Little tendency exists for the central atom to form a strong covalent bond with the oxygen atom, and bond a between the element and oxygen is more readily broken than bond b between oxygen and hydrogen. Hence bond a is ionic, hydroxide ions are released to the solution, and the material behaves as a base—this is the case with Ca(OH)2 and KOH. Lower electronegativity is characteristic of the more metallic elements; hence, the metallic elements form ionic hydroxides that are by definition basic compounds.
If, on the other hand, the atom E has a relatively high electronegativity, it strongly attracts the electrons it shares with the oxygen atom, making bond a relatively strongly covalent. The oxygen-hydrogen bond, bond b, is thereby weakened because electrons are displaced toward E. Bond b is polar and readily releases hydrogen ions to the solution, so the material behaves as an acid. High electronegativities are characteristic of the more nonmetallic elements. Thus, nonmetallic elements form covalent compounds containing acidic −OH groups that are called oxyacids.
Increasing the oxidation number of the central atom E also increases the acidity of an oxyacid because this increases the attraction of E for the electrons it shares with oxygen and thereby weakens the O-H bond. Sulfuric acid, H2SO4, or O2S(OH)2 (with a sulfur oxidation number of +6), is more acidic than sulfurous acid, H2SO3, or OS(OH)2 (with a sulfur oxidation number of +4). Likewise nitric acid, HNO3, or O2NOH (N oxidation number = +5), is more acidic than nitrous acid, HNO2, or ONOH (N oxidation number = +3). In each of these pairs, the oxidation number of the central atom is larger for the stronger acid (Figure 14.12).
Hydroxy compounds of elements with intermediate electronegativities and relatively high oxidation numbers (for example, elements near the diagonal line separating the metals from the nonmetals in the periodic table) are usually amphoteric. This means that the hydroxy compounds act as acids when they react with strong bases and as bases when they react with strong acids. The amphoterism of aluminum hydroxide, which commonly exists as the hydrate Al(H2O)3(OH)3, is reflected in its solubility in both strong acids and strong bases. In strong bases, the relatively insoluble hydrated aluminum hydroxide, Al(H2O)3(OH)3, is converted into the soluble ion, by reaction with hydroxide ion:
In this reaction, a proton is transferred from one of the aluminum-bound H2O molecules to a hydroxide ion in solution. The Al(H2O)3(OH)3 compound thus acts as an acid under these conditions. On the other hand, when dissolved in strong acids, it is converted to the soluble ion by reaction with hydronium ion:
In this case, protons are transferred from hydronium ions in solution to Al(H2O)3(OH)3, and the compound functions as a base. |
History of Schizophrenia
The history of schizophrenia is somewhat debatable as the term “schizophrenia” didn’t come into being until around 1908. What we do know is that forms of “madness” have been noted throughout medical history and likely some of these conditions are what we would recognize as schizophrenia today. In the early days of psychiatry, no distinctions were made between different types of madness.
The term “schizophrenia” literally means a splitting of the mind, which is unfortunate because this gives the impression that schizophrenia is a multiple personality or split personality disorder, which isn’t true. The term schizophrenia was chosen to denote the separation between personality, thinking, memory and perception.
Who Discovered Schizophrenia?
The word “schizophrenia” was coined by Eugen Bleuler, a Swiss psychiatrist but this isn’t when schizophrenia was discovered. It’s thought its predecessor, dementia praecox, was the first medical description of what we think of as modern schizophrenia.1 Bleuler documented schizophrenia’s “positive” and “negative” symptoms – terms we still use today.
Dementia praecox, a term first used in Latin, was discovered, or described, around 1891 by Arnold Pick, a professor of psychiatry at the German branch of Charles University in Prague. This discovery is often attributed to the German psychiatrist Emil Kraepelin, as he popularized the concept. Kraepelin divided dementia praecox into hebephrenia, catatonia and paranoid dementia subtypes, which are similar to the subtypes of schizophrenia classifications seen today.2
Modern History of Schizophrenia
While schizophrenia treatment once consisted of exorcisms and insulin shock treatment, the major breakthrough in the history of schizophrenia treatment came in 1952. That’s when Henri Laborit, a Parisian surgeon, discovered that chlorpromazine (Thorazine, now known as an antipsychotic) effectively treated the symptoms of schizophrenia. This discovery ushered in a time when people with schizophrenia were no longer confined to asylums (or mental hospitals) but could live in the community.
In the 1970s, as growing numbers of people with schizophrenia were being successfully treated with antipsychotic medication, groups and programs began to emerge to support them. Assertive Community Treatment (ACT) was developed to help these individuals and its programs are still in use and considered the “gold standard” for service delivery today. The National Alliance on Mental Illness (NAMI) also came into being in the 1970s to fight for the rights of those with a mental illness.3
Atypical antipsychotics, or second-generation antipsychotics, are now more commonly used to treat schizophrenia as they are thought to have a more tolerable side effect profile than first-generation antipsychotics. Psychosocial therapies are now also used to treat schizophrenia. Psychosocial interventions include:
- Family therapy
- Supported employment
- Skills training
- Cognitive behavioral therapy
- And others
Tracy, N. (2012, April 20). History of Schizophrenia, HealthyPlace. Retrieved on 2020, March 28 from https://www.healthyplace.com/thought-disorders/schizophrenia-information/history-of-schizophrenia |
The domestication of chickens has given rise to rapid and extensive changes in genome function. A research team at Linköping University in Sweden has established that the changes are heritable, although they do not affect the DNA structure.
Humans kept Red Junglefowl as livestock about 8000 years ago. Evolutionarily speaking, the sudden emergence of an enormous variety of domestic fowl of different colours, shapes and sizes has occurred in record time. The traditional Darwinian explanation is that over thousands of years, people have bred properties that have arisen through random, spontaneous mutations in the chickens' genes.
Linköping zoologists, with Daniel Nätt and Per Jensen at the forefront, demonstrate in their study that so-called epigenetic factors play a greater role than previously thought. The study was published in the high-ranking journal BMC Genomics.
They studied how individual patterns of gene activity in the brain were different for modern laying chickens than the original form of the species, the red jungle fowl. Furthermore they discovered hundreds of genes in which the activity was markedly different.
Degrees of a kind of epigenetic modification, DNA methylation, were measured in several thousand genes. This is a chemical alteration of the DNA molecule that can affect gene expression, but unlike a mutation it does not appear in the DNA structure. The results show clear differences in hundreds of genes.
Researchers also examined whether the epigenetic differences were hereditary. The answer was yes; the chickens inherited both methylation and gene activity from their parents. After eight generations of cross-breeding the two types of chickens, the differences were still evident.
The results suggest that domestication has led to epigenetic changes. For more than 70% of the genes, domesticated chickens retained a higher degree of methylation. Since methylation is a much faster process than random mutation, and may occur as a result of stress and other experiences, this may explain how variation within a species can increase so dramatically in such a short time. Nätt and Jensen's research may lead to a review of important foundations of the theory of evolution.
Understanding a virus
A virus is a small particle that can replicate only inside the living cells of an organism; viruses can infect all types of organisms, including animals, plants and even bacteria. Virus particles normally consist of two or three parts: the genetic material (long molecules of DNA or RNA), a protein coat, and in some cases an outer envelope of lipids. Their shapes range from simple helical forms to more complex structures, and their average size is about one-hundredth that of an average bacterium. The origin of viruses is still not clear, but they are believed to have evolved from plasmids or from bacteria. Viruses are also regarded as an important means of gene transfer that increases genetic diversity.
There are three main hypotheses that can explain the origin of viruses.
Viruses are found wherever there is life and are thought to have existed ever since living cells first formed. This hypothesis proposes that viruses were once small cells that parasitized larger ones; their dependence on parasitism caused the loss of the genes that would have enabled them to survive alone outside a cell. This is also called the degeneracy (regressive) hypothesis.
Cellular origin hypothesis
Another hypothesis for the origin of viruses is that they evolved from bits of DNA or RNA that escaped from the genes of a much larger organism. The escaped DNA could have come from plasmids or transposons. Transposons are examples of mobile genetic elements that could have given rise to viruses.
Co evolution Hypotheses
This hypothesis, also called the virus-first hypothesis, proposes that viruses evolved from complex molecules of protein and nucleic acid, possibly at the same time as the first cells appeared.
Different views still persist on the causes of viral infections. Viruses cause some of the most common types of infection in the human body. There are many types of viruses, and most of the time a particular type of virus causes a particular type of infection. Viral infections pass easily from one person to another; the virus generally moves from one person's hands to another's as people interact or touch each other.
Viruses can be passed on when an infected person comes into contact with a healthy one. Exposure to viruses can be reduced to a great extent, preventing many of the diseases that can affect the human body. Many serious or even fatal diseases are caused by viruses, so one should be careful about viral illness. Lead a healthy life, keep health problems at bay, and be happy always.
Cryptography is a method of storing and transmitting data in a particular form so that only those for whom it is intended can read and process it. Cryptography uses ciphers to encrypt the information and later decrypt the same information to the intended receiver only. Modern cryptography is heavily based on complex mathematical analysis and computer science. The algorithm is designed to ensure computational hardness assumptions making it almost impossible for the unintended recipient to break it.
Software cryptography uses software to encrypt data in a computer environment. It is highly dependent on the security of the operating system because it relies on the OS's resources. Although software-based encryption is easier to develop and maintain, it is the less preferred implementation for cryptographic modules and security-related applications. Hardware encryption is the preferred cryptographic technique in security modules. Hardware encryption uses a dedicated processor physically located on the encrypted device. This processor contains a generated key, and a user's password is required to unlock it. This leads to more security, as the authentication is performed in hardware.
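As a concrete (and deliberately simplified) illustration of software cryptography, the sketch below uses the Fernet symmetric cipher from the third-party Python cryptography package (our choice of library, not one named in the text) to encrypt and decrypt a message entirely in software. Note that the key sits in ordinary process memory, which is exactly the kind of exposure a hardware module is designed to avoid.

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # symmetric key, held in ordinary RAM by the software
cipher = Fernet(key)

token = cipher.encrypt(b"confidential message")   # ciphertext, safe to store or transmit
plain = cipher.decrypt(token)                     # only a holder of `key` can recover this

print(token)
print(plain)                                      # b'confidential message'
```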
Hardware encryption has various advantages over software encryption. Software implementations are more readily readable than hardware implementations. Code that is readily readable is highly susceptible to reverse engineering. The system can be altered to perform different unintended operations.
Many software implementations make use of DSP circuits that allow faster multiplications. A DSP ensures faster execution than regular software code. The drawback of using DSP circuits concerns security: a DSP is an open implementation that receives inputs and produces outputs over a publicly accessible bus. Using this mechanism for private-key operations is extremely risky. Since multiplications are only some of the operations required in public-key cryptography, temporary values are left on the bus between operations. These values can easily be revealed, exposing the secret value, which can then be captured by a network intruder and used to subvert the software solution.
Software systems cannot provide their own physical memory. They have to use externally available memory through an underlying operating system, and that memory may be accessed by other processes. If the operating system is not robust enough, or is compromised, the software implementation can be accessed by malicious software and altered to perform different, unintended functions.
The current trends, especially in the design of new microprocessors, are geared towards hardware cryptography, and for good reasons. With the evolution of Internet of Things (IOT), security will have to be a key factor in electronics design. This security can be best achieved through the utilization of hardware cryptography. |
In transportation infrastructure design the route is defined based on its axis (centreline) – the alignment model. This simplified abstract model is designed in such a way to clearly define the principle course of the infrastructure project.
For most of the transportation means the infrastructure alignment design is split into two main two-dimensional complex strings: the horizontal alignment and the vertical alignment.
The horizontal alignment consists of three types of elements:
- Line segments – straights
- Circular arcs
- Transition curves
The movement of the transportation vehicle over circular elements generates lateral accelerations that depend on the speed of the vehicle and on the radius of the curve. In order to avoid a sudden change of these accelerations when passing from one curve to another curve or to a straight, the horizontal alignment includes transition curves. These curves have a continuous variation of curvature, ensuring a smooth transition between the horizontal alignment elements of constant curvature.
The most well-known transition curve is the clothoid. In its real-world application the clothoid enables a car driver to ride smoothly by turning the steering wheel at a constant rate, tracing a clothoidal spiral with a continuous, linear variation of curvature.
A curve with so many names …
The clothoid equations were first defined by Leonhard Euler; this is why, in general Physics the curve is often called Euler spiral. The French physicists Augustin-Jean Fresnel and, later, Alfred Cornu, rediscovered the curve and defined its parametric equations – hence the curve is sometimes called Fresnel or Cornu spiral.
(source of the image: Levien, R. (2008) The Euler Spiral: A mathematical history.)
In 1890, Arthur N. Talbot, Professor of Municipal and Sanitary Engineering at the University of Illinois, defined for civil engineering the “railway transition curve” (Talbot – 1912), with similar equations as Euler did for elasticity and Fresnel and Cornu for optical applications.
The name clothoid was suggested by the Italian mathematician Ernesto Cesàro. The word comes from klothos, the Greek word for spinning (wool): the shape of the curve recalls the thread wrapping around a spindle. The same root appears in the name of Clotho (The Spinner), one of the three Fates, who holds the thread of human destiny.
Clothoid geometry and (some) math
The clothoid equations can be defined starting from the condition of a linear relation between radius and length, R · L = A² (the radius at any point multiplied by the arc length to that point is constant):
This defines an infinite spiral, starting from the origin (x=0, y=0, R=∞, L=0) and spinning in two infinite loops to two points where R=0 and L=∞:
The constant A is called flatness or homothetic parameter of the clothoid.
The clothoid coordinates and the majority of the other characteristic elements of the spiral can be defined based on this essential parameter. For example, the clothoid curvature gradient is 1 in A².
In road design, an alignment developed to have all the transitions with practically the same A is seen as an optimum design, providing similar comfort conditions, from the point of view of the variation of the lateral acceleration, across all the transitions of that route. Some national road design standards consider this criterion together with superelevation and aesthetic conditions (TAC – 2009).
In railway design, an alignment developed with constant A provides a constant rate of change of the equilibrium cant (the sum of the rates of change of cant and cant deficiency). If this is done together with a constant E/D ratio across all the curves, that alignment will provide practically the same comfort conditions over all the transitions and also similar wear conditions for the top of the railhead.
The angle αi, measured between the tangent at the current point i of the clothoid and the initial direction (where R = ∞), is called the direction angle and is computed from the length of the clothoid arc to the current point, Li, and the current radius, Ri:
where L is the total length of the clothoid and R its final radius.
The Cartesian coordinates of the clothoids can be defined starting from the above equation; using Euler (Fresnel) integrals for sine and cosine, these coordinates are:
Both equations have infinitely many terms. For railway track and road design, only a part of the clothoid can be used in practice – the arc starting from the origin up to the region where the deflection angle α ≈ 90°.
For this section of the clothoid arc the first 4 terms of the equations provide sufficient accuracy to allow the site tracing of the transition curve.
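As an illustration, the truncated series can be evaluated with a few lines of Python. The sketch below keeps the first terms of the x and y expansions written in terms of the flatness A and the arc length L; it is meant for checking orders of magnitude, not as a substitute for the full design formulas or tables.

```python
def clothoid_xy(A, L):
    """Approximate Cartesian coordinates of a clothoid point at arc length L,
    for a clothoid of flatness A, using the first terms of the series expansions."""
    x = L * (1 - L**4 / (40 * A**4) + L**8 / (3456 * A**8) - L**12 / (599040 * A**12))
    y = (L**3 / (6 * A**2)) * (1 - L**4 / (56 * A**4) + L**8 / (7040 * A**8))
    return x, y

# Reference clothoid A = 100 (the one traditionally tabulated), point at arc length L = 100
print(clothoid_xy(100.0, 100.0))   # roughly (97.5, 16.4)
```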
Before the introduction of the Computerised Aided Design (CAD) in the alignment design for road and railway, the clothoid coordinates were computed based on a set of tables defined for a reference clothoid (usually the one defined by Aₒ = 100) and using the following relation:
Any other clothoid can be defined based on this property (called homothety relative to the origin O) :
As the clothoid spiral develops, the circle defined by the radius R is gradually shifted away from the x axis. This shift is defined as:
The first term of this equation defines the shift for the cubic parabola.
(to be continued)
Later edit: (09/10/2016) The direction angle equation was added based on the blog comments.
Cope, G. (1993) British Railway Track – Design, Construction and Maintenance. Permanent Way Institution, Echo Press, Loughborough.
Racanel, I. (1987) Drumuri moderne. Racordari cu clotoida. (Modern roads. Clothoidal transitions). Editura Tehnica, Bucharest. Romania.
Radu, C. (2001) Parabola cubica imbunatatita (Improved cubic parabola). Technical University of Civil Engineering Bucharest.
Radu, C., Ciobanu, C. (2004) Elemente referitoare la utilizarea parabolei cubice imbunatatite (Elements related to the use of the improved cubic parabola). Third Romanian National Railway Symposium, Technical University of Civil Engineering Bucharest. Romania.
TAC (2009) Geometric Design Manual for Canadian Roads. – Volume 2. Transportation Association of Canada |
Although Vietnam has made rapid progress in improving adequate access to clean water over the past decades, the poorest communities – particularly the ethnic minorities and those living in remote rural areas – are not gaining as much.
Nation-wide, 1.1 percent of the urban population and 8.6 percent of the rural population practice open defecation. Open defecation is largely practiced by the poor (22.9 percent) and ethnic minority groups (27.5 percent). Water contaminated by human waste spreads numerous diseases such as diarrhea, cholera, dysentery, hepatitis A and typhoid, especially in the wake of natural disasters.
Access to safe sanitation and clean water ─ fundamental tools of public health improvement ─ goes hand-in-hand with decent housing. Habitat for Humanity helps to provide families with affordable access to adequate water and sanitation facilities through microloans. Communities also receive educational training in basic health knowledge, the importance of clean water and safe sanitation as well as proper personal hygiene. In addition, Habitat provides technical support for the construction of:
- Rainwater harvesting systems and septic tanks
- Water filters |
It takes children a while to learn to differentiate between nouns, verbs and adjectives. Every year there are always some children in year 1 that I tutor and they always get stuck here. Some of them can easily tell me the meaning of nouns, verbs or adjectives - they have memorised it. But it doesn't mean that they have understood it.
The solution is to keep practicing differentiating between the three. To help you to help your child, use this parts of speech word sort worksheet (pdf). When s/he has finished it you can go through each group. With the nouns, ask your child to bring you or show you these things. For verbs, have your child perform each action. As for adjectives, have him or her show you what something would look like if it were described by that word.
If this worksheet has helped in any way, then tell me about it in the comments below. |
George Polya was a mathematician. Like most mathematicians, he was concerned with very strange concepts. One of them was the idea of "random walks," or the completely random path a strolling insect might take. He took this concept and expanded it until he could prove the chances of getting hopelessly, unendingly lost in the universe. Find out why.
Let's say that there is a universe that has nothing but space, time, and an immortal bug (hey, there are stranger ideas). This bug will always keep walking, no matter what. And since it has no higher purpose (and since any one place in the universe it occupies is just as good as another), its walk is completely random. In 1921, in our universe — which has many more diversions than the bug universe — Hungarian math professor George Polya considered the odds that the beetle would ever make it back to the spot it originally inhabited.
In a one-dimensional universe (essentially a universe along a straight line) the bug would — with enough random walking and an infinite amount of time — make it back to its original starting point. It might make three steps forward and three steps back, and be done within a minute. Or it could wander for centuries. Eventually, though, it had to return home.
In a two-dimensional universe, a universe in which the beetle could take any path along a plane, the beetle would also always return home. This one might take a little longer, since the beetle has more freedom to wander, but the fact is that it would return to the place it started.
A three-dimensional universe (like the one we live in) is the first universe in which the beetle might not make it home again. Polya crunched the numbers and came up with a disturbing conclusion. Assuming the bug could take any random walk through three-dimensional space, its chances of making it back home (after an infinite amount of time) are 0.34, or thirty-four percent. In our 3D universe, for the first time, there is such a thing as "never being able to go home again." No matter how long it wanders, the beetle has a two-thirds chance of being hopelessly lost.
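Polya's three-dimensional figure can be estimated (roughly) by simulation. A minimal Monte Carlo sketch in Python follows; since a program can only follow finitely many steps, the estimate tends to land a little under the exact value of about 0.34.

```python
import random

def returns_home(dimensions=3, max_steps=10_000):
    """Simulate one random walk on the integer lattice; True if it ever revisits the origin."""
    position = [0] * dimensions
    for _ in range(max_steps):
        axis = random.randrange(dimensions)
        position[axis] += random.choice((-1, 1))
        if all(coordinate == 0 for coordinate in position):
            return True
    return False                      # never came home within the allotted steps

trials = 2000
estimate = sum(returns_home() for _ in range(trials)) / trials
print(estimate)                       # typically around 0.33 for 3 dimensions
```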
It gets worse from there on up. For an n-dimensional universe, a random walk's chance of taking you back home is roughly 1/(2n) when n is large. The longer you walk, the less of a shot you have of going home, since you're likely to get more and more lost. So if you get dropped into a six-dimensional world, it's useful to pretend you just got lost in the woods. Don't try to find your way back. Just sit down, try to stay warm, and wait for search-and-rescue to come find you. So yes, carry a whistle if you ever find yourself in six dimensions.
It's strange that the universe we occupy is the first one that carries the possibility of never getting the desired outcome. We occupy the world in which, even with an infinite amount of time, hopelessness is possible. On the other hand, it's all a matter of perspective. Plenty of people like the idea of permanently shaking a place that they don't like. Maybe we should look at George Polya's strange work as proof that we are in the first dimensional universe in which boredom is avoidable. |
Historians have identified more than twenty dynasties in Chinese history. Usually the transition between dynasties was caused by an invasion or a rebellion; very often the new rulers tried to erase all references to their predecessors by destroying their palaces and funerary monuments and by relocating the capital of the country. From this point of view the accession to power of the Qing dynasty, which took place around 1644, was marked by a more respectful approach towards what had been done by the previous rulers, the Ming Emperors. This occurred because the Qing claimed to be the avengers of the last Ming Emperor, who was overthrown by an internal rebellion and committed suicide. The Qing were the leaders of the Manchu, an ethnic minority group, and they did not bring significant changes to Chinese art: this explains why the tombs of the Ming emperors were not destroyed or modified.
(left) Shengong Shengde Stela Pavilion; (right) one of the "huabiao" (pillars of glory) placed near the pavilion
The tombs are situated in a valley some thirty miles to the north of Beijing; the chosen site complied with Feng shui rules, the set of ancient astronomical/philosophical doctrines which were embedded in Chinese culture. The access to the valley is from the south; the beginning of the area where the tombs are located is announced by a large gate; detailed funerary processions started from this point.
The Qing emperors added triumphal columns which are decorated with reliefs showing an entwined dragon; they support the statue of a hou, a mythical creature having the habit of watching the sky.
Tortoise inside Shengong Shengde Stela Pavilion
Another addition/restoration made in 1785 by the Qing is the gigantic statue of a tortoise supporting a tablet celebrating the dead emperors. This beast was a symbol of water and of the North, but in the context of a burial site it represented longevity. Although the representation of animals (true or mythological) spans over the whole history of China it was particularly frequent during the Qing dynasty.
Final part of the sacred way
In architecture symmetry is closely associated with beauty and with conveying to the viewer a feeling of power; this concept was also common in China and can be observed in the Forbidden City, but it was departed from in the design of the alley leading to the tombs: in order to lead evil spirits astray, the alley makes a bend.
A standing lion and a standing horse
The tombs were guarded by pairs of animals which were portrayed standing and seated; the different postures signified day and night service to the dead emperors. The finest statues are those of mythical beasts, or of real animals treated with a lot of fantasy, such as the lions, while those portraying horses and camels are uninspiring. The statues date from the XVth century.
A pair of "xiechi"
The xiechi (or xiezhi) is a horned, cat-like beast that was believed to have the power to discern right from wrong and to keep evil spirits away. The overall design of the site does not fit into a specific belief, although the Ming emperors are known for having mildly favoured Buddhism. It can be regarded as an example of religious syncretism, the blending of different traditions.
A military leader
The statues guarding the tombs include six pairs of civil and military officials. These statues are called wengzhong after Ruan Wengzhong, a legendary general who distinguished himself in fighting the Huns. The military officials wear elaborate armour, hold a weapon in one hand and carry a sword at the waist; they look determined and powerful.
A civilian officer and a detail highlighting his headdress
The civilian officers wear official and very unusual hats and hold the court sceptre in their hands; they have the attitude of gentle philosophers.
Ling En Gate
The end of the long alley (the Shendao, or sacred way) guarded by the statues was also the end of the common path of the funerary processions; there are thirteen tombs of Ming emperors in the valley. That of Emperor Yongle (1402-24), who moved the capital from Nanjing to Beijing, is accessed through Ling En Gate, a platform surrounded by balustrades, very similar to the gates of the Forbidden City.
Ling En Hall
The gate leads to a courtyard closed by a larger pavilion; the yellow tiles of the roof and the series of nine glazed ceramic figures are a sign that the building was meant for an emperor.
(left) Neihong Gate; (right) Erzhu Gate and Fangcheng (Square Tower)
Neihong Gate gives access to the last courtyard, at the centre of which a gate with a step prevented evil spirits from reaching the final building, a sort of tower. Visitors leaving the site are advised to cross the step with the same foot they used on the way in; this brings luck, or at least avoids unlucky consequences.
Detail of the Incense Burner
The final ceremonies took place in the last courtyard where there is an incense burner similar to a small temple: it is made of yellow and green glazed materials. The actual tomb of Emperor Yongle is inside a mound, a reference to a tortoise, symbol of longevity and eternal memory.
(left) Detail of the balustrade; (right) modern statue of Emperor Yongle
Word may refer to any of the following:
1. In general, a word is a single element of speech that is typically separated by spaces and helps form a sentence. For example, this sentence contains seven words. The English language contains several hundred thousand different words, and Computer Hope lists over 14,000 computer-related words in its computer dictionary.
2. In computing, a word is a unit of data whose size is commonly assumed to be 16 bits. However, the word size can be any fixed value defined by a processor architecture; common sizes include 16, 18, 24, 32, 36, 40, 48, and 64 bits (see the short sketch after this list for one way to check your own machine).
3. Sometimes called Winword, MS Word, or Word, Microsoft Word is a word processor published by Microsoft. It is one of the office productivity applications included in Microsoft Office. Originally developed by Charles Simonyi and Richard Brodie, it was first released in 1983.
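As a quick illustration related to sense 2 above (and not part of the original definitions), the short Python sketch below prints the sizes of a few native C types on whatever machine it runs on, showing how the practical size of a "word" depends on the platform.

import ctypes

# ctypes reports sizes in bytes; multiply by 8 for bits.
# On a typical 64-bit system a pointer occupies a 64-bit word,
# while an int is usually 32 bits.
for name, ctype in [("short", ctypes.c_short),
                    ("int", ctypes.c_int),
                    ("long", ctypes.c_long),
                    ("pointer", ctypes.c_void_p)]:
    print(f"{name}: {ctypes.sizeof(ctype) * 8} bits")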
What is Word used for?
Microsoft Word allows you to create professional-quality documents, reports, letters, and résumés. Unlike a plain text editor, Microsoft Word has features including spell check, grammar check, text and font formatting, HTML support, image support, advanced page layout, and more.
What does the Word editor look like?
Below is an overview of a Microsoft Word 2010 document.
Where do you find or start Microsoft Word?
If you have Microsoft Word or the entire Microsoft Office package installed on Microsoft Windows, you can find Microsoft Word in your Start Menu.
Keep in mind that new computers do not include Microsoft Word. It must be purchased and installed before it can be run on your computer. If you do not want (or cannot afford) to purchase Microsoft Word, you can use a limited version for free at www.office.com.
Note: there are also free word processing programs you can try that are very similar to Microsoft Word.
If Microsoft Word is installed on your computer, but you can't find it in your Start Menu, use the following steps to manually launch Microsoft Word:
- Open My Computer.
- Click on or select the C: drive. If Microsoft Office is installed on a drive other than the C: drive, select that drive instead.
- Navigate to the Program Files (x86) folder, then the Microsoft Office folder.
- In the Microsoft Office folder, if there is a root folder, open it. Then open the OfficeXX folder, where XX is the version of Office (e.g. Office16 for Microsoft Office 2016). If there is no root folder, look for and open a folder having "Office" in the name.
- Look for a file named WINWORD.EXE and click or double-click that file to start the Microsoft Word program.
What type of files can Microsoft Word create and use?
Early versions of Microsoft Word primarily created and used the .doc file extension, while newer versions (Word 2007 and later) create and use the .docx file extension.
More recent versions of Microsoft Word can create and open the following types of files:
- .doc, .docm, .docx
- .dot, .dotm, .dotx
- .htm, .html
- .mht, .mhtml
Example of a Microsoft Word .doc file
You can download an example of a Microsoft Word .doc document by clicking the Microsoft Word .doc file link. |
Overview of Peste des Petits Ruminants
Peste des petits ruminants (PPR) is an acute or subacute viral disease of goats and sheep characterized by fever, necrotic stomatitis, gastroenteritis, pneumonia, and sometimes death. It was first reported in Cote d’Ivoire (the Ivory Coast) in 1942 and subsequently in other parts of West Africa. Goats and sheep appear to be equally susceptible to the virus, but goats exhibit more severe clinical disease. The virus also affects several wild small ruminant species. Cattle, buffalo, and pigs are only subclinically infected. People are not at risk.
The causal virus, a member of the Morbillivirus genus in the family Paramyxoviridae, preferentially replicates in lymphoid tissues and epithelial tissue of the GI and respiratory tracts, where it produces characteristic lesions.
PPR has been reported in virtually all parts of the African continent, except for the southern tip; the Middle East; and the entire Indian subcontinent. In the last 15 yr, PPR has rapidly expanded within Africa and to large parts of Central Asia, South Asia, and East Asia (including China).
Because PPR virus and the now-eradicated rinderpest virus (see Rinderpest) are cross-protective, it is possible that the recent rapid expansion of PPR virus within endemic zones and into new regions reflects the disappearance of the cross-protection previously afforded by natural rinderpest infection of small ruminants and/or by the earlier use of rinderpest vaccine to prevent small ruminant infection with PPR virus in certain endemic areas. Based on this theory, PPR virus has the potential to cause severe epidemics, or even pandemics, in small ruminant populations across an ever-expanding area of the developing world.
At a local level, such epidemics may eliminate the entire goat or sheep population of an affected village. Between epidemics, PPR can assume an endemic profile. Mortality and morbidity rates vary within an infected country, presumably due to two factors: the varying immune status of the affected populations and varying levels of viral virulence.
Transmission is by close contact, and confinement seems to favor outbreaks. Secretions and excretions of sick animals are the sources of infection. Transmission can occur during the incubation period. It is generally accepted that there is no carrier state. The common husbandry system whereby goats roam freely in urban areas contributes to spread and maintenance of the virus. There are also numerous instances of livestock dealers being associated with the spread of infection, especially during religious festivals when the high demand for animals increases the trade in infected stock.
Several species of gazelle, oryx, and white-tailed deer are fully susceptible; these and other wild small ruminants may play a role in the epidemiology of the disease, but few epidemiologic data are available for PPR in wild small ruminants. Cattle, buffalo, and pigs can become naturally or experimentally infected with PPR virus, but these species are dead-end hosts, because they do not exhibit any clinical disease and do not transmit the virus to other in-contact animals of any species.
The acute form of PPR is accompanied by a sudden rise in body temperature to 40°–41.3°C (104°–106°F). Affected animals appear ill and restless and have a dull coat, dry muzzle, congested mucous membranes, and depressed appetite. Early, the nasal discharge is serous; later, it becomes mucopurulent and gives a putrid odor to the breath. The incubation period is usually 4–5 days. Small areas of necrosis may be observed on the mucous membrane on the floor of the nasal cavity. The conjunctivae are frequently congested, and the medial canthus may exhibit a small degree of crusting. Some affected animals develop a profuse catarrhal conjunctivitis with matting of the eyelids. Necrotic stomatitis affects the lower lip and gum and the gumline of the incisor teeth; in more severe cases, it may involve the dental pad, palate, cheeks and their papillae, and the tongue. Diarrhea may be profuse and accompanied by dehydration and emaciation; hypothermia and death follow, usually after 5–10 days. Bronchopneumonia, characterized by coughing, may develop at late stages of the disease. Pregnant animals may abort. Morbidity and mortality rates are higher in young animals than in adults.
Emaciation, conjunctivitis, and stomatitis are seen; necrotic lesions are observed inside the lower lip and on the adjacent gum, the cheeks near the commissures, and on the ventral surface of the tongue. In severe cases, the lesions may extend to the hard palate and pharynx. The erosions are shallow, with a red, raw base and later become pinkish white; they are bounded by healthy epithelium that provides a sharply demarcated margin. The rumen, reticulum, and omasum are rarely involved. The abomasum exhibits regularly outlined erosions that have red, raw floors and ooze blood.
Severe lesions are less common in the small intestines than in the mouth, abomasum, or large intestines. Streaks of hemorrhages, and less frequently erosions, may be present in the first portion of the duodenum and terminal ileum. Peyer’s patches are severely affected; entire patches of lymphoid tissue may be sloughed. The large intestine is usually more severely affected, with lesions developing around the ileocecal valve and at the cecocolic junction and rectum. The latter exhibits streaks of congestion along the folds of the mucosa, resulting in the characteristic “zebra-striped” appearance.
Petechiae may appear in the turbinates, larynx, and trachea. Patches of bronchopneumonia may be present.
A presumptive diagnosis is based on clinical, pathologic, and epidemiologic findings and may be confirmed by viral isolation and identification. Historically, simple techniques such as agar-gel immunodiffusion have been used in developing countries for confirmation and reporting purposes. However, PPR virus cross-reacts with rinderpest virus in these tests. Virus isolation is a definitive test but is labor intensive, cumbersome, and takes a long time to complete. Currently, antigen capture ELISA and reverse transcription-PCR are the preferred laboratory tests for confirmation of the virus. For antibody detection (such as might be needed for epidemiologic surveillance, confirmation of vaccine efficacy, or confirmation of absence of the disease in a population), competitive ELISA and virus neutralization are the OIE-recommended tests. The specimens required are lymph nodes, tonsils, spleen, and whole lung for antigen or nucleic acid detection, and serum (from clotted blood) for antibody detection. The virus neutralization test may also be used to confirm an infection if paired serum samples from a surviving animal yield rising titers of ≥4-fold. PPR must be differentiated from other GI infections (eg, GI parasites), respiratory infections (eg, contagious caprine pleuropneumonia), and such other diseases as contagious ecthyma, heartwater, coccidiosis, and mineral poisoning.
Local and federal authorities should be notified when PPR is suspected. PPR is also an OIE-reportable disease worldwide. Eradication is recommended when the disease appears in previously PPR-free countries. There is no specific treatment, but treatment for bacterial and parasitic complications decreases mortality in affected flocks or herds. An attenuated PPR vaccine prepared in Vero cell culture is available and affords protection from natural disease for >1 yr. Encouraged by the successful global eradication of rinderpest, international organizations such as OIE, Food and Agriculture Organization of the United Nations (FAO), and International Atomic Energy Agency (IAEA) are making plans (2015) for global eradication of PPR. The available homologous PPR vaccine would play an important role in that effort. |
Compare School Environment to the Rain forest
Show the PBS Kids: Plum Landing intro jungle video, which is about 4 minutes. Have a discussion and create a WeKWL chart about what students know about tropical rain forests, specifically focusing on the plants and animals there. Continue to add to the WeKWL chart throughout the lesson.
2 Direct Instruction
Use Nearpod to teach students about the rain forest and read aloud Magic School Bus: In the Rain Forest by Eva Moore.
3 Guided Practice
Give students about half an hour to learn about plants and animals in the rain forest by reading books in the Epic app. A search for "rainforest" brings up several books that are appropriate for this age range. Remind students to focus specifically on plants and animals and to add new knowledge to the class WeKWL.
To learn more about rain forest plants and animals, give students 15 minutes to play the Jungle Rangers activity on the PBS website. They will not complete Jungle Rangers in this amount of time, but it will give them a chance to learn about rain forest animal adaptations and camouflage and plant seed dispersal and pollination as well as layers of the rain forest.
Pair share and then have a whole class discussion about similarities and differences between rain forest plants and animals and the plants and animals that live around the school.
4 Independent Practice
Have students use the Plum's Photo Hunt app to take pictures of plants and animals around the school using the photo missions portion of the app. For this lesson I limited my students to taking pictures of animal life, trees and flowers, animal habitats, and insects and spiders. For each photo they took they used the field journal section of the app to write a complete sentence caption describing the photo.
As an added challenge students could add more to the caption comparing or contrasting the plant or animal to a plant or animal they learned about in the rain forest.
Students presented their field journals to the class using AirServer. If they did not include compare/contrast comments in their field journal, they added this information as they shared their photos with the class. |
THE INTERNATIONAL BACCALAUREATE DIPLOMA CURRICULUM FOR ENGLISH
klik hier voor de Nederlandse versie
Language A2 (vwo) – this is a language course for near-native or native speakers in which pupils study both language and literature. Pupils will thus be able to use the language for purposes and in situations involving sophisticated discussion, argument and debate. Language A2 courses are available at both Higher and Standard Level.
Language B (havo) – this is a language learning course for pupils with some previous experience of learning the target language. The main focus of this course is on language acquisition, and pupils have the opportunity to reach a high degree of competence in a language and explore the culture using the language. The range of purposes and situations for which and in which the language is used extends to the domains of work, social relationships, and the discussion of abstract ideas. The English Language B course is available at Higher and Standard Level.
Languages A2 Higher and Standard Level (vwo)
The IB course is not based on topics but on registers (e.g. formal and informal letters, diary, editorial, brochure, essay), and so the standard of writing expected for IB is high. Topics studied in class are extremely varied and could include issues such as immigration, education, media, and literature. Pupils should gain an insight into, and an appreciation of, Anglo-Saxon culture. The HL and SL courses have identical syllabuses and examinations, though HL pupils examine topics in more depth and study more literature than SL pupils. Thus, while the same principles underlie both courses, the HL examination mark schemes are naturally more rigorous.
Core Content: The course presupposes a near-native mastery of the language in question. Thus it is not a language acquisition course. Pupils study oral and written forms of the language in a range of styles, registers, and situations; how to structure arguments in a focused, coherent and persuasive way; how to engage in detailed, critical examination of a wide range of texts in different forms, styles, and registers; and how to compare different texts.
Options: Language and Culture, Media and Culture, Future Issues, Global Issues, Social Issues, and Literary Options
Internal Assessment: 30% - two orals
o one group oral (15%)
o one individual (15%)
External Assessment: 70% - examination
o Comparative text commentary: Candidates write a comparative commentary on a pair of texts – 25%.
o Essay: Candidates answer one essay question from a choice of 10 on the option topics above – 25%.
o Written Tasks: Candidates complete two written tasks - one is based on a literary option, the other on a cultural option (e.g. letter to the editor about advertising) – 20%.
Languages B Higher Level (havo)
Speaking: Pupils aim to become fluent in the target language. By the end of the course they should be able to use a range of tenses, vocabulary and registers in spontaneous formal and informal conversation.
Reading: Pupils need to interpret a variety of authentic texts and show understanding of specific language items. Pupils must also understand the overall meaning of texts, for example by writing a letter in response to a given text.
Writing: Pupils must be able to convey ideas clearly, grammatically and coherently.
Internal Assessment: Interactive oral activity: 15%. Individual oral: 15%.
External Assessment: 70% - examination
Paper 1: Text handling and written response in target language (40%).
Paper 2: Two pieces of writing in the target language using a variety of registers (30%). |
This list is not meant to be all-inclusive. Rather, it should be used to stimulate and encourage other ideas and possibilities on the part of the students.
There is certainly lots to explore, lots to discover, and lots to investigate in a science fair.
- How do temperature changes affect a fish?
- Do preservatives stop bread mold from growing?
- How leaves lose water
- The effect of sunlight on plants
- What fabrics make good insulators?
- Materials that are the best conductors of electricity
- How are crystals formed?
- Removing salt from water
- The three layers of the earth
- Create your own fossils
- The ocean floor
- Taste buds on the tongue
- What does a magnetic field look like?
- Properties of minerals
- Food chains and food webs
- How animals live underground
- The life cycle of non-seed plants
- How plants make food
- How animals and plants adapt in order to survive
- How rocks are formed
- How air temperature changes
- Similarities and differences between the planets
- Compare predicted weather with actual weather
- Bird's nest
- Series and parallel circuits
Many different kinds of people use Augmentative and Alternative Communication (AAC) to speak and connect with the ones they love. Severe speech impediments do not discriminate and can arise for a number of different reasons. Most of the people who use AAC devices were born with a condition that limits their ability to speak; most commonly, these impairments include, but are not limited to, apraxia of speech, autism spectrum disorders, cerebral palsy, and chromosomal disorders. Other users include people who developed a disease or experienced a traumatic event. Those with ALS often use AAC devices in the later stages of the disease. Patients who have had a stroke often lose the ability to speak and therefore must use these devices until they regain that ability through speech therapy. Those with traumatic brain or spinal injuries also use AAC devices, as the brain and spinal cord are essential to our ability to speak.
Students in classrooms using AAC would most likely be students who were born with a condition that limits their ability to speak. Most will likely be in a classroom dedicated to students who have disabilities, so it is incredibly important that a special education teacher knows how to use many different types of AAC devices. Additionally, each student may have a different ability to use the device. Many disorders that cause a speech impediment also cause physical impairments often leaving students in a wheelchair with limited ability to move. Therefore, the teacher must be able to help different students use their devices if they have trouble. AAC devices are incredibly important and help so many students and adults alike communicate with those they love. |
Auxins (plural of auxin) are a class of plant hormones (or plant growth substances) with some morphogen-like characteristics. Auxins have a cardinal role in coordination of many growth and behavioral processes in the plant's life cycle and are essential for plant body development. Auxins and their role in plant growth were first described by the Dutch scientist Frits Warmolt Went. Kenneth V. Thimann isolated this phytohormone and determined its chemical structure as indole-3-acetic acid (IAA). Went and Thimann co-authored a book on plant hormones, Phytohormones, in 1937.
Auxins were the first of the major plant hormones to be discovered. They derive their name from the Greek word αυξειν (auxein - "to grow/increase"). Auxin (namely IAA) is present in all parts of the plant body, although in very different concentrations. The concentration at each position is crucial developmental information, and so it is subject to tight regulation through both metabolism and transport. The result is that auxin creates "patterns" of concentration maxima and minima in the plant body, which in turn guide further development of the respective cells, and ultimately of the plant as a whole.
The (dynamic and environment responsive) pattern of auxin distribution within the plant is a key factor for plant growth, its reaction to its environment, and specifically for development of plant organs (such as leaves or flowers). It is achieved through very complex and well coordinated active transport of auxin molecules from cell to cell throughout the plant body — by the so-called polar auxin transport. Thus, a plant can (as a whole) react to external conditions and adjust to them, without requiring a nervous system. Auxins typically act in concert with, or in opposition to, other plant hormones. For example, the ratio of auxin to cytokinin in certain plant tissues determines initiation of root versus shoot buds.
On the molecular level, all auxins are compounds with an aromatic ring and a carboxylic acid group. The most important member of the auxin family is indole-3-acetic acid (IAA). IAA generates the majority of auxin effects in intact plants and is the most potent native auxin. As the native auxin, its levels are controlled in many ways in plants, from synthesis, through possible conjugation, to degradation of its molecules, always according to the requirements of the situation. However, molecules of IAA are chemically labile in aqueous solution, so IAA is not used commercially as a plant growth regulator.
- The four naturally occurring (endogenous) auxins are IAA, 4-chloroindole-3-acetic acid, phenylacetic acid and indole-3-butyric acid; only these four have been found to be synthesized by plants. However, most of the knowledge of auxin biology described so far, and as described in the article below, applies basically to IAA; the other three endogenous auxins seem to have rather marginal importance for intact plants in natural environments. Alongside endogenous auxins, scientists and manufacturers have developed many synthetic compounds with auxinic activity.
- Synthetic auxin analogs include 1-naphthaleneacetic acid, 2,4-dichlorophenoxyacetic acid (2,4-D), and many others.
Some synthetic auxins, such as 2,4-D and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), are used also as herbicides. Broad-leaf plants (dicots), such as dandelions, are much more susceptible to auxins than narrow-leaf plants (monocots) such as grasses and cereal crops, so these synthetic auxins are valuable as synthetic herbicides.
Auxins are also often used to promote initiation of adventitious roots, and are the active ingredient of the commercial preparations used in horticulture to root stem cuttings. They can also be used to promote uniform flowering and fruit set, and to prevent premature fruit drop.
Discovery of auxin
In 1881, Charles Darwin and his son Francis performed experiments on coleoptiles, the sheaths enclosing young leaves in germinating grass seedlings. They exposed the coleoptiles to light from a unidirectional source and observed that they bend towards the light. By covering various parts of the coleoptiles with a light-impermeable opaque cap, the Darwins discovered that light is detected by the coleoptile tip, but that bending occurs in the hypocotyl. The seedlings showed no sign of bending towards the light if the tip was covered with an opaque cap, or if the tip was removed. The Darwins concluded that the tip of the coleoptile was responsible for sensing light, and proposed that a messenger is transmitted in a downward direction from the tip of the coleoptile, causing it to bend.
In 1913, the Danish scientist Peter Boysen-Jensen demonstrated that the signal was not fixed in place but mobile. He separated the tip from the remainder of the coleoptile with a cube of gelatine, which prevented cellular contact but allowed chemicals to pass through. The seedlings responded normally, bending towards the light. However, when the tip was separated by an impermeable substance, there was no curvature of the stem.
In 1926, the Dutch botanist Frits Warmolt Went showed that a chemical messenger diffuses from coleoptile tips. Went's experiment identified how a growth-promoting chemical causes a coleoptile to grow towards light. Went cut the tips of the coleoptiles and placed them in the dark, putting a few tips on agar blocks that he predicted would absorb the growth-promoting chemical. On control coleoptiles, he placed a block that lacked the chemical. On others, he placed blocks containing the chemical, either centred on top of the coleoptile to distribute the chemical evenly or offset to increase the concentration on one side. When the growth-promoting chemical was distributed evenly, the coleoptile grew straight. If the chemical was distributed unevenly, the coleoptile curved away from the side with the block, as if growing towards light, even though it was grown in the dark. Went later proposed that the messenger substance is a growth-promoting hormone, which he named auxin, that becomes asymmetrically distributed in the bending region. Went concluded that auxin is at a higher concentration on the shaded side, promoting cell elongation, which results in the coleoptile bending towards the light.
Auxins coordinate development at all levels in plants, from the cellular level, through organs, and ultimately to the whole plant.
Auxin molecules present in cells may trigger responses directly through stimulation or inhibition of the expression of sets of certain genes or by means independent of gene expression. Auxin transcriptionally activates four different families of early genes (aka primary response genes), so-called because the components required for the activation are preexisting, leading to a rapid response. The families are glutathione S-transferases, auxin homeostasis proteins like GH3, SAUR genes of currently unknown function, and the Aux/IAA repressors.
Aux/IAA, ARF, TIR1, SCF auxin regulatory pathways
The Aux/IAA repressors provide an example of one of the pathways leading to auxin induced changes of gene expression. This pathway involves the protein families TIR1 (transport inhibitor response1), ARF (auxin response factor), Aux/IAA transcriptional repressors, and the ubiquitin ligase complex that is a part of the ubiquitin-proteasome protein degradation pathway. ARF proteins have DNA binding domains and can bind promoter regions of genes and activate or repress gene expression. Aux/IAA proteins can bind ARF proteins sitting on gene promoters and prevent them from doing their job. TIR1 proteins are F-box proteins that have three different domains giving them the ability to bind to three different ligands: an SCFTIR1 ubiquitin ligase complex (using the F-box domain), auxin (so TIR1 proteins are auxin receptors), and Aux/IAA proteins (via a degron domain). Upon binding of auxin, a TIR1 protein's degron domain has increased affinity for Aux/IAA repressor proteins, which when bound to TIR1 and its SCF complex undergo ubiquitination and subsequent degradation by a proteasome. The degradation of Aux/IAA proteins frees ARF proteins to activate or repress genes at whose promoters they are bound.
Within a plant, elaboration of the Aux/IAA repressor pathway takes place via diversification of the TIR1, ARF, and Aux/IAA protein families. Each family may contain many similar-acting proteins, differing in qualities such as degree of affinity for partner proteins, amount of activation or repression of target gene transcription, or domains of expression (e.g. different plant tissues might express different members of the family, or different environmental stresses might activate expression of different members). Such elaboration permits the plant to use auxin in a variety of ways depending on the needs of the tissue and plant.
Other auxin regulatory pathways
Another protein, auxin-binding protein 1 (ABP1), is a putative receptor for a different signaling pathway, but its role is as yet unclear. Electrophysiological experiments with protoplasts and anti-ABP1 antibodies suggest ABP1 may have a function at the plasma membrane, and cells can possibly use ABP1 proteins to respond to auxin through means faster and independent of gene expression.
On a cellular level
On the cellular level, auxin is essential for cell growth, affecting both cell division and cellular expansion. Auxin concentration level, together with other local factors, contributes to cell differentiation and specification of the cell fate.
Depending on the specific tissue, auxin may promote axial elongation (as in shoots), lateral expansion (as in root swelling), or isodiametric expansion (as in fruit growth). In some cases (coleoptile growth), auxin-promoted cellular expansion occurs in the absence of cell division. In other cases, auxin-promoted cell division and cell expansion may be closely sequenced within the same tissue (root initiation, fruit growth). In a living plant, auxins and other plant hormones nearly always appear to interact to determine patterns of plant development.
Growth of cells contributes to the plant's size, unevenly localized growth produces bending, turning and directionalization of organs- for example, stems turning toward light sources (phototropism), roots growing in response to gravity (gravitropism), and other tropisms originated because cells on one side grow faster than the cells on the other side of the organ. So, precise control of auxin distribution between different cells has paramount importance to the resulting form of plant growth and organization.
Auxin transport and the uneven distribution of auxin
To cause growth in the required domains, auxins must of necessity be active preferentially in them. Auxins are not synthesized in all cells (even if cells retain the potential ability to do so, only under specific conditions will auxin synthesis be activated in them). Auxins therefore have not only to be translocated toward the sites where they are needed, but there must also be an established mechanism to detect those sites. Translocation is driven throughout the plant body, primarily from the apices of shoots to the apices of roots (from top to bottom).
For long distances, relocation occurs via the stream of fluid in phloem vessels, but, for short-distance transport, a unique system of coordinated polar transport directly from cell to cell is exploited. This short-distance, active transport exhibits some morphogenetic properties.
This process, polar auxin transport, is directional, very strictly regulated, and based in uneven distribution of auxin efflux carriers on the plasma membrane, which send auxins in the proper direction. Pin-formed (PIN) proteins are vital in transporting auxin.
The regulation of PIN protein localisation in a cell determines the direction of auxin transport out of that cell, and the concerted effort of many cells creates peaks of auxin, or auxin maxima (regions whose cells have higher auxin). Proper and timely auxin maxima within developing roots and shoots are necessary to organise the development of the organ. Surrounding the auxin maxima are cells with low-auxin troughs, or auxin minima. For example, in the Arabidopsis fruit, auxin minima have been shown to be important for its tissue development.
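As a purely illustrative toy model (not one of the published auxin-transport models), the short Python sketch below shows how strictly directional efflux alone can pile auxin up into a maximum at one end of a file of cells: each cell hands a fixed fraction of its auxin to the neighbour its PIN carriers point to, a source cell keeps producing, and every cell slowly degrades what it holds. All numbers here are made up for the demonstration.

def simulate(n_cells=10, steps=500, efflux=0.2, synthesis=1.0, decay=0.02):
    # One-dimensional file of cells; PIN carriers all point toward higher indices.
    auxin = [0.0] * n_cells
    for _ in range(steps):
        auxin[0] += synthesis                        # source cell (e.g. shoot apex)
        transfer = [a * efflux for a in auxin[:-1]]  # directional ("polar") efflux
        for i, tr in enumerate(transfer):
            auxin[i] -= tr
            auxin[i + 1] += tr
        auxin = [a * (1.0 - decay) for a in auxin]   # uniform slow degradation
    return auxin

if __name__ == "__main__":
    for i, a in enumerate(simulate()):
        print(f"cell {i}: {a:6.1f}")   # the last cell ends up as the auxin maximum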
Organization of the plant
As auxins contribute to organ shaping, they are also fundamentally required for proper development of the plant itself. Without hormonal regulation and organization, plants would be merely proliferating heaps of similar cells. Auxin employment begins in the embryo of the plant, where directional distribution of auxin ushers in subsequent growth and development of primary growth poles, then forms buds of future organs. Next, it helps to coordinate proper development of the arising organs, such as roots, cotyledons and leaves and mediates long distance signals between them, contributing so to the overall architecture of the plant. Throughout the plant's life, auxin helps the plant maintain the polarity of growth, and actually "recognize" where it has its branches (or any organ) connected.
An important principle of plant organization based upon auxin distribution is apical dominance: the auxin produced by the apical bud (or growing tip) diffuses (and is transported) downwards and inhibits the development of the lateral buds further down, which would otherwise compete with the apical tip for light and nutrients. Removing the apical tip and its suppressive auxin allows the lower, dormant lateral buds to develop, and the buds between the leaf stalk and stem produce new shoots which compete to become the lead growth. The process is actually quite complex, because auxin transported downwards from the lead shoot tip has to interact with several other plant hormones (such as strigolactones or cytokinins) at various positions along the growth axis of the plant body to achieve this phenomenon. This plant behavior is used in pruning by horticulturists.
Finally, the sum of auxin arriving from stems to roots influences the degree of root growth. If shoot tips are removed, the plant does not react just by the outgrowth of lateral buds, which are supposed to replace the original lead. A smaller amount of auxin arriving at the roots also results in slower root growth, and nutrients are subsequently invested to a higher degree in the upper part of the plant, which hence starts to grow faster.
Auxin participates in phototropism, geotropism, hydrotropism and other developmental changes. The uneven distribution of auxin, due to environmental cues, such as unidirectional light or gravity force, results in uneven plant tissue growth, and generally, auxin governs the form and shape of plant body, direction and strength of growth of all organs, and their mutual interaction.
Auxin stimulates cell elongation by stimulating wall-loosening factors, such as expansins, to loosen cell walls. The effect is stronger if gibberellins are also present. Auxin also stimulates cell division if cytokinins are present. When auxin and cytokinin are applied to callus, rooting can be generated if the auxin concentration is higher than the cytokinin concentration, while xylem tissues can be generated when the auxin concentration is equal to that of the cytokinins.
Auxin also induces sugar and mineral accumulation at the site of application.
Root growth and development
Auxins promote root initiation. Auxin induces both growth of pre-existing roots and adventitious root formation, i.e., branching of the roots. As more native auxin is transported down the stem to the roots, the overall development of the roots is stimulated. If the source of auxin is removed, for example when the tips of stems are trimmed, the roots are stimulated less, and growth of the stem is supported instead.
In horticulture, auxins, especially NAA and IBA, are commonly applied to stimulate root initiation when rooting cuttings of plants. However, high concentrations of auxin inhibit root elongation and instead enhance adventitious root formation. Removal of the root tip can lead to inhibition of secondary root formation.
Auxin induces shoot apical dominance; the axillary buds are inhibited by auxin, as a high concentration of auxin directly stimulates ethylene synthesis in lateral buds, causing inhibition of their growth and potentiation of apical dominance. When the apex of the plant is removed, the inhibitory effect is removed and the growth of lateral buds is enhanced. Auxin is sent to the part of the plant facing light, and this promotes growth towards that direction.
Fruit growth and development
Auxin is required for fruit growth and development and delays fruit senescence. When seeds are removed from strawberries, fruit growth is stopped; exogenous auxin stimulates the growth in fruits with seeds removed. For fruit with unfertilized seeds, exogenous auxin results in parthenocarpy ("virgin-fruit" growth).
Fruits form abnormal morphologies when auxin transport is disturbed. In Arabidopsis fruits, auxin controls the release of seeds from the fruit (pod). The valve margins are a specialised tissue in pods that regulates when the pod will open (dehiscence). Auxin must be removed from the valve margin cells to allow the valve margins to form. This process requires modification of the auxin transporters (PIN proteins).
Auxin also plays a minor role in the initiation of flowering and development of reproductive organs. In low concentrations, it can delay the senescence of flowers. A number of plant mutants have been described that affect flowering and have deficiencies in either auxin synthesis or transport. In maize, one example is bif2 (barren inflorescence2).
In low concentrations, auxin can inhibit ethylene formation and the transport of its precursor in plants; however, high concentrations can induce the synthesis of ethylene. Therefore, high concentrations can induce femaleness of flowers in some species.
Auxin inhibits abscission prior to formation of abscission layer, and thus inhibits senescence of leaves.
In the course of research on auxin biology, many compounds with noticeable auxin activity were synthesized. Many of them had been found to have economical potential for man-controlled growth and development of plants in agronomy. Synthetic auxins include the following compounds:
2,4-Dichlorophenoxyacetic acid (2,4-D); active herbicide and main auxin in laboratory use
α-Naphthalene acetic acid (α-NAA); often part of commercial rooting powders
2-Methoxy-3,6-dichlorobenzoic acid (dicamba); active herbicide
4-Amino-3,5,6-trichloropicolinic acid (tordon or picloram); active herbicide
2,4,5-Trichlorophenoxyacetic acid (2,4,5-T)
Auxins are toxic to plants in large concentrations; they are most toxic to dicots and less so to monocots. Because of this property, synthetic auxin herbicides, including 2,4-D and 2,4,5-T, have been developed and used for weed control.
However, synthetic auxins, especially 1-naphthaleneacetic acid (NAA) and indole-3-butyric acid (IBA), are also commonly applied to stimulate root growth when taking cuttings of plants or for different agricultural purposes such as the prevention of fruit drop in orchards.
Used in high doses, auxin stimulates the production of ethylene. Excess ethylene (itself a native plant hormone) can inhibit elongation growth, cause leaves to fall (abscission), and even kill the plant. Some synthetic auxins, such as 2,4-D and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), were marketed as herbicides. Dicots, such as dandelions, are much more susceptible to auxins than monocots, such as grasses and cereal crops, so these synthetic auxins are valuable as synthetic herbicides. 2,4-D was the first widely used herbicide, and it remains in common use. It was first commercialized by the Sherwin-Williams company and saw use in the late 1940s. It is easy and inexpensive to manufacture.
- Herbicide manufacture
The defoliant Agent Orange, used extensively by British forces in the Malayan Emergency and American forces in the Vietnam War, was a mix of 2,4-D and 2,4,5-T. The compound 2,4-D is still in use and is thought to be safe, but 2,4,5-T was more or less banned by the U.S. Environmental Protection Agency in 1979. The dioxin TCDD is an unavoidable contaminant produced in the manufacture of 2,4,5-T. As a result of the integral dioxin contamination, 2,4,5-T has been implicated in leukemia, miscarriages, birth defects, liver damage, and other diseases.
Most of you have probably learnt at school that radioactive isotopes are sources of radioactive emissions. Some radiate alpha, some beta, and some also generate X-rays and gamma rays, but no one ever told you that these isotopes are NOT the sources of such radiations but simply energy converters of the incoming Planck frequency band radiation making up all matter, the same energy responsible for gravity. Most matter is not dense enough to create any noticeable downshifting of the incoming radiation frequency (energy), hence we are not able to detect either incoming or outgoing radiation for most common substances. However, we notice that for mass numbers above that of lead, the high density of matter results in noticeable frequency downshifting of this incoming energy, and we start to detect radiation in the highest bands of our presently known spectrum. The denser the substance, the more of the incoming energy is trapped within its standing wave structure, and the less energetic is the outgoing/reflected energy. The less energetic the outgoing energy, the lower its frequency, low enough for us to detect it in the upper part of the known electromagnetic spectrum. In fact, if one tries to slightly shield a gamma source with some aluminium foil, the aluminium foil will act as a downshifting device and generate X-rays. Yes, X-rays will be emitted from the other side of the aluminium foil, but nobody ever says that aluminium generates X-rays. You can now finally understand why a radioactive isotope is simply tapping or downconverting this sea of energy of free space (ZPE) and not generating any radiation by itself.
Here I shall quote a very interesting and relevant statement written by Tesla, dated 10th July 1937. He says:
"There is no energy in matter other than that received from the environment. It applies rigorously to molecules and atoms as well as the largest heavenly bodies and to all matter in the universe in any pahse of its existence from its very formation to its ultimate disintegration."
RTGs
It is a known fact that radioactive isotopes like plutonium produce heat. This fact has already been exploited in RTGs. RTGs have proven their safety and capability in many space missions, including human missions. Radioactive material (plutonium-238) is used to produce heat, which is converted to electricity either by thermoelectric devices, such as Peltier elements and thermocouples, or by the thermionic effect. When a material gets very hot (such as the hot filament in a television cathode ray tube), it can emit electrons from its surface. In a thermionic RTG, this electron emission is a direct source of electrical current. The plutonium is not placed in pure form in the RTG, but is installed as bricks of plutonium dioxide (PuO2), a ceramic which, if shattered, breaks into large pieces rather than smaller, more dangerous dust. The plutonium dioxide is encased in layers of materials, including graphite blocks and layers of iridium. Both materials are strong and heat resistant, which protects the plutonium bricks in the event of a launch explosion.
The RTG uses only decay heat, meaning there is no fission chain reaction involved, and also that the radioactive material can be encapsulated to prevent release into the atmosphere. As long as the capsule is not tampered with, an RTG is the nearest thing to a clean free energy device, directly converting free space energy to heat and electricity.
The above diagram shows the RTG used on board the Cassini, a NASA space probe still in operation. An RTG is fuelled with about 10.9 kilograms of plutonium dioxide (a ceramic form that is primarily composed of the plutonium-238 isotope) and can initially generate about 280 Watts of electrical power, and after ten years still be able to generate about 230 Watts of electrical power. Half-life time is 87 years. The outer shell is basically a heatsink, in contact with the cold side of the Si-Ge unicouple array. Two RTGs are needed to generate the 400 Watts of power that the Cassini orbiter needs. |
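As a rough cross-check of these figures (treating the 280-watt initial output as given, and using the plutonium-238 half-life of about 87.7 years, which the text rounds to 87), the short Python sketch below computes how much power radioactive decay alone would leave after ten years. The quoted 230 watts is somewhat lower than that, which is consistent with the thermocouples degrading over time in addition to the fuel decaying.

def decay_power(p0_watts, years, half_life_years=87.7):
    # Power remaining after `years`, assuming pure exponential decay of Pu-238.
    return p0_watts * 0.5 ** (years / half_life_years)

if __name__ == "__main__":
    print(f"Decay alone after 10 years: {decay_power(280.0, 10.0):.0f} W")  # roughly 259 W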
Online Material: Six supplemental movies
The magnitude 9.0 Tohoku-Oki, Japan, earthquake on 11 March 2011 is the largest earthquake to date in Japan’s modern history and is ranked as the fourth largest earthquake in the world since 1900. This earthquake occurred within the northeast Japan subduction zone (Figure 1), where the Pacific plate is subducting beneath the Okhotsk plate at a rate of ~8–9 cm/yr (DeMets et al. 2010). This type of extremely large earthquake within a subduction zone is generally termed a “megathrust” earthquake. Strong shaking from this magnitude 9 earthquake engulfed the entire Japanese Islands, reaching a maximum acceleration ~3 times that of gravity (3 g). Two days prior to the main event, a foreshock sequence occurred, including one earthquake of magnitude 7.2. Following the main event, numerous aftershocks occurred around the main slip region; the largest of these was magnitude 7.9. The entire foreshocks-mainshock-aftershocks sequence was well recorded by thousands of sensitive seismometers and geodetic instruments across Japan, resulting in the best-recorded megathrust earthquake in history. This devastating earthquake resulted in significant damage and high death tolls caused primarily by the associated large tsunami. This tsunami reached heights of more than 30 m, and inundation propagated inland more than 5 km from the Pacific coast, which also caused a nuclear crisis that is still affecting people’s lives in certain regions of Japan.
As seismologists, it is important that we effectively convey information about catastrophic earthquakes, like this recent Japan event, to others who may not necessarily be well versed in the language and methods of earthquake seismology. Until recently, it was typical to only use “snapshot” static images to represent earthquake data. From these static images alone it was often difficult to explain even the basic characteristics of seismic waves generated by earthquakes, such as primary (P), secondary (S), and surface waves. More advanced aspects of the seismic waves, such as frequency content, attenuation, site effects, and phenomena such as earthquake triggering were even more difficult to explain. This was especially true for general audiences, such as high school students, who do not have prior knowledge of basic seismology. Recently, animations and visualizations have been increasingly used to present information about earthquakes and how seismic waves propagate inside the Earth (e.g., http://www.iris.edu/hq/programs/education_and_outreach/visualizations). However, most of these animations do not take advantage of people’s ability to learn through sound cues such as amplitude, pitch, and frequency. Here, we take an alternative approach and convert seismic data into sounds, a concept known as “audification” or continuous “sonification” (e.g., Walker and Nees 2011).
By combining seismic auditory and visual information, static “snapshots” of earthquake data come to life, allowing the viewer to hear pitch and amplitude changes in sync with viewed frequency changes in the earthquake seismograms. In addition, this approach allows the audience to relate seismic signals generated by earthquakes to familiar sounds such as thunder, popcorn popping, rattlesnakes, gunshots, firecrackers, etc. Thus, audification of seismic data can be an effective tool to convey useful information about earthquake recordings, seismic wave propagation inside the Earth, and interaction of seismic events with one another.
The concept of audification in seismology has existed for several decades (Benioff 1953; Speeth 1961; Hayward 1994; Dombois and Eckel 2011) and was recently brought up in several conference proceedings (Simpson 2005; Simpson et al. 2009; Fisher et al. 2010). The audible frequency range for human hearing is roughly 20 Hz–20 kHz, which is on the high end of the frequency range for earthquake signals recorded by modern seismometers (~0.01 Hz, i.e., a 100-s period, to ~100 Hz). The easiest way to make seismic data audible is to play it much faster than true speed, also known as time-compression or speed-up (e.g., Hayward 1994; Dombois and Eckel 2011). Doing so allows a direct mapping of the seismic frequency range into the audible frequency range. In addition, with time compression it takes much less time to play the resulting sound track, so the audience can hear seismic signals that typically occur over a few hours in a matter of about a minute. In a companion paper, Kilb et al. (2012, this issue’s Electronic Seismologist column) demonstrate how to convert seismic data into sounds and movies using simple tools such as MATLAB and Apple Inc.’s QuickTime Pro. Here we show a few examples generated by those tools using data from the 2011 Tohoku-Oki earthquake to help convey important information about the Japan earthquake to general audiences.
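The authors point to MATLAB and QuickTime Pro for producing such sound tracks; purely as an illustration of the same time-compression idea (not the authors' actual workflow), the Python sketch below writes a seismogram to a WAV file whose sample rate is multiplied by the chosen speed-up factor. The synthetic trace, 100-Hz sampling rate, 50x speed-up, and output file name are all assumptions made for the example.

import numpy as np
from scipy.io import wavfile

def audify(data, sample_rate_hz=100.0, speedup=50, out_file="quake.wav"):
    # Scale to the int16 range so the strongest shaking is full volume,
    # then write the trace at `speedup` times its true sample rate.
    scaled = np.int16(data / np.max(np.abs(data)) * 32767)
    wavfile.write(out_file, int(sample_rate_hz * speedup), scaled)

if __name__ == "__main__":
    t = np.arange(0, 300, 0.01)                        # 300 s of data at 100 Hz
    trace = np.random.randn(t.size) * np.exp(-t / 60)  # decaying synthetic "coda"
    audify(trace)                                      # about 6 s of audio at 50x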
The Mw 9.0 Tohoku-Oki, Japan, earthquake was recorded by two of the world's densest strong-motion seismic networks, namely the K-net and the KiK-net. The K-net consists of more than 1,000 strong-motion seismographs at the ground surface, while the KiK-net consists of ~700 stations with a surface/downhole pair of strong-motion seismographs (Okada et al. 2004). Both networks recorded the mainshock ground accelerations, and the highest acceleration of nearly 3 g was recorded at one K-net station (station code MYG004) near the mainshock epicenter. In this example we use ground acceleration in the north-south direction at station MYG004. In addition to showing the original data, we apply a 20-Hz high-pass filter to remove the low-frequency content in order to show the high-frequency signals in the data (Peng et al. 2007). We also include a spectrogram, an image that shows how the spectral content of a signal varies with time. Generally, the horizontal axis of a spectrogram represents time, the vertical axis represents frequency, and the intensity or color of each point corresponds to the amplitude or energy of a given frequency at a particular time.
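A rough MATLAB sketch of these two processing steps follows; the filter order and window length are our illustrative choices, not the exact parameters used for the figures. Here `data` is the N-S acceleration record at MYG004 and `fs` its sampling rate.

```matlab
% Illustrative sketch of the processing described above (our parameter choices).
[b, a] = butter(4, 20/(fs/2), 'high');   % 20-Hz high-pass (4th-order Butterworth)
hp = filtfilt(b, a, data(:));            % zero-phase filtering preserves arrival times
plot((0:numel(hp)-1)/fs, hp);            % high-frequency part of the record

% Spectrogram: horizontal axis is time, vertical axis is frequency,
% color gives the energy of each frequency at each time.
figure;
nwin = round(2*fs);                      % ~2-s windows (illustrative choice)
spectrogram(data(:), hann(nwin), round(0.9*nwin), nwin, fs, 'yaxis');
```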
As shown in Figure 2, the recorded data mainly consist of two groups of ground motion. It takes ~50 seconds for the seismic data to reach the initial peak amplitude. After that, there is a brief amplitude decrease, and then at ~90 seconds the second and highest peak amplitude is reached. These strong high-frequency signals end at around 180 s. This double-peaked signature in ground motion amplitudes is also observed at many nearby seismic stations, suggesting at least two patches of high-frequency radiation from the mainshock rupture (Ide et al. 2011; Suzuki et al. 2011).
The corresponding sound and video of the N-S component data (speeded up by a factor of 50) are shown in the online supplementary movie 1. From the sound component of this movie we can hear two groups of loud noises, consistent with what is shown in the static image (Figure 2). We note that the highest ground acceleration occurred ~90 s after the mainshock rupture began. The frequency content during this time period is much higher, and the corresponding sound has a much higher pitch, than at other times. For comparison, we show the animation of the N-S component recorded at a nearby station, MYG003 (online supplementary movie 2). Because the ultra-high-frequency signal at ~90 s was not recorded at station MYG003 and other nearby stations, we hypothesize that such high-frequency radiation might be generated by a very shallow seismic event immediately beneath station MYG004 (Fischer et al. 2008; Sleep and Ma 2008). Alternatively, it could be caused by local site effects or topographic amplification (Skarlatoudis and Papazachos 2012). We plan to investigate this further in a follow-up study. The purpose of this example is to demonstrate the variable strong ground motions produced by the mainshock.
Large shallow earthquakes are typically followed by a significant increase in seismic activity near the mainshock rupture; these events are generally termed "aftershocks." The Tohoku-Oki earthquake triggered numerous aftershocks that were recorded on scale by many nearby instruments. Figure 3 shows the seismic data recorded on the vertical component of the Hi-net borehole station HTAH starting 100 s before and ending one hour after the occurrence time of the mainshock. Hi-net is a high-sensitivity seismograph network that records velocity motions with ~800 stations mostly placed in the same boreholes as the KiK-net stations at a typical depth of 100 to 200 m (Okada et al. 2004). The raw seismic data clearly show the mainshock and some large aftershock signals (Figure 3A). In addition, we apply a 20-Hz high-pass filter to remove low-frequency content and compute an envelope function, which is a curve that captures the overall shape of the signal. We finally take the base-10 logarithm to show both strong and weak signals. The resulting envelope function (Figure 3B) and the spectrogram (Figure 3C) clearly mark many high-frequency bursts, which are primarily generated by aftershocks immediately following the Tohoku-Oki mainshock.
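The envelope processing described above might look like the following MATLAB sketch (again with illustrative parameter choices of ours); `data` is the vertical Hi-net velocity record and `fs` its sampling rate.

```matlab
% Illustrative envelope sketch (our parameter choices).
[b, a] = butter(4, 20/(fs/2), 'high');            % 20-Hz high-pass filter
hp  = filtfilt(b, a, data(:));
env = abs(hilbert(hp));                           % envelope from the analytic signal
nsm = round(fs/2);                                % ~0.5-s smoothing window
env = conv(env, ones(nsm, 1)/nsm, 'same');
logEnv = log10(env);                              % base-10 log shows weak and strong signals together

t = (0:numel(logEnv)-1)/fs;
plot(t, logEnv); xlabel('Time (s)'); ylabel('log_{10} envelope');
```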
In the corresponding sound and video (online supplementary movie 3), the aftershock signals sound like rapid-fire gunshots or "pop-it" firecrackers that have been thrown on the ground. In this case, we speed up the seismic data 100 times to reduce the playback time. The resulting sound has a higher pitch than in the previous examples, which were speeded up only 50 times. In addition, we set the maximum amplitude to be 1/50 of the peak mainshock value so that the weak aftershock sounds can be better heard. In this case, the mainshock signal is clipped, or distorted, so the resulting sound is noisier. We also mark with a black line the occurrence time of each aftershock around the mainshock rupture region (Figure 1) listed in the Japan Meteorological Agency (JMA) earthquake catalog. From the absence of an aftershock marker for some vertical peaks (i.e., aftershocks) in the envelope function, it is evident that some aftershocks are not listed in the catalog. A systematic analysis of those missing aftershocks is currently underway (Lengliné et al. 2011) to better understand the transition from mainshock to aftershocks and the physical mechanisms that control aftershock generation around the mainshock slip region (Enescu et al. 2007; Kilb et al. 2007; Peng et al. 2007).
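The amplitude capping mentioned above can be sketched in a couple of lines (file name and speed-up factor are illustrative, not the authors' exact settings):

```matlab
% Illustrative clipping sketch: cap amplitudes at 1/50 of the mainshock peak
% so that weak aftershock sounds are audible; the mainshock itself is clipped.
clipLevel = max(abs(data))/50;
clipped   = max(min(data(:), clipLevel), -clipLevel);
audiowrite('aftershocks_x100.wav', clipped/clipLevel, fs*100);   % 100x speed-up
```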
Recent studies have shown that large earthquakes can trigger additional small earthquakes and deep tremor activity at distances of several hundred to thousands of kilometers (e.g., Peng and Gomberg 2010). Similarly, the Tohoku-Oki mainshock triggered numerous earthquakes and tremor around the world (Peng et al. 2011). The major difference between tremor and earthquake signals is that tremor occurs at greater depths and ruptures at slower speed than regular earthquakes (Peng and Gomberg 2010). Hence a tremor's frequency content is lower, and the corresponding sound has a lower pitch, than that of a regular earthquake. In addition, tremor often occurs in groups and hence sounds less distinct than earthquake swarms or aftershocks. Here we show two examples of triggered tremor activity recorded by the broadband station PKD along the Parkfield-Cholame section of the San Andreas fault in central California, ~8,000 kilometers away from the mainshock.
Seismic data recorded at station PKD, which is part of the Berkeley Digital Seismic Network (BDSN), contain both the long-period signals of the distant Japan earthquake and high-frequency triggered tremor signals that occurred locally beneath the San Andreas fault (Figure 4). The tremor signals began right at the time when the S wave from the Japan earthquake arrived at Parkfield and were further intensified and modulated by the long-period surface waves. The associated sound and video (online supplementary movie 4) contain a loud low-pitch noise (like thunder) corresponding to the arrival of the mainshock P wave, followed by a high-pitch sound (like rainfall) that turns on and off frequently and corresponds well with the long-period seismic signals from the distant event. The latter is associated with the deep tremor signals that were triggered and modulated by the S-wave and surface-wave signals (Peng et al. 2009). For comparison, online supplementary movie 5 presents the sound and video generated from data at another broadband station, MHC, ~180 km to the north near the Calaveras fault in northern California. It contains the same thunder-like P-wave sound at the beginning, but without the follow-up rainfall-like tremor sound during the S and surface waves. Two local earthquakes with sounds similar to gunshots occurred at ~2,900 s after the mainshock origin time. Such a comparison clearly demonstrates that the high-pitch sound at station PKD during the S and surface waves is unusual and likely produced by local fault movement triggered by the distant earthquake.
In the next example, we include a cross-sectional map of the tremor locations (Shelly and Hardebeck 2010) along the San Andreas fault, together with the seismic waveform data (Figure 5 and online supplementary movie 6). In this case, the location of each tremor "lights up" as a deep red circle when it first occurs and then fades in color and shrinks in size to help the viewer track possible migration of tremor activity (Shelly 2010; Shelly et al. 2011). This visualization shows that the tremor during the mainshock S wave first occurred to the NW, around the creeping section of the San Andreas fault. Most of the tremor, however, occurred around Cholame, to the SE, during the subsequent surface waves. Both this and the previous example clearly demonstrate how distant earthquakes like the Tohoku-Oki event can trigger deep fault movement thousands of kilometers away.
Additional images, sounds, and videos created from the seismic data generated by the 2011 Tohoku-Oki mainshock can be found online at http://geophysics.eas.gatech.edu/people/zpeng/Japan_20110311/. This allows anyone to access the data products freely. In addition, the Web site contains a link to download a MATLAB script "sac2wav.m" that can convert seismic data in the Seismic Analysis Code (SAC) format into the WAVE format. This open-source code gives those interested a chance to play with the data and create their own sound files. The detailed procedures to create the full video/sound files are described in the companion paper to this work (see Kilb et al. 2012, this issue's Electronic Seismologist column).
The Web site was initially created on 6 April 2011 and has been modified several times since then. It was first presented at an online seminar series called "Teaching Geophysics in the 21st Century: Visualizing Seismic Waves for Teaching and Research" (http://serc.carleton.edu/NAGTWorkshops/geophysics/seismic11/index.html), and later at the Seismological Society of America 2011 annual meeting. A link to this Web site has been included in several Education and Outreach (E&O) Web sites, such as the Incorporated Research Institutions for Seismology (IRIS) special event Web site (http://www.iris.edu/news/events/japan2011/), allowing it to be reached by a wide audience.
The examples we present here and on our Web site are mostly used to demonstrate how the Tohoku-Oki mainshock triggered additional seismic events in the immediate vicinity (as aftershocks) and at large distances (such as tremor in Parkfield, CA). In these examples sound (pitch and amplitude) is related to the frequency spectrum and amplitude of the seismograms, although additional mappings could be defined, depending on the research goals. For example, the sound could be mapped to earthquake depths, mainshock/aftershock back-azimuths, or aftershock magnitudes (see, for example, http://pods.binghamton.edu/~ajones/#Seismic-Eruptions). In addition, similar products can be generated to illustrate how seismic waves propagate inside the Earth, and how different sites (solid rock versus soft soil) can affect the amplitude and frequency content of the surface shaking (Michael 1997). We envision that audification of seismic data could be increasingly used to convey information to general audiences about recent earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc.). Furthermore, we hope that sharing a new visualization tool will foster an interest in seismology not just for young scientists but for people of all ages.
Seismic data used in this study were downloaded from the Northern California Earthquake Data Center (NCEDC) and the National Research Institute for Earth Science and Disaster Prevention (NIED) Data Center in Japan. We thank the Japan Meteorological Agency (JMA) for its earthquake catalog. This manuscript benefited from useful comments by Alan Kafka and Andrew Michael. This project is supported by the National Science Foundation (NSF) CAREER program EAR-0956051 to ZP and CA, and the IRIS sub-award 86-DMS funding 2011-3366 (DK). Any use of product, firm, or trade names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
Benioff, H. (1953). Earthquakes around the world. On Out of This World, ed. E. Cook., side 2. Stamford, CT: Cook Laboratories, 5012 (LP record audio recording).
DeMets, C., R. G. Gordon, and D. F. Argus (2010). Geologically current plate motions. Geophysical Journal International 181, 1–80; doi:10.1111/j.1365-246X.2009.04491.x.
Dombois, F., and G. Eckel (2011). Audification. In The Sonification Handbook, ed. T. Hermann, A. Hunt, and J. Neuhoff, 301–324. Berlin: Logos Publishing House. http://sonification.de/handbook/index.php/chapters/chapter12/.
Enescu, B., J. Mori, and M. Miyazawa (2007). Quantifying early aftershock activity of the 2004 mid-Niigata Prefecture earthquake (Mw 6.6). Journal of Geophysical Research 112, B04310; doi:10.1029/2006JB004629.
Fischer, A., Z. Peng, and C. Sammis (2008). Dynamic triggering of high-frequency bursts by strong motions during the 2004 Parkfield earthquake sequence. Geophysical Research Letters 35, L12305; doi:10.1029/2008GL033905.
Fisher, M., Z. Peng, D. W. Simpson, and D. L. Kilb (2010). Hear it, see it, explore it: Visualizations and sonifications of seismic signals. Eos, Transactions, American Geophysical Union 91, Fall Meeting Supplement, Abstract ED41C-0654.
Hayward, C. (1994). Listening to the Earth sing. In Auditory Display: Sonification, Audification, and Auditory Interfaces, ed. G. Kramer, 369–404. Reading, MA: Addison-Wesley.
Ide, S., A. Baltay, and G. C. Beroza (2011). Shallow dynamic overshoot and energetic deep rupture in the 2011 Mw 9.0 Tohoku-Oki earthquake. Science 332, 1,426–1,429; doi:10.1126/science.1207020.
Kilb, D., V. G. Martynov, and F. L. Vernon (2007). Aftershock detection thresholds as a function of time: Results from the ANZA seismic network following the 31 October 2001 ML 5.1 Anza, California, earthquake. Bulletin of the Seismological Society of America 97 (3), 780–792; doi:10.1785/0120060116.
Kilb, D., Z. Peng, D. Simpson, A. Michael, and M. Fisher (2012). Listen, watch, learn: SeisSound video products. Seismological Research Letters 83, 281–286.
Lengliné, O., B. Enescu, Z. Peng, and K. Shiomi (2011). Unraveling the detailed aftershock sequence of the Mw = 9.0 2011 Tohoku mega-thrust earthquake through the application of matched filter techniques. Abstract S13A-2252 presented at the 2011 Fall Meeting, American Geophysical Union, San Francisco, CA, December 5–9.
Michael, A. J. (1997). Listening to earthquakes. USGS; http://earthquake.usgs.gov/learn/listen/index.php.
Okada, Y., K. Kasahara, S. Hori, K. Obara, S. Sekiguchi, H. Fujiwara, and A. Yamamoto (2004). Recent progress of seismic observation networks in Japan—Hi-net, F-net, K-NET and KiK-net. Earth Planets Space 56, xv–xxviii.
Peng, Z., K. Chao, C. Aiken, D. R. Shelly, D. P. Hill, C. Wu, B. Enescu, and A. Doran (2011). Remote triggering following the 2011 M 9.0 Tohoku, Japan earthquake. Seismological Research Letters 82, 461 (abstract).
Peng, Z., and J. Gomberg (2010). An integrated perspective of the continuum between earthquakes and slow-slip phenomena. Nature Geoscience 3, 599–607; doi:10.1038/ngeo940.
Peng, Z., L. T. Long, and P. Zhao (2011). The relevance of high-frequency analysis artifacts to remote triggering. Seismological Research Letters 82 (5), 654–660; doi:10.1785/gssrl.83.2.654.
Peng, Z., J. E. Vidale, M. Ishii, and A. Helmstetter (2007). Seismicity rate immediately before and after main shock rupture from high-frequency waveforms in Japan. Journal of Geophysical Research 112, B03306; doi:10.1029/2006JB004386.
Peng, Z., J. E. Vidale, A. Wech, R. M. Nadeau, and K. C. Creager (2009). Remote triggering of tremor along the San Andreas fault in central California. Journal of Geophysical Research 114, B00A06; doi:10.1029/2008JB006049.
Shelly, D. (2010). Migrating tremors illuminate deformation beneath the seismogenic San Andreas fault. Nature 463, 648–652; doi:10.1038/nature0875.
Shelly, D., and J. Hardebeck (2010). Precise tremor source locations and amplitude variations along the lower-crustal central San Andreas fault. Geophysical Research Letters 37, L14301; doi:10.1029/2010GL043672.
Shelly, D., Z. Peng, D. Hill, and C. Aiken (2011). Triggered creep as a possible mechanism for delayed dynamic triggering of tremor and earthquakes. Nature Geoscience 4, 384–388; doi:10.1038/NGEO1141.
Simpson, D. W. (2005). Sonification of GSN data: Audio probing of the Earth. Seismological Research Letters 76 (2), 263 (abstract).
Simpson, D. W., Z. Peng, D. Kilb, and D. Rohrick (2009). Sonification of earthquake data: From wiggles to pops, booms and rumbles. Abstract D53E-08 presented at the 2009 Fall Meeting, American Geophysical Union, San Francisco, CA, December 14–18 (abstract).
Skarlatoudis, A. A., and C. B. Papazachos (2012). Preliminary study of ground motions of Tohoku, Japan, earthquake of 11 March 2011: Assessing the influence of anelastic attenuation and rupture directivity. Seismological Research Letters 83, 119–129.
Sleep, N., and S. Ma (2008). Production of brief extreme ground acceleration pulses by nonlinear mechanisms in the shallow subsurface. Geochemistry, Geophysics, Geosystems 9, Q03008; doi:10.1029/2007GC001863.
Speeth, S. D. (1961). Seismometer sounds. Journal of the Acoustical Society of America 33, 909–916.
Suzuki, W., S. Aoi, H. Sekiguchi, and T. Kunugi (2011). Rupture process of the 2011 Tohoku-Oki mega-thrust earthquake (M 9.0) inverted from strong-motion data. Geophysical Research Letters 38, L00G16; doi:10.1029/2011GL049136.
Walker, B. N., and M. A. Nees (2011). Theory of sonification. In The Sonification Handbook, ed. T. Hermann, A. Hunt, and J. Neuhoff, 9–39. Berlin, Germany: Logos Publishing House.
Unless you have taken the Discrete Mathematics course here at NYU or studied the equivalent material elsewhere, you probably would not have had an occasion to study the set-theoretic formalism that serves as a foundation for much of modern mathematics. This isn’t to claim that you know nothing about set theory. The two prerequisite courses, linear algebra and vector calculus, cover more than enough examples of sets and functions. You have an idea of what sets are; you know how to take intersections, unions, differences, and complements of them; you know the basic properties of functions. Our goal here is to round out your set-theoretic vocabulary, so that you can explore the world of modern mathematics with ease. The approach we take in this post is the so-called naïve set theory. Think of this as a math analogue to the Writing the Essay course: we are aiming for a working knowledge of the language, so we can go ahead and study the Great Works. Indeed, our focus is on the language of set theory and its usage. The study of the inner workings of the language, while interesting in its own right, is not our concern here. In particular, we shall freely take certain facts for granted, instead of deriving them from first principles.
A set is a well-defined collection of mathematical objects, which we do not attempt to define. In other words, if we fix a set , every mathematical object should either be an element of or not. We write to denote that is an element of ; means is not an element of . We take for granted that a lot of sets exist. For example, we assume that the set of integers
exists without getting into a discussion about what the symbol “1” means. We also assume the existence of the empty set , which is the unique set that has no element. Often, we call upon an unspecified index set , which is an arbitrary set with the desired “size”. Typically, we begin by assuming the existence of a set of sets indexed by , viz., for each , we suppose that a set with the name exists. This is a generalization of indexing by natural numbers: indeed, if , then
Example 1.1. For each number in the set of real numbers , we let be the open interval . The collection is a set of subsets of , indexed by the real numbers.
We write to denote that is a subset of , i.e., implies . For example, . Observe that the two sets and are equal if and only if and . Given a set , we often use the set-builder notation to define a subset of . For example,
is the subset of consisting precisely of the even integers. We do not attempt to discuss Russell’s paradox here. Given a set , we define the powerset of to be the set
We assume that the power set of an arbitrary set exists. Given two sets and , we define the union
and the set difference
We say that and are disjoint if . We assume that , , and exist regardless of our choice of and . The union operation and the intersection operation can be generalized to multiple sets. If are sets, then we define the union
and the intersection
The following exercise is strongly recommended for those without much experience with logical quantifiers:
Exercise 1.2. Show that
Formulate a similar statement for the intersection operation and prove it. Formulate similar statements for sets and prove them.
Even more generally, we can take an arbitrary collection of sets , labelled by an index set , and define the union
and the intersection
As above, we assume that unions and intersections of sets exist, regardless of our choice of sets. We say that is pairwise disjoint in case whenever .
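In standard notation, writing the indexed collection as $\{A_i\}_{i \in I}$ (the symbols here are chosen only for illustration), these two operations are

$$\bigcup_{i \in I} A_i = \{\, x : x \in A_i \text{ for some } i \in I \,\}, \qquad \bigcap_{i \in I} A_i = \{\, x : x \in A_i \text{ for all } i \in I \,\}.$$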
2. Relations and Functions
How do we say that an element of a set and an element of another set are related by some mathematical rule? To answer this question, we take for granted the existence of ordered pairs: namely, if and are sets, and if and , then the mathematical object is well-defined and exists. The cartesian product of and is defined to be the set of all ordered pairs:
A binary relation from to is a subset of the cartesian product .
- is -related to * and write . For example, we let and define the less-than-or-equal-to relation
In this case, a real number is -related to another real number if and only if with the usual ordering.
In this case, a subset of is -related to another subset of if and only if .
Example 2.3. Fix a positive integer . We define the mod relation on by declaring that if and only if is an integer multiple of ; in other words, in case and have the same remainder when divided by . is then a relation.
A binary relation is said to be a function if
- for each , there exists an such that ;
- and imply that .
This is the formalization of the intuitive definition that a function is a rule that assigns a unique value to each element of a set. We write to denote that is a function from to . Given a function , we write to denote that is -related to . is the domain of the function , and is the codomain. The subset
of the codomain of is called the image of the function . Given a subset of , the set
is said to be the image of under . Similarly, given a subset of , the set
is said to be the preimage of under . We say that a function is
- injective if, for all , implies that ;
- surjective if ;
- bijective if is injective and surjective.
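In symbols, for a function $f \colon A \to B$ (the letters are chosen here only for illustration), these three properties read

$$\text{injective: } f(x_1) = f(x_2) \implies x_1 = x_2 \text{ for all } x_1, x_2 \in A; \qquad \text{surjective: } f(A) = B; \qquad \text{bijective: both.}$$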
Bijective functions are invertible, in the following sense:
Proposition 2.4. If is a bijective function, then the relation defined by declaring if and only if is a function, called an inverse function of . If and are two inverse functions of , then . In light of this uniqueness property, we write to denote the unique inverse function of .
Proof. Let be an inverse function of . For each , the surjectivity of furnishes an such that . Moreover, if and , then , whence the injectivity of implies that . It follows that is a function. If is another inverse function of , then
whence , as was to be shown.
Example 2.5. The function defined by setting is neither injective nor surjective. The function defined by the same formula is injective but not surjective. The function defined by the same formula is bijective, and its inverse is given by the formula .
Given three sets , , and and two functions and , we define the function composition of and to be the function defined by setting for each . The following exercise is recommended for those without much experience dealing with injectivity and surjectivity in the context of function compositions.
Exercise 2.6. Let and be functions. Verify the following statements:
(1) is a function.
(2) If and are injective, then so is .
(3) If and are surjective, then so is .
(4) If and are bijective, then so is .
(5) If is bijective, then is injective and is surjective.
Let . The restriction of onto , denoted by , is the composition , where is the canonical injection map defined by the formula . By the above exercise, the restriction of an injective function is injective. Given a fixed , we can define a projection map by setting
We now assume that . The codomain restriction of by is the composition . We note that the codomain restriction is independent of the choice of the point ; the only points of that are mapped to are outside of .
3. Equivalence Relations and Quotient Sets
Often, there is a need for categorization via declaring different objects to be "the same". For example, instead of posting "Happy Birthday, Harry!" on a residence hall bulletin board on September 4, only to take it down the next day and post "Happy Birthday, Elisha!" on the same board, a resident assistant may choose to post "Happy Birthday to All September Birthday People: Harry, Elisha, Tish, and Joe!" Similarly, it is impractical to create an analog watch that records the exact amount of time passed since the time of initial operation, and so "12 hours from now", "24 hours from now", and "36 hours from now" are represented in the same way. The process of equating different objects is done rigorously by introducing an equivalence relation, which is a binary relation on a set that satisfies the following three properties (restated in symbols after the list):
- reflexivity. for all ;
- symmetry. implies that ;
- transitivity. and imply that .
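Writing $\sim$ for the relation and $X$ for the underlying set (symbols chosen here only for illustration), the three properties read

$$x \sim x \ \text{for all } x \in X; \qquad x \sim y \implies y \sim x; \qquad (x \sim y \text{ and } y \sim z) \implies x \sim z.$$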
Exercise 3.1. Check that the mod relation defined in Example 2.3 is an equivalence relation.
The single most important property of an equivalence relation is that it determines a partition.
Definition 3.2. A partition of a nonempty set is a set of nonempty subsets of such that
and that implies either or .
Proposition 3.3. Let be a nonempty set and let be an equivalence relation on . For each , we define the equivalence class of to be the set . The collection then forms a partition of .
Proof. Fix . By reflexivity, the equivalence classes and are nonempty. Suppose that there exists . Whenever and , it follows from symmetry and transitivity that , whence and . It follows that and , whence .
The converse is also true:
Proposition 3.4. Let be a nonempty set and an index set. Suppose that is a partition of . The relation on defined by declaring if and only if there exists an index such that is an equivalence relation.
Proof. Fix arbitrary . Since is a partition, there exists an such that . Observe that implies ; since was arbitrary, is reflexive. If , then , and so is symmetric. Lastly, if and , then , and so is transitive.
Why do we care about the relationship between partitions and equivalence relations? Equivalence relations are a natural way of declaring different objects to equal each other, as the next example shows.
Example 3.5. Let us now specialize Exercise 3.1 to the case to discuss the so-called clock arithmetic. The equivalence relation , defined in Example 2.3, defines twelve equivalence classes: . By partitioning into twelve pieces, we can represent an arbitrary hour by the numbers , as we would expect from the behaviors of a typical clock.
The collection of all equivalence classes is therefore the set of all “representatives”, after declaring the equality of elements as we desire. We give this collection a name.
Definition 3.6. Let be a nonempty set and an equivalence relation on . The quotient set , called modulo , is the set of all equivalence classes induced by .
Example 3.7. Let be the mod equivalence relation defined in Example 2.3. We write to denote the quotient set and call it the set of integers modulo . We observe that every element of has a representative in the quotient set .
Proposition 3.8. Let be a nonempty set and an equivalence relation on . The canonical surjection map defined by setting is a well-defined surjective function.
Remark. The term “well-defined” means that the relation is actually a function.
Proof. We first note that the value of at a fixed element is unique by the definition of . Moreover, for each , the value exists as an element of . Therefore, is a well-defined function. To show that is surjective, we fix an arbitrary set . By the definition of , we can find an such that , whence . Since was arbitrary, the desired result follows.
4. Cartesian Products
With the notion of functions at hand, we can now generalize the concept of ordered pairs to an arbitrary number of coordinates. The key feature of ordered pairs is that there is a designated output for each coordinate. This is to say that we can think of ordered pairs as functions on the two-element set : in case of , the value corresponding to “first” is , and the value corresponding to “second” is . The cartesian product can then be thought of as the collection of all functions on such that and . We are thus led to the following notion:
Definition 4.1. Let be a collection of sets indexed by . The cartesian product of the sets in the collection is
By specializing this definition to one set , we obtain the -fold cartesian product of . An -valued -tuple is an element of .
Note that the usual notion of -tuples from linear algebra and vector calculus coincides with the above definition, with .
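In standard notation (with illustrative symbols: $\{A_i\}_{i \in I}$ for the indexed collection and $A$ for a single set), these definitions read

$$\prod_{i \in I} A_i = \Big\{\, f \colon I \to \bigcup_{i \in I} A_i \ \Big|\ f(i) \in A_i \text{ for every } i \in I \,\Big\}, \qquad A^n = \prod_{i=1}^{n} A.$$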
It is not entirely clear whether the cartesian product of nonempty sets is nonempty. This issue is resolved in the affirmative if we take for granted the following intuitive statement:
The relation that sends to is a function, whence the composition is an element of . It follows that the cartesian product of nonempty sets is nonempty. We shall study more consequences of the axiom of choice in the subsequent sections.
Note that we have not defined ordered tuples, as we do not yet know what it means to compare the size of two elements of an index set . We shall take up this matter in Section 6.
In real life, we count the number of objects in a collection by exhibiting a one-to-one correspondence between and a subset of the natural numbers: one, two, three, four, and so on. This motivates the following definition:
Definition 5.1. A set is finite if, for some , there exists a bijection ; if not, is said to be infinite. is said to be countable in case is finite or countably infinite; otherwise, is said to be uncountable.
Example 5.2. is countably infinite. Indeed, the function
is a bijection from to .
Example 5.3. is countably infinite. Here’s a bijection; see Example 6.17 for a rigorous proof in a more general case:
More generally, if and are countably infinite sets, then there exist bijections and , so that the map is a bijection from to . Since the composition of two bijective functions is bijective, we conclude that is countable. We can now show that the set of rational numbers is countably infinite. The function given by the formula is injective. Since is countably infinite, the cartesian product is countably infinite, and we have a bijection . The function is now a composition of two injective functions, whence is injective. What we have shown, intuitively, is that the size of the set is at most that of . Since is a subset of , the size of cannot exceed that of . We therefore should be able to conclude that the size of is precisely that of . A formal argument is as follows. is easily seen to be an infinite set, whence the claim that there exists a bijection from to follows from the lemma below:
It therefore suffices to prove the lemma. To this end, we first show that every subset of has the unique minimal element. If is finite, we can verify this claim by comparing each and every element of . If is infinite, then we can fix and consider . Since every element of is larger than , a minimal element of , if it exists, must be in . Now, is finite, and so we already know how to find its minimal element. To show that the minimal element of is unique, we suppose that are two minimal elements of . By minimality, we have and , whence . Let us now construct the bijection . We declare to be the minimal element of . Then, for each , we define to be the minimal element of . If for some , then is finite, contradicting the assumption that is infinite. The inductive process therefore continues ad infinitum, and is well-defined on all of . By construction, is injective. It remains to show that is surjective. Given , the set is finite of size, say, . It then follows that for some . This completes the proof of the lemma.
Exercise 5.5. Show that the cartesian product of countably infinite sets is countably infinite. To see this, we let be a bijection and define a function by setting
where are distinct prime numbers. Show that is an injection, and apply Lemma 5.4 to derive the desired result.
Example 5.6. The set of all real numbers between 0 and 1, including the endpoints, is uncountable. To see this, we suppose for a contradiction that is a bijective function. For each , is a real number between 0 and 1, and so it admits a decimal representation of the form . Let us make a list:
Since is a bijection, the above list must contain every real number in . We shall now exhibit a number in that does not belong to the above list: since was an arbitrary bijection, it follows that no bijection from to can exist. To this end, we pick each so that . The resulting decimal number
is a number in . Now, for each , we see that cannot equal , as the stipulation implies that the decimal representations of and are different. is therefore a number in that does not belong to the above list. This is the so-called diagonal argument. Similar arguments show that the open interval and the half-open intervals and are uncountable. Given fixed real numbers and such that , the function is a bijection from the closed interval to . If there exists a bijection , then the composition is bijective as well, contradicting the uncountability of . We conclude that is uncountable. Similar arguments show that the open interval and the half-open intervals and are uncountable. Finally, we observe that a set that contains as a subset is uncountable. We suppose for a contradiction that we can find a bijection , whence the restriction is injective. What this implies, intuitively, is that the size of the set is at most . Since we have already shown that is uncountable, we should be able to derive a contradiction from this. Formally, is an infinite subset of , whence the desired contradiction follows from Lemma 5.4. It now follows at once that , the set of complex numbers, and the set of quaternions are uncountable.
The intuition that if the size of is at most that of , and if the size of is at most that of , then and must be of the same size is quite attractive; we now seek to formalize this idea.
Definition 5.7. Let and be sets. and are said to be equinumerous if there exists a bijection . We write to denote that and are equinumerous. In case is not a bijection but only an injection, we say that the cardinality of is at most the cardinality of , and write .
We remark that being countably infinite is precisely being equinumerous with . We also observe the following:
Proof. If there exists an injection , then the codomain restriction of is a bijection. We now define a surjection as follows: fix and set
Conversely, if is a surjection, then the set is a partition of . To see this, we first note that every element of must be mapped to an element of , whence . Moreover, , as a function, can have only one value at each point, whence the preimage sets must be pairwise disjoint. Now, we define a function by defining to be some element of the preimage . Since is a partition, must be injective.
Remark. In the proof, we have tacitly appealed to the axiom of choice (Axiom 4.2). Indeed, we used the choice function to choose an element from each set in the partition . The intuitive principle that we have alluded to above is formally established below:
Proof. We first show that if and if , then . Let be a bijection. We define three collections of sets , , and as follows:
- Let and .
- For each , we define .
- For each , we define .
- For each , we define .
- Let .
By definition, . Since must be a subset of , we see that . We now fix and assume inductively that for all . Since , we see that
It follows that for all . A similar argument shows that for all . Since , we see that . Repeating this process, we conclude that for all . Here is a pictorial description of , , and :
We now claim that for all . To see this, we fix . Pick an arbitrary . Since , we see that . Moreover, , and so the injectivity of implies that . It follows that . Since was arbitrary, we conclude that . To show that , we fix an arbitrary . Since , the relation implies that we can find an such that . Now, if , then , which would, in turn, imply that . This is evidently absurd, and so . We conclude that . Since was arbitrary, the desired result follows. The claim is now established. An important consequence of the claim is that . We set and define
is precisely the restriction of the injective function , and so it is injective. Similarly, is the restriction of the identity function, and so it is injective. It follows that is injective. Moreover,
It follows that is a bijection from onto , whence , as was to be shown. Let us now return to the general statement of the theorem. We assume that the two sets and satisfy and . This implies the existence of two injective functions and . This, in particular, implies that the composition is an injective function, and so . Now, we observe that . Since , we conclude from what we have proved above that . By injectivity of , we see that , and thus , as was to be shown.
Many applications of the Cantor–Bernstein theorem require the axiom of choice. We study the cardinality of the -fold cartesian product in Example 6.17.
Aside from the usual ordering relation on sets of numbers such as , , , and , the inclusion relation and the at-most-the-same-cardinality relation defined in Section 5 also compare “sizes” of two objects. Let us now abstract the key properties of these less-than-or-equal-to relations.
Definition 6.1. Let be a nonempty set. A partial order on is a relation on that satisfies the following properties:
(1) reflexivity. for all ;
(2) antisymmetry. if and , then ;
(3) transitivity. if and , then .
A strict partial order on is a relation on that satisfies the following properties:
(1’) irreflexivity. for all ;
(3) transitivity. if and , then .
A (strict) partial order is a (strict) linear order, or a (strict) total order, if satisfies the following additional property:
(4) trichotomy. if , then one of the following three must hold: , , or .
A (strict) linear order is a (strict) well-order if satisfies the following additional property:
(5) well-ordering property. if is a nonempty subset of , then there exists an element such that for all .
Example 6.2. It is easy to see that the usual ordering relations on , , , and , respectively, are linear orders. We have shown in the proof of Lemma 5.4 that is well-ordered. Since the set does not contain a minimal element, none of the other sets of numbers we have just listed is well-ordered.
Example 6.3. The is-a-subset-of relation on the powerset of any set is a partial order. (Please carry out a proof of this fact if you do not see it immediately!) If has two or more elements, however, then is not a linear order on . To see this, we fix two distinct elements and of ; and are incomparable, in the sense that neither is a subset of the other.
Example 6.4. The at-most-the-same-cardinality relation on the powerset of any set is a partial order. Reflexivity is a consequence of the existence of the identity function . Antisymmetry is a consequence of the Cantor–Bernstein theorem. Transitivity follows from the fact that the composition of two injective functions is injective. The last example is worth a second look. It is not entirely obvious whether the relation should be a linear order. If the two sets can be well-ordered, then, intuitively, we can match up the elements of and from the smallest to the largest until the elements of one of the sets run out. This leads us to formulate the following result:
Note also that the process of matching up the elements of and might require infinitely many steps. Since the usual principle of mathematical induction is a finitary method, we need an extension of the principle that applies to infinite processes as well. For this, we ought to be able to count each step of these processes, which requires an infinitary generalization of the natural numbers that admits its own version of the induction principle.
Definition 6.8. An ordinal number is a set that satisfies the following properties:
(1) transitivity. every element of is a subset of ;
(2) strict well-ordering. the is-an-element-of relation is a strict well-order on .
We define the ordering relation by declaring if and only if .
We see at once that is an ordinal number. To fit the natural numbers into the framework of ordinal numbers, we declare to be the empty set and carry out a set-theoretic construction of the natural numbers as follows:
With this definition, we see that coincides with the usual order relation on . Moreover and for each natural number . The set of natural numbers is itself an ordinal number: transitivity is evident from the construction of the natural numbers, and strict well-ordering was already shown in the proof of Lemma 5.4. We write to denote the first infinite ordinal . Note that for all . In fact, . We therefore see that there are two kinds of ordinal numbers:
Definition 6.9. For each ordinal number , we define the successor of to be the set . An ordinal number is a successor ordinal if there exists an ordinal number such that ; if not, then is said to be a limit ordinal.
The natural numbers other than are successor ordinals; and are limit ordinals. Since the union of two sets always exists, the successor of an ordinal always exists. Furthermore, the union
exists and is a limit ordinal, which we denote by . Similarly, we can construct the limit ordinal for each . The analogous union
exists and is a limit ordinal, which we denote by . We shall assume that this process can be continued indefinitely:
Assumption 6.10. We assume that many limit ordinals exist. Indeed, we assume that the following strictly well-ordered sequence of ordinals exists:
Let us establish the principle of transfinite induction that was alluded to above.
Theorem 6.11 (Principle of transfinite induction). Let be a collection of ordinal numbers. If satisfies the following properties, then contains all ordinal numbers:
(2) if , then ;
(3) if is a nonzero limit ordinal and for all ordinals , then .
Remark. The collection that we have referred to in the statement above is actually a class, which, intuitively, is a collection of sets that may be too large to be a set. We will not attempt to formalize the notion of a class.
Proof. Suppose for a contradiction that does not contain all ordinal numbers. Since the ordinals are well-ordered, we can find the least ordinal such that . If is a successor ordinal, then we can find such that . Since , the ordinal must be in , whence by (2) must be in as well. If is a limit ordinal, then (3) and the least-ordinal property of implies that must be in . Both cases reach a contradiction, and so we conclude that must contain all ordinal numbers. We are now ready to prove the well-ordering principle.
Proof of the well-ordering principle. We use transfinite induction to label each element of the set by a unique ordinal, whence can be well-ordered by the well-ordering of the ordinal numbers. By the axiom of choice (Axiom 4.2), there exists a choice function on the set of nonempty subsets of . We set and define inductively for each ordinal by setting
we terminate the process when is empty. We now let be the least ordinal number such that . The set consists precisely of the elements of , labelled by ordinal numbers up to . We define an order on by declaring if and only if or . It follows at once that is a well-order on .
Recall that a usual application of the finitary induction principle takes the form of demonstrating that the construction of a desired mathematical object can be done one step at a time. If the construction cannot be completed in finitely many steps, then the principle of transfinite induction assures that the process will be completed, provided that there are no obstructions on the way—that is, the method of construction behaves well at the limit ordinal stages. We shall now capture this idea in a form that is particularly convenient to use. To this end, we need to introduce a few terms:
Definition 6.12. Let be a nonempty set, be a partial order on , and a subset of . An upper bound of in is an element such that for all . A maximal element of is an element such that no satisfies the relation .
Definition 6.13. Let be a nonempty set and let be a partial order on . A collection of elements of is said to be a chain if is a linear order on , viz., every pair of elements of satisfies the trichotomy principle:
We are now ready to state and prove the ever-famous Zorn’s lemma:
Proof. We use transfinite induction to construct a chain in that contains a maximal element of . By the axiom of choice (Axiom 4.2), there exists a choice function on the set of nonempty subsets of . We let and define inductively for each ordinal by declaring to be an element of such that for each ordinal ; we terminate the process when there is no such element . To check that this process continues until the point of termination, we must check the limit ordinal stages. But indeed, if is a limit ordinal, then is a chain in , whence by assumption we can find an upper bound . We can now find an ordinal such that no satisfies the relation . It follows that is a maximal element of .
Here is an archetypal Zorn-type argument in the context of linear algebra; proofs of this form are humorously referred to as Zornification arguments.
Example 6.15. We prove that every vector space has a basis. In fact, we establish a more general statement:
Let be a vector space over a field . Every set of linearly independent vectors in is contained in a basis of .
Fix a set of linearly independent vectors in . If is a maximal linearly-independent set, viz., every superset of is linearly dependent in , then is a basis of . Indeed, if there exists a vector such that no finite linear combination in equals , then is a linearly independent superset of , contradicting the maximality condition. We therefore assume that is non-maximal. Let be the collection of all linearly independent supersets of in , ordered by the set-inclusion relation: if and only if . If is a chain in , then is an upper bound of in . It now follows from Zorn’s lemma that there exists a maximal element of , which, by construction, is a maximal linearly-independent subset of .
Transfinite induction arguments are often useful in comparing cardinalities of sets as well. We present two examples that make use of the Cantor–Bernstein theorem (Theorem 5.9).
Example 6.16. Let and be infinite sets such that . We show that equals if , and if . We assume without loss of generality that and define a collection of pairwise-disjoint collections of subsets of such that for all . We define a partial order on by declaring that if and only if and for all . Similarly as in Example 6.15, each chain in admits an upper bound, whence Zorn’s lemma furnishes a maximal collection in .
We claim that . If not, then is nonempty. We note that must be finite. If is infinite, then , and so there exists a countably infinite subset of . Since is a pairwise-disjoint collection that contains , the maximality condition is violated. Since is finite, we see that for all . Fixing and defining
we see that . The maximality condition is once again violated, and so we must conclude that .
For each , we find a bijection . We can now construct a bijection by setting . That is a bijection follows from the fact that is a partition of . We now let and be the sets of even and odd natural numbers, respectively, and let and be bijections. By setting and , we obtain two bijections and .
We now let be an injection given by the relation . We have constructed the following sequences of injections:
We define by setting
is well-defined since . The disjointness condition also implies that is injective, since both and are injective. It now follows that the composite function is injective, and so . The reverse relation is obvious, whence we conclude from the Cantor–Bernstein theorem that .
Example 6.17. Let be an infinite set. We generalize Example 5.3 and show that . Let us define a collection
This collection is nonempty. Since is an infinite set, we see that , whence there is a countably infinite subset of . We have seen in Example 5.3 that , and so is nonempty.
Let us now define a partial order on by declaring if and only if and , viz., if is an extension of . If is a chain in , then the ordered pair defined by setting and
is an upper bound of the chain. Note that the extension property in the definition of guarantees that is well-defined. We now apply Zorn’s lemma to construct a maximal ordered pair in .
It suffices to check that . We suppose for a contradiction that . If , then we can find a bijection , whence we can define an extension of by setting . As is a bijection, the maximality condition on is violated. We therefore assume that .
Let . We must have that . Indeed, if , then, by Example 6.16, the set is equinumerous to either or , which is absurd. Let us now fix a subset of such that . Since and are disjoint, we see that , , and are pairwise disjoint. It now follows from Example 6.16 that
is equinumerous to . Recalling that and , we conclude that . Let be a corresponding bijection.
We now define a bijection by setting
is well-defined, as . The disjointness condition also implies that is a bijection, as and are bijections. Since , it follows that , contradicting the maximality of . We therefore conclude that , and so .
As an application of the equinumerosity statement, we now show that . By fixing two distinct elements , we can construct an injection by setting
Our goal is to construct an injection in the opposite direction and apply the Cantor–Bernstein theorem. To this end, we shall show that
To this end, we first construct a bijection by defining, for each , the value of to be the function defined by the formula
is a bijection: given , we can recover the set by considering the preimage .
Fix ; the function that is defined by the formula is an injection. Therefore, .
The function that sends to the composite function is a bijection. Indeed, . Therefore, .
Now, we construct a bijection as follows. An element of is a function that sends to a function , so that for all . It is then natural to set for each ordered pair . We note that is a bijection: given , we can recover a -valued function on by setting . It follows that .
To prove the relation , we take the bijection that we have constructed above. The function that sends to the composite function is a bijection, as . We have thus produced the following sequence of injections:
The composite function is an injection from to , and so . Since we have already shown that , it follows from the Cantor–Bernstein theorem that , as was to be shown. Since we have also shown that , we now know that the relation holds as well.
Exercise 6.18. Modify the above proof to show that , provided that and .
7. Additional Remarks and Further Results
7.1. The approach to set theory in this post is naïve in the sense that several assumptions are made in order to bypass the technical details and to streamline the exposition. All assumptions made in this post are provable facts in the standard framework of axiomatic set theory; an excellent exposition of axiomatic set theory at the undergraduate level can be found in the textbook of Hrbacek and Jech.
7.2. We cheated our way through the theory of ordinals by simply assuming the existence of many, many ordinals. In particular, we have completely bypassed the recursion theorem, which validates certain recursive constructions that we have tacitly made use of. See Chapter 6 of Hrbacek and Jech.
7.3. We also did not cover the theory of cardinal numbers at all. Since the equinumerosity relation is an equivalence relation, we can declare one set from each “equivalence class” to be the cardinal number representing the size of the sets in the equivalence class. Alternatively, we can define certain ordinal numbers to be cardinal numbers: see the wikipedia article on aleph numbers. An important topic in the theory of cardinal numbers is cardinal arithmetic, which we have sampled in Example 6.16 and Example 6.17. See Chapters 7 and 9 of Hrbacek and Jech.
7.4. The diagonal argument in Example 5.6 can be generalized to prove Cantor’s theorem: for every set . This, in particular, shows that there are many, many cardinal numbers. The question of whether there is a set such that leads us to the continuum hypothesis, which is a foundational problem in the theory of set-theoretic independence. My blog post on the Kirby-Paris Hydra game provides an introduction to the independence issues in set theory; see Section 6 of the post for further references.
7.5. The two main consequences of the axiom of choice in the post—the well-ordering principle and Zorn’s lemma—are provably equivalent to the axiom of choice. The proof that either of the two results implies the axiom of choice can be found in Chapter 8 of Hrbacek and Jech. On the other hand, it is not known whether the partition principle (Proposition 5.8) implies the axiom of choice: see the discussion on Math.SE.
7.6. We omitted an important property of the canonical surjection map with respect to an equivalence relation . The universal property of quotient maps states that if is a function such that whenever , then there exists a unique injective function such that . Variants of this property appear in the context of groups, rings, vector spaces, fields, and other algebraic objects. My blog post on commutative diagrams contains the statement of the universal property of quotient maps in the context of group theory.
Thanks to Colin Beeh, John Ryan, and LDH for corrections!
Students read and discuss picture book biographies of women [and men] in history. With their teacher, they build a data chart of information about each woman, highlighting her historical setting, accomplishments, and character traits. Finally, students apply what they learn to several writing projects focused on historical context and social change. While the focus of biography is on individuals, students will see that these individuals did not, and could not, succeed alone but were supported along the way by others.
- Arts, Humanities, Social Sciences
- Grade Level: Primary, Secondary
- Center for History and New Media
5km across, the fireball erupted from the island of Elugelab and engulfed the sky. The shock wave vaporised everything within 5km and scraped the neighbouring islands clean; no buildings or plants remained. Two hours later, helicopters flew over what used to be Elugelab. The island was gone. In its place was a dark blue welt in the ocean, 2km across and deep enough to hold a 17-story building. The island had been vaporised. It was 1952, and the largest bomb in the world had just been detonated.
The United States made the bomb because it was afraid. In late 1949 the Soviet Union had created and detonated ‘First Lightning’ – a nuclear bomb just like those dropped at the end of World War II. The United States was no longer the only nuclear superpower. Tensions escalated, and they needed something new. They were going to need a bigger bomb.
In January 1950, President Truman announced that the United States would develop a new bomb, superior to the A-bomb: a hydrogen bomb that would push the United States into the thermonuclear era. Unfortunately, nobody knew how to make the H-bomb.
H-bombs are thermonuclear, meaning they run on nuclear fusion. They make heat in the same way the sun and billions of other stars make their energy: two small atoms like hydrogen hit each other and combine to make a larger atom, releasing large amounts of energy in the process. The problem is that fusion needs immense heat and pressure. That difficulty is why it happens easily in the sun, but not so much on Earth.
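To give a sense of the numbers (these are standard textbook values, not figures from this piece), two of the simplest fusion reactions between heavy hydrogen isotopes release energy like this:

$${}^{2}\mathrm{H} + {}^{2}\mathrm{H} \rightarrow {}^{3}\mathrm{He} + n + 3.3\ \mathrm{MeV}, \qquad {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n + 17.6\ \mathrm{MeV}.$$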
In 1951 Stanislaw Ulam and Edward Teller overcame that barrier. With their combined ideas, thermonuclear bombs were possible – in theory. To test the theory, they needed an experiment. Project Ivy was started, and it was the perfect opportunity to test.
The Building Bomb
Project Ivy was aimed at improving U.S. nuclear weapons in two ways: the first was the H-bomb, the other was a larger A-bomb. The H-bomb was Ivy Mike; at its construction it was the largest, heaviest and most powerful bomb in existence. I say bomb – it was closer to a factory-sized nuclear fridge.
Mike was not a bomb ready to be dropped from a plane; it was designed purely as an experiment, so it looked more like an aircraft hangar or factory. It was assembled in the Pacific proving grounds on Elugelab, a small island on Enewetak Atoll. The main bomb assembly was over 6 metres tall and 2 metres wide. Covered in a metal case 30 cm thick, it was very large, shiny and cold. They nicknamed it "Sausage." Sausage weighed a dainty 56 metric tonnes.
A real-time system includes hardware and software components that enable precise control over the execution of your code. You use a PC to develop code for a real-time system.
The following figure shows a basic setup for developing a real-time application.
- Development PC—The PC manages connections between devices in the system and provides a graphical environment to create and edit real-time code. When code runs on a real-time controller, you can use the PC as the user interface to modify VI panels and view data from the controller.
- Real-time controller—A real-time controller runs a Linux real-time operating system and software that allows you to set precise timing directives and deterministic execution for real-time code. The real-time controller can also provide precise timing when communicating with FPGAs and I/O hardware. For example, if you need a set of FPGAs to each perform a unique data processing task in a particular order, you can use a real-time controller to communicate directly with the FPGAs to guarantee that each FPGA in the sequence receives instructions at a precise time.
- Real-time system—A real-time system includes the chassis, real-time controller, and other devices on the chassis. You register the real-time system to access the real-time controller and other devices in the chassis.
- Network connection—A PC and a real-time system must share a network connection so the PC can detect and register the system. Over this network connection, the PC deploys code to the real-time controller and acts as a live graphical user interface for the real-time controller. |
Science, 3 January 2014: Vol. 343, No. 6166, pp. 18-23
Archaeologists are gaining a new perspective on why ancient Britons erected great henge and circle monuments like Stonehenge. Recent studies emphasize how the work of building monuments brought geographically dispersed communities together. Surprisingly, the stone circles were part of a package of innovation that began in Scotland's far northern Orkney Islands and later spread south to transform the British landscape. |
Military recruitment is recruitment for military positions, that is, the act of requesting people, usually male adults, to join a military voluntarily. Involuntary military recruitment is known as conscription. Even before the era of all-volunteer militaries, recruitment of volunteers was an important component of filling military positions, and in countries that have abolished conscription, it is the sole means. To facilitate this process, armed forces have established recruiting commands.
Military recruitment can be considered part of military science if analysed as part of military history. Raising large forces in a relatively short period of time, especially voluntarily as opposed to through stable, gradual development, is a frequent phenomenon in history. One particular example is the regeneration of the military strength of the Communist Party of China from a depleted force of 8,000 following the Long March in 1934 into 2.8 million near the end of the Chinese Civil War 14 years later.
Recent cross-cultural studies suggest that, throughout the world, the same broad categories may be used to define recruitment appeals. They include war, economic motivation, education, family and friends, politics, and identity and psychosocial factors.
Wartime recruitment strategies in the US
Prior to the outbreak of World War I, military recruitment in the US was conducted primarily by individual states. Upon entering the war, however, the federal government took an increased role.
The increased emphasis on a national effort was reflected in World War I recruitment methods. Peter A. Padilla and Mary Riege Laner define six basic appeals to these recruitment campaigns: patriotism, job/career/education, adventure/challenge, social status, travel, and miscellaneous. Between 1915 and 1918, 42% of all army recruitment posters were themed primarily by patriotism. And though other themes - such as adventure and greater social status - would play an increased role during World War II recruitment, appeals to serve one’s country remained the dominant selling point.
Recruitment without conscription
In the aftermath of World War II military recruitment shifted significantly. With no war calling men and women to duty, the United States refocused its recruitment efforts to present the military as a career option, and as a means of achieving a higher education. A majority - 55% - of all recruitment posters would serve this end. And though peacetime would not last, factors such as the move to an all-volunteer military would ultimately keep career-oriented recruitment efforts in place. The Defense Department turned to television syndication as a recruiting aid from 1957-1960 with a filmed show, Country Style, USA.
On February 20, 1970, the President’s Commission on an All-Volunteer Armed Force unanimously agreed that the United States would be best served by an all-volunteer military. In supporting this recommendation, the committee noted that recruitment efforts would have to be intensified, as new enlistees would need to be convinced rather than conscripted. Much like the post-World War II era, these new campaigns put a stronger emphasis on job opportunity. As such, the committee recommended “improved basic compensation and conditions of service, proficiency pay, and accelerated promotions for the highly skilled to make military career opportunities more attractive.” These new directives were to be combined with “an intensive recruiting effort.” Finalized in mid-1973, the recruitment of a “professional” military was met with success. In 1975 and 1976, military enlistments exceeded expectations, with over 365,000 men and women entering the military. Though this may, in part, have been the result of a lack of civilian jobs during the recession, it nevertheless stands to underline the ways in which recruiting efforts responded to the circumstances of the time.
Indeed, recommendations made by the President's Commission continue to work in present-day recruitment efforts. Understanding the need for greater individual incentive, the US military has re-packaged the benefits of the GI Bill. Though originally intended as compensation for service, the bill is now seen as a recruiting tool. Today, the GI Bill is "no longer a reward for service rendered, but an inducement to serve and has become a significant part of recruiter's pitches.”
Recruitment can be conducted over the telephone with organized lists, through email campaigns and from face to face prospecting. While telephone prospecting is the most efficient, face to face prospecting is the most effective. Military recruiters often set up booths at amusement parks, sports stadiums and other attractions. In recent years social media has been more commonly used.
Military recruitment in the United Kingdom
During both world wars and a period after the second, military service was mandatory for at least some of the British population. At other times, techniques similar to those outlined above have been used. The most prominent concern over the years has been the minimum age for recruitment, which has been 16 for many years. This has now been raised to 18 in relation to combat operations. In recent years, there have been various concerns over the techniques used in (especially) army recruitment in relation to the portrayal of such a career as an enjoyable adventure.
Military recruitment in the United States
The American military has had recruiters since the time of the colonies in the 1700s. Today there are thousands of recruiting stations across the United States, serving the Army, Navy, Marines, and Air Force. Recruiting offices normally consist of 2-8 recruiters between the ranks of E-5 and E-7. When potential applicants walk into a recruiting station, their height and weight are checked and their background is investigated. A fingerprint scan is conducted and a practice ASVAB exam is given. Applicants cannot officially swear their enlistment oath in the recruiting office; this is conducted at a Military Entrance Processing Station (MEPS).
Military recruitment in India
From the time of the British Raj, recruitment in India has been voluntary. Using martial race theory, the British recruited heavily from selected communities for service in the colonial army. The largest of the colonial military forces, the British Indian Army of the British Raj (which it remained until it became the Military of India), was a volunteer army, raised from the native population with British officers. The Indian Army served both as a security force in India itself and, particularly during the World Wars, in other theaters. About 1.3 million men served in the First World War. During World War II, the British Indian Army would become the largest volunteer army in history, rising to over 2.5 million men in August 1945.
A recruitment centre in the UK, recruiting station in the U.S., or recruiting office in New Zealand, is a building used to recruit people into an organization, and is the most familiar method of military recruitment. The U.S. Army refers to its offices as recruiting centers as of 2012.
As suggested by the activity "From Graphs to Stories" (NCTM, Navigating Through Algebra in Grades 6-8, 29), students should be asked to associate real-life meaning to situations represented graphically. This lesson provides students with a graph of an authentic situation. Students can use any method they like to determine the meaning of points and slope on the graph as it relates to the situation of three people biking uphill.
Distribute the Pedal Power Activity Sheet, which shows the distance-time graph for three cyclists.
To begin the lesson, present students with the following situation:
Bicyclists claim that the longest steep hill in the world is in Haleakala National Park, and they have the sore muscles to prove it! The hill leads up a volcano on the island of Maui, Hawaii. Over the course of a 38-mile road, this hill rises from sea level at the coast to over 10,000 feet.
Three proficient cyclists—Laszlo, Cliantha, and Joseph—rode this entire hill to the top. They started together at the bottom of the volcano, and they reached the top at the same time. The graph shows the distance of each cyclist with respect to time.
To heighten interest in the problem, you may wish to show students pictures of the Haleakala National Park or provide some background information. Use a simple internet search to find these images.
Students may also want to use the Internet to find information on bicycles and speeds that can be maintained when riding uphill.
To get students thinking about the situation, ask the following warm-up questions:
- Lance Armstrong’s average speed in his six Tour de France victories from 1999-2004 was about 24 miles per hour. Assuming that he pedals at his average speed and takes no breaks, how long would it take him to get to the top of the volcano?
- People who aren’t Lance Armstrong can travel at about 12 miles per hour on a bike. At that speed, how long would it take to reach the top of the volcano?
[At 24 miles an hour, it would take 38/24 hours, or about 1 hour, 35 minutes, for Lance Armstrong to climb the hill. At 12 miles an hour, it would take 38/12 hours, or about 3 hours, 10 minutes, for an average biker to climb it. However, both of these estimates are probably too low, as all bikers travel slower going uphill.]
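If you want a quick way to check these figures, here is a small illustrative calculation (plain Python, offered only as a convenience; it is not part of the published lesson materials):

```python
# Time to climb the 38-mile road at two assumed average speeds.
distance_miles = 38

for label, speed_mph in [("Lance Armstrong (~24 mph)", 24),
                         ("typical rider (~12 mph)", 12)]:
    hours = distance_miles / speed_mph          # time = distance / speed
    h, m = int(hours), round((hours - int(hours)) * 60)
    print(f"{label}: {hours:.2f} hours, i.e. about {h} h {m} min")

# Prints roughly 1.58 hours (1 h 35 min) and 3.17 hours (3 h 10 min),
# matching the estimates above.
```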
After a brief discussion, allow students to consider each question on the activity sheet individually. Then, have students share their thoughts with a small group. Each group should reach consensus and then present their results to the class. A whole-class discussion should follow, focusing on the question groups below:
- Estimate the vertical coordinate of B. Justify your guess.
- Estimate the horizontal coordinate of B. Justify your guess.
- What are the coordinates of B?
- What are the coordinates of A? Explain your answer.
- What are the coordinates of C? Explain your answer.
[A biker on flat ground can average about 12 miles per hour. Climbing Haleakala, an average biker would likely go much slower, maybe 5-9 miles per hour. Therefore, it would take about 4-7 hours to complete the ride up Haleakala, so a reasonable estimate for the x‑coordinate of B is about 5 hours. The distance to the top of Haleakala is 38 miles, so the y‑coordinate of B is 38 miles. With B at (5,38), students might estimate the coordinates of A to be roughly (2,26) and the coordinates of C to be roughly (3,13).]
- Which cyclist had a steady speed all the way up the hill? How do you know?
- Which cyclist was slow at first and then sped up? How do you know?
- How would you describe Laszlo’s speed?
[Cliantha held a consistent pace up the hill, because the slope of her line never changed. Joseph started slowly and then increased his speed, which is evident by an increase in slope. On the other hand, Laszlo started very quickly but then slowed down, because the slope of his line decreased.]
- The three cyclists started together at the bottom, and they reached the top at the same time. Is there any other time that Laszlo, Cliantha, and Joseph were at the same height at the same time? How do you know?
[No, there are no other times when they were at the same height. If they were, their lines would cross at locations other than O and B.]
- Find the slope of each line segment on the graph. What does each slope mean in the context of the problem?
[Assuming a time of 5 hours to travel the 38 miles up Haleakala, the slope of Cliantha’s line is 38/5 = 7.6. This means that Cliantha’s speed was 7.6 miles per hour for the entire trip. From the bottom to A, the slope is 26/2 = 13, meaning that Laszlo’s average speed for the first portion of the ride was about 13 miles per hour. He then slowed down, and his speed dropped to (38 ‑ 26) / (5 ‑ 2) = 4 miles per hour for the remainder. From the bottom to C, the slope is 13/3 ≈ 4.3, and from C to the top, the slope is (38 ‑ 13) / (5 ‑ 3) = 12.5. This indicates that Joseph’s speed increased from 4.3 miles per hour to 12.5 miles per hour.]
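For teachers who want to verify the arithmetic quickly, here is a short script using the coordinates assumed in the answer above (O = (0,0), A = (2,26), B = (5,38), C = (3,13)); it is only a convenience, not part of the activity sheet:

```python
# Average speed (slope) between two (time in hours, distance in miles) points.
def slope(p, q):
    (t1, d1), (t2, d2) = p, q
    return (d2 - d1) / (t2 - t1)

O, A, B, C = (0, 0), (2, 26), (5, 38), (3, 13)

print("Cliantha, whole climb (O to B):", slope(O, B))          # 7.6 mph
print("Laszlo, first part   (O to A):", slope(O, A))           # 13.0 mph
print("Laszlo, second part  (A to B):", slope(A, B))           # 4.0 mph
print("Joseph, first part   (O to C):", round(slope(O, C), 1)) # 4.3 mph
print("Joseph, second part  (C to B):", slope(C, B))           # 12.5 mph
```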
Gender Segregation Among Childhood Friends
Another prominent feature of children's friendships is gender segregation—the tendency of children to associate with others of their same sex. Consider the situation we observed while testing 4-year-old children in a preschool. As the children returned from their outside play period, a new boy in class took a seat in a circle of chairs. Several other boys ran immediately to him, yelling, "Get up, that's where the girls sit!" Hearing this, the new boy leaped up and began to furiously dust off the back of his pants! What did he think was on the chair? Cooties?
There is no doubt that gender segregation exists. In fact, it is nearly universal, occurring in every cultural setting in which researchers have observed children selecting playmates (Fabes, Martin, & Hanish, 2003; Whiting & Edwards, 1988). But how does it begin, and why? There are no clear answers to these questions, but we can learn more by looking at how gender segregation evolves across childhood and adolescence.
By 2 to 3 years of age, children are beginning to show a clear preference for playing with other children of their own sex (Serbin, Moller, Gulko, Powlishta, & Colburne, 1994). At this age children are more interactive and sociable when playing with same-sex friends. When they are with the opposite sex, they tend to watch or play alongside the other child rather than interact directly. Gender segregation is very prominent after the age of 3. Preschool children spend very little time playing one-on-one with the opposite sex. They spend some time in mixed-sex groups but spend most of their time, by far, playing with same-sex peers. By 6 years, segregation is so firm that if you watch 6-year-olds on the playground, you should expect to see only 1 girl-boy group for every 11 boy-boy or girl-girl groups (Maccoby & Jacklin, 1987).
Reasons for Gender Segregation
Why does gender segregation exist? Let's consider the most prominent theories.
- Play compatibility: Some researchers believe that gender segregation occurs because children seek partners whose play styles match or complement their own (Serbin et al., 1994). With toddlers and young children, the first to segregate tend to be the most active and disruptive boys and the most socially sensitive girls (Fabes, 1994; Serbin et al., 1994). Both types of children prefer to play with others like them.
- Cognitive schemas: Children develop concepts or ideas (schemas) about what boys and girls are typically like. These concepts include stereotyped, and often exaggerated, notions about gender differences. Examples: "Boys are rough and like to fight and play with trucks" and "Girls are nice and like to talk and play with dolls." Children use these cognitive schemas as filters when they judge themselves and observe other children (Martin, 1994). "I am a boy, so I like to play with trucks" is a concept that may lead boys to seek each other as playmates. Schemas can also cause children to filter out or misremember instances that contradict the schema. Children discount the number of times they've seen girls play ball and boys play with dolls, for example. As children learn gender-based schemas, their play and playmate preferences become more segregated.
- Operant conditioning: Reward and punishment also contribute to gender segregation. Boys in particular—like the new boy in preschool who sat "where the girls sit"—tend to incur harsh criticism when they cross gender lines to play with girls (Fagot, 1977, 1994; Fagot & Patterson, 1969). For most boys, being called a "sissy" is a major insult. Although some girls revel in being "tomboys," others can feel conflicted about being associated with stereotypically masculine activities. Whether consciously or only inadvertently, parents, teachers, peers, the media, and others contribute to gender segregation by reinforcing or rewarding sex-typed behaviors in boys and girls and by punishing behavior that does not conform to stereotypes.
- Psychoanalytic theory: One of the oldest views on gender segregation was the theory formulated by Sigmund Freud. Although Freud's view isn't given much credence today, he offered the explanation that gender segregation occurs as children repress their sexual feelings during the latency stage of development. That is, children avoid interactions with the opposite sex to avoid the guilty feelings they associate with sexuality. During this stage, children channel their energies into less threatening pursuits such as collecting trading cards or dolls. When they play with opposite-sex friends, children often get teased about being "in love" or "going with" their friend. "No boys allowed!" is the warning posted on many girls' playhouses, and boys reciprocate—at least until the onset of puberty changes things.
© ______ 2009, Allyn & Bacon, an imprint of Pearson Education Inc. Used by permission. All rights reserved. The reproduction, duplication, or distribution of this material by any means including but not limited to email and blogs is strictly prohibited without the explicit permission of the publisher.
The Bible describes a frightening, huge creature that lived in the sea. Called Leviathan, it had fearsome teeth and armor-plated scales that spears and arrows could not penetrate. A Norse legend tells of a colossal sea creature with many arms that could capsize a sailing ship. In 1918, a group of lobster fishermen in Port Stephens, Australia, reported an encounter with “an immense shark of almost unbelievable proportions.”
Fossil evidence shows that the oceans once teemed with sea monsters, and some are still alive today.
With incredible detail, this book describes:
- Mosasaurus, the T-Rex of the sea
- Megalodon, the shark longer than a school bus
- Sarcosuchus, the “super croc” that weighed up to 8 tons
- Archelon, the sea turtle as large as a Volkswagen Beetle
- And many other sea-dwelling monsters, past and present
Each creature is featured with full-color illustrations, size comparisons, skeleton drawings, and a fast-facts section about how these creatures lived and died.
Hardback: 76 pages |
Digital cameras (either still or video) must overcome the problem of capturing the real world in full colour. Images are generally reproduced using an RGB (or similar) scheme. It makes sense, therefore, to use a similar scheme to capture the image at source.
Real world objects, of course, produce full spectrum radiation. The use of RGB and other schemes relies on the human eye's inherent properties - that is, it sees red, green and blue (in a fairly narrow field) and general intensity and lets the brain interpolate the world from that (but that's a story for another node).
So the digital camera emulates the retina to a degree. It doesn't need to capture the raw photons and measure their frequency. It need merely capture enough information to allow a convincing image to be built later. The real world can be filtered down to some pretty simple components. But how to do it?
Some of the methods are:
- split the beam and filter in parallel
- capture multiple images and filter sequentially
- filter at the pixel level of a single image
All the options have advantages and disadvantages. The last has some key advantages for small devices: it's cheap and compact.
Now the issue of choosing a filter must be addressed. Again, we can look to the human eye. As mentioned above, the eye is sensitive to red, green, blue and intensity. In fact, it's more sensitive to green and everything is relative to intensity. So, we can model this with the filter we choose. If we divide our sensor into pixels (as is done in a CCD), we must choose what filter to place over each pixel. The Bayer filter is one of the three most common options.
Imagine if you will (or see below for URLs) a mask over the CCD that repeats the following 2x2 pattern, with one red filter, one blue filter and two green filters:

R G
G B

Each square of four pixels captures twice as much green as red or blue. Computational methods (see Bayer Pattern for technicalities) are then used to produce an image file in conventional RGB format.
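As a rough illustration of the idea (not a description of any real camera's processing pipeline), here is a simplified sketch in Python/NumPy; the RGGB layout and the crude block-averaging reconstruction are assumptions chosen for brevity:

```python
import numpy as np

# Simulate a Bayer (RGGB) mosaic: each sensor pixel records one colour only.
#   R G
#   G B
def to_bayer_mosaic(rgb):
    """Keep one colour channel per pixel, matching the 2x2 RGGB layout."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red   at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue  at odd rows, odd cols
    return mosaic

def naive_demosaic(mosaic):
    """Crude reconstruction: one RGB pixel per 2x2 block, averaging the
    two green samples. Real cameras interpolate at full resolution."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2
    b = mosaic[1::2, 1::2]
    return np.dstack([r, g, b])

# A flat-colour 4x4 test image survives the round trip unchanged.
flat = np.tile(np.array([0.8, 0.5, 0.2]), (4, 4, 1))
print(naive_demosaic(to_bayer_mosaic(flat)))  # every pixel ~ [0.8, 0.5, 0.2]
```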
One alternative worth mentioning here is the Complementary Colour Mosaic Filter, which uses the following mask:
The benefit here is that the filter allows twice as many photons to pass. It's the way a filter works: a green filter takes white light and eliminates two thirds of the visible spectrum (the eliminated spectrum would appear magenta); a magenta filter takes white light and eliminates one third of the visible spectrum (the eliminated spectrum would appear green). In a complementary filter system, you get to detect twice as many photons for the same information, but the maths is marginally harder.
laconic says there's a variation on the above that introduces a green filter to enhance the accuracy of green sensitivity (again to be in keeping with the eye's response). I'll add a reference when available.
For how the eye detects colour, see
until I find an E2 node. |
A pentatonic scale is a musical scale with five notes per octave, in contrast to the more familiar heptatonic scale that has seven notes per octave (such as the major scale and minor scale). Pentatonic scales are encountered all over the world, for example (just to name a few) Chinese music and US country music and blues.
Pentatonic scales are divided into those with semitones (hemitonic) and those without (anhemitonic).
Pentatonic scales occur in the following traditions:
- Peruvian Chicha cumbia
- Celtic folk music
- English folk music
- German folk music
- Nordic folk music
- Hungarian folk music
- Croatian folk music
- West African music
- African-American spirituals
- Gospel music
- Bluegrass music
- American folk music
- Music of Ethiopia
- Rock music
- Sami joik singing
- Children's song
- The music of ancient Greece
- Music of southern Albania
- Folk songs of peoples of the Middle Volga region (such as the Mari, the Chuvash and Tatars)
- The tuning of the Ethiopian krar and the Indonesian gamelan
- Philippine kulintang
- Native American music, especially in highland South America (the Quechua and Aymara), as well as among the North American Indians of the Pacific Northwest
- Most Turkic, Mongolic and Tungusic music of Siberia and the Asiatic steppe is written in the pentatonic scale
- Melodies of China, Korea, Laos, Thailand, Cambodia, Malaysia, Japan, and Vietnam (including the folk music of these countries)
- Andean music
- Afro-Caribbean music
- Polish highlanders from the Tatra Mountains
- Western Impressionistic composers such as French composer Claude Debussy.
Examples of its use include Chopin's Etude in G-flat major, op. 10, no. 5, the "Black Key" etude, in the major pentatonic.
Hemitonic and anhemitonic
Musicology commonly classifies pentatonic scales as either hemitonic or anhemitonic. Hemitonic scales contain one or more semitones and anhemitonic scales do not contain semitones. (For example, in Japanese music the anhemitonic yo scale is contrasted with the hemitonic in scale.) Hemitonic pentatonic scales are also called "ditonic scales", because the largest interval in them is the ditone (e.g., in the scale C-E-F-G-B-C, the interval found between C-E and G-B). This should not be confused with the identical term also used by musicologists to describe a scale including only two notes.
Major pentatonic scale
Anhemitonic pentatonic scales can be constructed in many ways. The major pentatonic scale may be thought of as a gapped or incomplete major scale. However, the pentatonic scale has a unique character and is complete in terms of tonality. One construction takes five consecutive pitches from the circle of fifths; starting on C, these are C, G, D, A, and E. Transposing the pitches to fit into one octave rearranges the pitches into the major pentatonic scale: C, D, E, G, A.
Another construction works backward: It omits two pitches from a diatonic scale. If one were to begin with a C major scale, for example, one might omit the fourth and the seventh scale degrees, F and B. The remaining notes then makes up the major pentatonic scale: C, D, E, G, and A.
Omitting the third and seventh degrees of the C major scale obtains the notes for another transpositionally equivalent anhemitonic pentatonic scale: F, G, A, C, D. Omitting the first and fourth degrees of the C major scale gives a third anhemitonic pentatonic scale: G, A, B, D, E.
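Here is a small sketch of the two constructions just described, using pitch-class arithmetic (sharp-only spellings are used for simplicity; this is an illustration, not part of the original article):

```python
# Construction 1: five consecutive pitches from the circle of fifths,
# then fold them into one octave and sort.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circle_of_fifths_pentatonic(start="C"):
    i = NOTES.index(start)
    stacked = [(i + 7 * k) % 12 for k in range(5)]    # C, G, D, A, E
    return [NOTES[p] for p in sorted(stacked)]         # fold into one octave

# Construction 2: drop the 4th and 7th degrees of the major scale.
def gapped_major_pentatonic(major_scale):
    return [note for k, note in enumerate(major_scale, start=1) if k not in (4, 7)]

print(circle_of_fifths_pentatonic("C"))                               # ['C', 'D', 'E', 'G', 'A']
print(gapped_major_pentatonic(["C", "D", "E", "F", "G", "A", "B"]))   # ['C', 'D', 'E', 'G', 'A']
```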
The black keys on a piano keyboard comprise a G-flat major (or equivalently, F-sharp major) pentatonic scale: G-flat, A-flat, B-flat, D-flat, and E-flat, which is exploited in Chopin's black-key étude.
Minor pentatonic scale
Although various hemitonic pentatonic scales might be called minor, the term is most commonly applied to the relative minor pentatonic derived from the major pentatonic, using scale tones 1, 3, 4, 5, and 7 of the natural minor scale. It may also be considered a gapped blues scale. The C minor pentatonic is C, E-flat, F, G, B-flat. The A minor pentatonic, the relative minor of C, comprises the same tones as the C major pentatonic, starting on A, giving A, C, D, E, G. This minor pentatonic contains all three tones of an A minor triad.
Five black-key pentatonic scales of the piano
The five pentatonic scales found by running up the black keys on the piano are:
- Minor pentatonic: A C D E G A, D F G A C D, or E G A B D E
- Major pentatonic: C D E G A C, F G A C D F, or G A B D E G
- Egyptian, suspended: D E G A C D, G A C D F G, or A B D E G A
- Blues minor, Man Gong: E G A C D E, A C D F G A, or B D E G A B
- Blues major, Ritsusen, yo scale: G A C D E G, C D F G A C, or D E G A B D
- (A minor seventh can be 7:4, 16:9, or 9:5; a major sixth can be 27:16 or 5:3. Both were chosen to minimize ratio parts.)
Ricker assigned the major pentatonic scale mode I while Gilchrist assigned it mode III.
Ben Johnston gives the following Pythagorean tuning for the minor pentatonic scale:
Naturals in that table are not the alphabetic series A to G without sharps and flats: naturals are reciprocals of terms in the harmonic series, which are in practice multiples of a fundamental frequency. This may be derived by proceeding with the principle that historically gives the Pythagorean diatonic and chromatic scales, stacking perfect fifths with 3:2 frequency proportions (C-G-D-A-E). Considering the anhemitonic scale as a subset of a just diatonic scale, it is tuned thus: 20:24:27:30:36 (A-C-D-E-G = 5/3-1/1-9/8-5/4-3/2). Assigning precise frequency proportions to the pentatonic scales of most cultures is problematic as tuning may be variable.
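To relate the just ratios quoted above to equal temperament, they can be converted to cents; the following short calculation is only illustrative (it takes 24 in the 20:24:27:30:36 proportion as C = 1/1, with A raised an octave so that it lies above C):

```python
import math

def cents(ratio):
    """Interval size in cents: 1200 * log2(frequency ratio)."""
    return 1200 * math.log2(ratio)

# Just tuning of the anhemitonic scale, from the proportion 20:24:27:30:36
# with 24 taken as C = 1/1 (the 20 is A an octave down, so use 40 here).
just_ratios = {"C": 24/24, "D": 27/24, "E": 30/24, "G": 36/24, "A": 40/24}

for note, ratio in just_ratios.items():
    print(f"{note}: {cents(ratio):6.1f} cents above C")

# D ~ 203.9, E ~ 386.3, G ~ 702.0, A ~ 884.4 cents.
# An exactly equal-tempered five-note octave would instead use steps of
# 1200 / 5 = 240 cents.
```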
For example, the slendro anhemitonic scale and its modes of Java and Bali are said to approach, very roughly, an equally-tempered five-note scale, but their tunings vary dramatically from gamelan to gamelan.
Composer Lou Harrison has been one of the most recent proponents and developers of new pentatonic scales based on historical models. Harrison and William Colvig tuned the slendro scale of the gamelan Si Betty to overtones 16:19:21:24:28 (1/1-19/16-21/16-3/2-7/4). They tuned the Mills gamelan so that the intervals between scale steps are 8:7-7:6-9:8-8:7-7:6. (1/1-8/7-4/3-3/2-12/7-2/1 = 42:48:56:63:72)
Further pentatonic musical traditions
The major pentatonic scale is the basic scale of the music of China and the music of Mongolia as well as many Southeast Asian musical traditions such as that of the Karen people (whose music has sometimes been described as sounding "Scottish"). The fundamental tones (without meri or kari techniques) rendered by the five holes of the Japanese shakuhachi flute play a minor pentatonic scale. The yo scale used in Japanese shomyo Buddhist chants and gagaku imperial court music is an anhemitonic pentatonic scale shown below, which is the fourth mode of the major pentatonic scale.
In Javanese gamelan music, the slendro scale has five tones, of which four are emphasized in classical music. Another scale, pelog, has seven tones, and is generally played using one of three five-tone subsets known as pathet, in which certain notes are avoided while others are emphasized.
Ethiopian music uses a distinct modal system that is pentatonic, with characteristically long intervals between some notes. As with many other aspects of Ethiopian culture and tradition, tastes in music and lyrics are strongly linked with those in neighboring Eritrea, Somalia, Djibouti and Sudan.
In Scottish music, the pentatonic scale is very common. Seumas MacNeill suggests that the Great Highland bagpipe scale with its augmented fourth and diminished seventh is "a device to produce as many pentatonic scales as possible from its nine notes". Roderick Cannon explains these pentatonic scales and their use in more detail, both in Piobaireachd and light music. It also features in Irish traditional music, either purely or almost so. The minor pentatonic is used in Appalachian folk music. Blackfoot music most often uses anhemitonic tetratonic or pentatonic scales.
In Andean music, the pentatonic scale is used substantially, most often in minor, sometimes in major, and seldom in other scales. In the most ancient genres of Andean music, performed without string instruments (only with winds and percussion), the pentatonic melody is often doubled in parallel fifths and fourths, so formally this music is hexatonic. Hear, for example, "Pacha Siku".
Jazz music commonly uses both the major and the minor pentatonic scales. Pentatonic scales are useful for improvisers in modern jazz, pop, and rock contexts because they work well over several chords diatonic to the same key, often better than the parent scale. For example, the blues scale is predominantly derived from the minor pentatonic scale, a very popular scale for improvisation in the realms of blues and rock alike. For instance, over a C major triad (C, E, G) in the key of C major, the note F can be perceived as dissonant as it is a half step above the major third (E) of the chord. It is for this reason commonly avoided. Using the major pentatonic scale is an easy way out of this problem. The scale tones 1, 2, 3, 5, 6 (from the major pentatonic) are either major triad tones (1, 3, 5) or common consonant extensions (2, 6) of major triads. For the corresponding relative minor pentatonic, scale tones 1, ♭3, 4, 5, ♭7 work the same way, either as minor triad tones (1, ♭3, 5) or as common extensions (4, ♭7), as they all avoid being a half step from a chord tone.
U.S. military cadences, or jodies, which keep soldiers in step while marching or running, also typically use pentatonic scales.
Hymns and other religious music sometimes use the pentatonic scale; for example, the melody of the hymn "Amazing Grace", one of the most famous pieces in religious music.
The common pentatonic major and minor scales (C-D-E-G-A and C-E♭-F-G-B♭, respectively) are useful in modal composing, as both scales allow a melody to be modally ambiguous between their respective major (Ionian, Lydian, Mixolydian) and minor (Aeolian, Phrygian, Dorian) modes (Locrian excluded). With either modal or non-modal writing, however, the harmonization of a pentatonic melody does not necessarily have to be derived from only the pentatonic pitches.
Use in education
The pentatonic scale plays a significant role in music education, particularly in Orff-based, Kodály-based, and Waldorf methodologies at the primary or elementary level.
The Orff system places a heavy emphasis on developing creativity through improvisation in children, largely through use of the pentatonic scale. Orff instruments, such as xylophones, bells and other metallophones, use wooden bars, metal bars or bells, which can be removed by the teacher, leaving only those corresponding to the pentatonic scale, which Carl Orff himself believed to be children's native tonality.
Children begin improvising using only these bars, and over time, more bars are added at the teacher's discretion until the complete diatonic scale is being used. Orff believed that the use of the pentatonic scale at such a young age was appropriate to the development of each child, since the nature of the scale meant that it was impossible for the child to make any real harmonic mistakes.
In Waldorf education, pentatonic music is considered to be appropriate for young children due to its simplicity and unselfconscious openness of expression. Pentatonic music centered on intervals of the fifth is often sung and played in early childhood; progressively smaller intervals are emphasized within primarily pentatonic music as children progress through the early school years. At around nine years of age the music begins to center first on folk music using a six-tone scale, and then on the modern diatonic scales, with the goal of reflecting the children's developmental progress in their musical experience. Pentatonic instruments used include lyres, pentatonic flutes, and tone bars; special instruments have been designed and built for the Waldorf curriculum.
- ^ a b c d Bruce Benward and Marilyn Nadine Saker (2003), Music: In Theory and Practice, seventh edition (Boston: McGraw Hill), vol. I, p. 37. ISBN 978-0-07-294262-0.
- ^ Bruce Benward and Marilyn Nadine Saker, Music in Theory and Practice, eighth edition (Boston: McGraw Hill, 2009): vol. II, p. 245. ISBN 978-0-07-310188-0.
- ^ June Skinner Sawyers (2000). Celtic Music: A Complete Guide. [United States]: Da Capo Press. p. 25. ISBN 978-0-306-81007-7.
- ^ Ernst H. Meyer, Early English Chamber Music: From the Middle Ages to Purcell, second edition, edited by Diana Poulton (Boston: Marion Boyars Publishers, Incorporated, 1982): p. 48. ISBN 9780714527772.
- ^ Judit Frigyesi (2013). Is there such a thing as Hungarian-Jewish music? In Pál Hatos & Attila Novák (Eds.) (2013). Between Minority and Majority: Hungarian and Jewish/Israeli ethnical and cultural experiences in recent centuries. Budapest: Balassi Institute. page 129. ISBN 978-963-89583-8-9.
- ^ a b Benjamin Suchoff (Ed.) (1997). Béla Bartók Studies in Ethnomusicology. Lincoln: U of Nebraska Press. p. 198. ISBN 0-8032-4247-6.
- ^ a b c Richard Henry (n.d.). Culture and the Pentatonic Scale: Exciting Information On Pentatonic Scales. n.p.: World Wide Jazz. p. 4.
- ^ Erik Halbig (2005). Pentatonic Improvisation: Modern Pentatonic Ideas for Guitarists of All Styles, Book & CD. Van Nuys, CA.: Alfred Music Publishing. p. 4. ISBN 978-0-7390-3765-2.
- ^ Lenard C. Bowie, DMA (2012). AFRICAN AMERICAN MUSICAL HERITAGE: An Appreciation, Historical Summary, and Guide to Music Fundamentals. Philadelphia: Xlibris Corporation. p. 259. ISBN 978-1-4653-0575-6.
- ^ Jesper Rbner-Petersen (2011). The Mandolin Picker's Guide to Bluegrass Improvisation. Pacific, MO.: Mel Bay Publications. p. 17. ISBN 978-1-61065-413-5.
- ^ William Duckworth (2009). A Creative Approach to Music Fundamentals. Boston: Schirmer / Cengage Learning. p. 203. ISBN 1-111-78406-X.
- ^ Kurt Johann Ellenberger (2005). Materials and Concepts in Jazz Improvisation. Grand Rapids, Mich.: Keystone Publication / Assayer Publishing. p. 65. ISBN 978-0-9709811-3-4.
- ^ Edward Komara, (Ed.) (2006). Encyclopedia of the Blues. New York: Routledge. p. 863. ISBN 978-0-415-92699-7.
- ^ Joe Walker (6 January 2012). "The World's Most-Used Guitar Scale: A Minor Pentatonic". DeftDigits Guitar Lessons.
- ^ Kathryn Burke. "The Sami Yoik". Sami Culture.
- ^ Jeremy Day-O'Connell (2007). Pentatonicism from the Eighteenth Century to Debussy. Rochester: University of Rochester Press. p. 54. ISBN 978-1-58046-248-8.
- ^ M. L. West (1992). Ancient Greek Music. Oxford: Clarendon Press. pp. 163-64. ISBN 0198149751.
- ^ A.-F. Christidis; Maria Arapopoulou; Maria Christi (2007). A History of Ancient Greek: From the Beginnings to Late Antiquity. Cambridge: Cambridge University Press. p. 1432. ISBN 978-0-521-83307-3.
- ^ Meri-Sofia Lakopoulos (2015). The Traditional Iso-polyphonic song of Epirus. The International Research Center for Traditional Polyphony. June 2015, Issue 18. p.10.
- ^ Spiro J. Shetuni (2011). Albanian Traditional Music: An Introduction, with Sheet Music and Lyrics for 48 Songs. Jefferson, N. C.: McFarland. p. 38. ISBN 978-0-7864-8630-4.
- ^ Simon Broughton; Mark Ellingham; Richard Trillo (1999). World Music: Africa, Europe and the Middle East. London: Rough Guides. p. 160. ISBN 978-1-85828-635-8.
- ^ Mark Phillips (2002). GCSE Music. Oxford: Heinemann. p. 97. ISBN 978-0-435-81318-5.
- ^ Willi Apel (1969). Harvard Dictionary of Music. Cambridge, Mass.: Harvard University Press. p. 665. ISBN 978-0-674-37501-7.
- ^ Dale A. Olsen & Daniel E. Sheehy (Eds.) (1998). The Garland Encyclopedia of World Music, Volume 2: South America, Mexico, Central America, and the Caribbean. New York: Taylor & Francis. p. 217. ISBN 0824060407. (Thomas Turino (2004) points out that the pentatonic scale, although widespread, cannot be considered to be predominant in the Andes: Local practices among the Aymara and Kechua in Conima and Canas, Southern Peru in Malena Kuss (Ed.) (2004). Music in Latin America and the Caribbean: an encyclopedic history. Austin: University of Texas Press. p. 141. ISBN 0292702981. )
- ^ Burton William Peretti (2009). Lift Every Voice: The History of African American Music. New York: Rowman & Littlefield Publishers. p. 39. ISBN 978-0-7425-5811-3.
- ^ Anna Czekanowska; John Blacking (2006). Polish Folk Music: Slavonic Heritage - Polish Tradition - Contemporary Trends. Cambridge: Cambridge University Press. p. 189. ISBN 978-0-521-02797-7.
- ^ Jeremy Day-O'Connell (2009). "Debussy, Pentatonicism, and the Tonal Tradition" (PDF). Music Theory Spectrum. 31 (2): 225-261.
- ^ Susan Miyo Asai (1999). Nōmai Dance Drama, p. 126. ISBN 978-0-313-30698-3.
- ^ Minoru Miki, Marty Regan, Philip Flavin (2008). Composing for Japanese instruments, p. 2. ISBN 978-1-58046-273-0.
- ^ Jeff Todd Titon (1996). Worlds of Music: An Introduction to the Music of the World's Peoples, Shorter Version. Boston: Cengage Learning. ISBN 9780028726120. page 373.
- ^ Anon., "Ditonus", The New Grove Dictionary of Music and Musicians, second edition, edited by Stanley Sadie and John Tyrrell (London: Macmillan Publishers, 2001); Bence Szabolcsi, "Five-Tone Scales and Civilization", Acta Musicologica 15, nos. 1-4 (January-December 1943): pp. 24-34, citation on p. 25.
- ^ Benward & Saker (2003), p. 36.
- ^ Paul Cooper, Perspectives in Music Theory: An Historical-Analytical Approach (New York: Dodd, Mead, 1973), p. 18. ISBN 0-396-06752-2.
- ^ Steve Khan (2002). Pentatonic Khancepts. Alfred Music Publishing. ISBN 978-0-7579-9447-0. p. 12.
- ^ Ramon Ricker (1999). Pentatonic Scales for Jazz Improvisation. Lebanon, Ind.: Studio P/R, Alfred Publishing Co. ISBN 978-1-4574-9410-9. cites Annie G. Gilchrist (1911). "Note on the Modal System of Gaelic Tunes". Journal of the Folk-Song Society. 4 (16): 150-53. JSTOR 4433969. - via JSTOR (subscription required)
- ^ Ben Johnston, "Scalar Order as a Compositional Resource", Perspectives of New Music 2, no. 2 (Spring-Summer 1964): pp. 56-76. Citation on p. 64 . (subscription required) Accessed 01/04 2009 02:05.
- ^ Leta E. Miller and Fredric Lieberman (Summer 1999). "Lou Harrison and the American Gamelan", p. 158, American Music, Vol. 17, No. 2, pp. 146-78.
- ^ "The representations of slendro and pelog tuning systems in Western notation shown above should not be regarded in any sense as absolute. Not only is it difficult to convey non-Western scales with Western notation..." Jennifer Lindsay, Javanese Gamelan (Oxford and New York: Oxford University Press, 1992), pp. 39-41. ISBN 0-19-588582-1.
- ^ Lindsay (1992), p. 38-39: "Slendro is made up of five equal, or relatively equal, intervals".
- ^ "... in general, no two gamelan sets will have exactly the same tuning, either in pitch or in interval structure. There are no Javanese standard forms of these two tuning systems." Lindsay (1992), pp. 39-41.
- ^ Miller & Lieberman (1999), p. 159.
- ^ Miller & Lieberman (1999), p. 161.
- ^ Japanese Music, Cross-Cultural Communication: World Music, University of Wisconsin - Green Bay.
- ^ Sumarsam (1988) Introduction to Javanese Gamelan.
- ^ Mohamed Diriye Abdullahi (2001). Culture and Customs of Somalia. Greenwood Publishing Group. p. 170. ISBN 0-313-31333-4.
Somali music, a unique kind of music that might be mistaken at first for music from nearby countries such as Ethiopia, the Sudan, or even Arabia, can be recognized by its own tunes and styles.
- ^ Tekle, Amare (1994). Eritrea and Ethiopia: from conflict to cooperation. The Red Sea Press. p. 197. ISBN 0-932415-97-0.
Djibouti, Eritrea, Ethiopia, Somalia and Sudan have significant similarities emanating not only from culture, religion, traditions, history and aspirations[...] They appreciate similar foods and spices, beverages and sweets, fabrics and tapestry, lyrics and music, and jewelry and fragrances.
- ^ Seumas MacNeil and Frank Richardson Piobaireachd and its Interpretation (Edinburgh: John Donald Publishers Ltd, 1996): p. 36. ISBN 0-85976-440-0
- ^ Roderick D. Cannon The Highland Bagpipe and its Music (Edinburgh: John Donald Publishers Ltd, 1995): pp. 36-45. ISBN 0-85976-416-8
- ^ Bruno Nettl, Blackfoot Musical Thought: Comparative Perspectives (Ohio: The Kent State University Press, 1989): p. 43. ISBN 0-87338-370-2.
- ^ "The Pentatonic and Blues Scale". How To Play Blues Guitar. 2008-07-09. Retrieved .
- ^ "NROTC Cadences". Retrieved .
- ^ Steve Turner, Amazing Grace: The Story of America's Most Beloved Song (New York: HarperCollins, 2002): p. 122. ISBN 0-06-000219-0.
- ^ Beth Landis; Polly Carder (1972). The Eclectic Curriculum in American Music Education: Contributions of Dalcroze, Kodaly, and Orff. Washington D.C.: Music Educators National Conference. p. 82. ISBN 978-0-940796-03-4.
- ^ Amanda Long. "Involve Me: Using the Orff Approach within the Elementary Classroom". The Keep. Eastern Illinois University. p. 7. Retrieved 2015.
- ^ Andrea Intveen, Musical Instruments in Anthroposophical Music Therapy with Reference to Rudolf Steiner's Model of the Threefold Human Being
- Jeremy Day-O'Connell, Pentatonicism from the Eighteenth Century to Debussy (Rochester: University of Rochester Press 2007) - the first comprehensive account of the increasing use of the pentatonic scale in 19th-century Western art music, including a catalogue of over 400 musical examples.
- Trần Văn Khê, "Le pentatonique est-il universel? Quelques réflexions sur le pentatonisme", The World of Music 19, nos. 1-2:85-91 (1977). English translation: "Is the pentatonic universal? A few reflections on pentatonism" pp. 76-84. - via JSTOR (subscription required)
- Kurt Reinhard, "On the problem of pre-pentatonic scales: particularly the third-second nucleus", Journal of the International Folk Music Council 10 (1958). - via JSTOR (subscription required)
- Yamaguchi Masaya (New York: Charles Colin, 2002; New York: Masaya Music, Revised 2006). Pentatonicism in Jazz: Creative Aspects and Practice. ISBN 0-9676353-1-4
- Jeff Burns, Pentatonic Scales for the Jazz-Rock Keyboardist (Lebanon, Ind.: Houston Publishing, 1997). ISBN 978-0-7935-7679-1. |
EUROPA: Research Information Centre
Last Update: 2014-04-02 Source: Research Headlines
View this page online at: http://ec.europa.eu/research/infocentre/article_en.cfm?artid=31819
Measuring the universe to catch a glimpse of our past
Peering into the very depths of the universe gives scientists a better understanding of its origins. Since the speed of light is finite, the objects we are seeing are from the distant past. A recently completed EU-funded project developed not only a new means of measuring these cosmic distances, but also discovered galaxies at the point of their creation.
It is difficult to comprehend the sheer awe-inspiring enormity of the universe. It takes light eight minutes to travel from the sun to earth and four years for the light from the nearest star to reach our solar system. The light from more distant objects takes even longer to reach us, which in effect means that when we look at distant objects, we are looking at the universe as it was millions, sometimes billions of years ago.
In recent years, one of the most active fields of astronomy has been the study of 'black holes', which are formed when stars collapse at the end of their life cycle and have such a strong gravitational pull that nothing including light can escape. Every galaxy has a supermassive black hole at its centre, which was formed when the galaxy itself was born.
Measuring the universe
The EU-funded BHMASS project, completed in 2012, focused on 'quasars'. These are compact regions that scientists believe are at the very centre of massive galaxies. Project researcher Marianne Vestergaard thinks that the project will enable more accurate measurements of both black-hole masses and cosmic distances to be made in the future. The project also uncovered what could be the first generation of quasars: a cluster with smaller black holes without centrally located hot dust was discovered, indicating that star formation has yet to take place.
"Quasars reside at great distances, called cosmic distances, because they are an early evolutionary phase of galaxies," she explains. "So, by studying quasars, their black holes, and how they relate to their environments, we can learn about the physical processes that took place when galaxies were young."
The main aim of the project was to enable the astronomical community to study, at much higher precision than is currently possible, the role that black holes play in shaping the universe. Using quasars, the researchers developed a new method of measuring cosmic distances, independent from existing distance measurements. This breakthrough, claims Vestergaard, could have important implications for astronomy.
"Why are distance measurements important? Well, we use cosmic distances to map the universe and its expansion history. A century ago, such measurements told Edwin Hubble that the universe is not static but is expanding. Just 15 years ago, similar precision measurements using supernovae showed us that not only is the universe expanding, but it is expanding at an increasing rate."
Vestergaard says that the specific method developed by BHMASS is significant for several reasons. "First, it is important for the astronomical community to have another method to verify important measurements, because each method has its uncertainties, advantages and disadvantages. So the more independent methods we have to make these important measurements, the better."
"The second reason is that the method proposed by BHMASS can be applied to much longer distances than the current methods, offering a tool to probe into the very onset of the accelerated expansion and beyond. I think there is a real opportunity for this new method to have quite a lasting impact," continues Vestergaard.
Indeed, the researcher believes that the project has the potential to increase our understanding of how galaxies were formed, by identifying the first generation of quasars. "As quasars are born out of the densest concentrations of matter in the early universe, we can learn a lot about the first galaxies by studying these types of systems," she says. "Identifying the truly first galaxies is an important first step."
Inspiring future science stars
"Research like this can ultimately tell us where we came from and what our future will bring – profound questions which occupy us all at some level. More immediately, I hope that these results and the research that follows will bring excitement to the general public about the natural sciences, physics and astrophysics. I also hope it will engage more young people to be curious and motivate them to follow an education within the natural sciences," says Vestergaard.
In addition, the project brought home to her how important it is to think outside the box. "That is, to think little crazy thoughts once in a while on what is possible and what is not," she observes. "These projects and their results have opened up several new and important research paths that I'd like to follow up on, and I hope to be able to obtain additional funding in order to hire PhD students and postdocs to work on these new exciting studies. It has been fun but it isn't over yet!"
WEAVE A TALE ACTIVITY
“Story telling is the most powerful way to put ideas into the world.”
The ‘Weave a Tale’ activity was conducted for Grade II students. The objective of the activity was to build their confidence, sharpen their presentation skills and enrich their vocabulary. The students came up with striking stories, full of innovative ideas and a high level of imaginative skill. This activity gave them a platform to develop their writing skills. Judging was based on creativity, the time taken to narrate the story, and the presentation. Students enjoyed framing their own stories, as the activity gave them an opportunity to imagine and reflect individually.
Pre-1679: Various Native American tribes, including the Miami, Delaware, and Potawatomi, inhabit the region now known as Indiana.
1679: French explorer René-Robert Cavelier, Sieur de La Salle, claims the region for France, naming it La Louisiane in honor of King Louis XIV.
Late 18th century: The British gain control of the region after the French and Indian War. Native American tribes resist British control.
1783: The Treaty of Paris grants Indiana to the United States, ending British control and making it part of the Northwest Territory.
1800: The Indiana Territory is established, with its capital in Vincennes.
1816: Indiana becomes the 19th state of the United States on December 11.
Early 19th century: Indiana experiences rapid population growth, driven by settlers from the eastern United States. The state becomes known for its fertile farmland and abundant natural resources.
1830s: The forced removal of Native American tribes, particularly the Potawatomi, from Indiana takes place as part of the Indian Removal Act.
Mid-19th century: Indiana's economy expands with the growth of industries such as agriculture, manufacturing, and transportation. The state becomes an important hub for the railroad industry.
1861-1865: Indiana plays a significant role in the American Civil War, providing troops and supplies to the Union Army.
Late 19th century: Indiana experiences industrialization, with the growth of manufacturing, particularly in steel, automobiles, and petroleum products. Cities such as Indianapolis and Gary develop as industrial centers.
Early 20th century: Indiana embraces progressivism, enacting social reforms and improving workers' rights. The state becomes a major automotive manufacturing center.
Mid-20th century: Indiana's economy diversifies further, with the growth of pharmaceuticals, healthcare, and technology sectors. The state also becomes known for its sports culture, particularly basketball and auto racing.
Present: Indiana remains an important manufacturing and agricultural state, with a diverse economy. It is home to cultural landmarks such as the Indianapolis Motor Speedway and the Indiana Dunes National Park.
This timeline provides an overview of the major events in the history of Indiana, from its early Native American inhabitants to its statehood and industrial development. The state's contributions to agriculture, manufacturing, and sports have left a lasting impact on its culture and economy. |
How Do Sails Work: Sailing into the wind
Dinghies can’t sail directly into the wind. Instead, they have to zig-zag into it if they want to travel in that direction.
This all makes Olympic sailing on the TV very confusing for the non-sailor types!
But it also makes our sport far more strategic than it would be if we could just sail in a straight line everywhere.
So how do we sail into the wind?
The science-y part (sort-of)
What is fascinating is that we can sail into the wind at all. If you’re walking round your local pond and happen to throw a dry & shrivelled leaf into the water, the wind will catch it and blow it in the direction of the wind.
How that same wind could be harnessed to allow something to be blown towards it is, on the face of it, a bit mind-boggling!
How boats sail into the wind is hotly debated and the science is often simplified to help us lay-people comprehend it.
But if you had to sum it up in one word that word would be LIFT.
What is lift?
Lift is an invisible force created by air (or water) flowing around the surface of an object (such as an aeroplane wing).
Sails work similarly to aeroplane wings. Both use lift to get where they want to go.
As the pilots among you will know, planes like to take off into the wind. This creates more lift helping them “lift” off.
You can think of a dinghy just like an aeroplane turned on its side. The forces are the same.
The wings on a plane are shaped so one side is more curved than the other. The air particles hit the leading edge of the wing and are split, with some travelling over the longer, curved side and some travelling along the straighter side. This forces the particles travelling on the curved side to travel faster than their counterparts travelling on the straight side.
Then the famous Bernoulli’s law comes into play. In the 1700s, Monsieur Bernoulli theorised that this higher speed creates lower pressure on that side of the wing or sail.
And as higher pressure likes to move towards low pressure it applies a force to the sail. This force is lift.
But why does lift allow you to sail into the wind?
Lift creates a force that sucks the boat towards the curve of the sail. On its own, that force would mostly suck the boat to the side rather than forward.
But dinghies have wings above and below the water. That’s right, your centreboards have a purpose other than to give you something to stand on while righting after a capsize.
As the centreboard travels through the water it also creates lift. Like with the sail, that force is also mostly a sideways force with very little pushing the boat forward.
But here’s where the magic happens… it’s called the “squeezed pip effect”.
What is the squeezed pip effect?
Imagine holding a slippery lemon pip between your thumb and ring finger. Your fingers are applying equal pressure on both sides of the pip. But as you squeeze, the pip will shoot out in a different direction to the forces applied by your fingers.
In the same way, the opposing sideways forces of the sail and centreboard mean the boat is squeezed forwards.
This is because the sideways components of the sail’s and the board’s forces cancel each other out, leaving a net forwards force.
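If it helps to see the numbers, here is a minimal Python sketch (not from the original article) that adds together two invented force vectors, one for the sail and one for the centreboard, to show the sideways parts cancelling while the small forward parts add up. Every figure in it is an assumption chosen purely for illustration.

```python
import math

# Hypothetical forces in newtons, written as (forward, sideways) components.
# The sail pulls mostly sideways (to leeward) with a small forward part;
# the centreboard's lift pushes mostly the other way (to windward).
sail_force = (120.0, 400.0)    # assumed values for illustration only
board_force = (30.0, -380.0)   # assumed values for illustration only

# The net force on the hull is just the vector sum of the two.
forward = sail_force[0] + board_force[0]
sideways = sail_force[1] + board_force[1]

print(f"Net forward force:  {forward:.0f} N")   # 150 N driving the boat on
print(f"Net sideways force: {sideways:.0f} N")  # 20 N left over as leeway/heel
print(f"Resultant: {math.hypot(forward, sideways):.0f} N")
```

The point of the toy numbers is that the big sideways pulls nearly cancel, and what is left over points forwards: the squeezed pip in action.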
The less science-y part
Ok, science lesson over. Here are the answers to some queries you might have about sailing into the wind.
What happens if you sail too close to the wind?
If a boat sails too close to the wind the sails lose their lift as they start to flap. This disrupts the airflow vital for lift. This angle and any angle closer to the wind is appropriately termed the “No Go Zone”.
What is close-hauled sailing?
Sailing close-hauled is the term used to describe the point of sailing that is the closest angle to the wind you can sail while still generating maximum lift.
What angle can you sail the wind?
The angle that you can sail towards the wind varies from boat to boat. The range is between about 30 and 50 degrees off the eye of the wind. If you set your sails well or are sailing a multi-sailed keelboat, then you’ll be able to point near the 30-degree angle. But if you’re a novice sailing a single-sail dinghy you’ll be pointing a lot lower. These differing angles make for interesting handicap racing!
How does tacking help us sail into the wind?
Tacking is simply the term for turning the boat across the no-go-zone. Going from close-hauled on one tack to close-hauled on the other allows us to reach any point into the wind.
Hopefully this helps explain how those expensive pieces of cloth work.
For more dinghy racing tips click here. |
A research team has created nano sponges that can effectively filter organic pollutants from water.
Researchers Changxia Li and Freddy Kleitz from the Faculty of Chemistry at the University of Vienna in Austria built their own material for the filters.
Using a combination of a highly porous covalent organic framework (COF) and graphene they found they could effectively scrub pollutants away.
The new material is efficient in filtering organic pollutants, such as organic dyes, which are usually soluble in water, non-degradable and sometimes even carcinogenic, as reported by the university on August 1, 2022.
The first author of the study and postdoctoral scientist, Changxia Li reported: “There are various ways today, including activated carbon filters, to purify water, but there is still room for improvement in terms of the efficiency or adsorption capacity of the applications.”
According to a university statement, porous materials have a much larger total surface area in comparison to non-porous ones, and can consequently attach particularly large numbers of molecules to the surfaces during adsorption.
COFs, which are a novel class of particularly porous materials, are furthermore characterized by low density and low weight.
The researchers reported: “We have developed a method to form COF in a comparatively environmentally friendly way using water and were able to use it to design small ‘sponges’ with special pore sizes and pore shapes in the nanometer range as well as a coordinated negative surface charge that very selectively attracts the positively charged target molecules, i.e. our dyes, from the water.
“Just like the sponge absorbs the water, only here it’s the pollutants.”
According to the researchers when using COF powder, the inner pores of the material are no longer available to the pollutants as the outer edge pores are clogged, particularly with large pollutant molecules.
The university stated: “The novel composite material developed offers a consistently permeable structure: To do this, the researchers grew COF on thin nano-layers of graphene.
“The combination of graphene – in itself a 2D layer of carbon atoms – and the up to two nanometer thick layer of COF resulted in a compact, open 3D structure.
“The ultra-thin COF layer could expose more adsorption sites than the loose COF powder.”
In addition, the researchers said: “The large pores of the graphene network in combination with the ultra-thin COF layer and its large number of adsorption sites therefore enable particularly fast and efficient wastewater treatment.”
The relatively small amount of material necessary for graphene and the prospect of reusing the composite material as a filter makes the development of these nano sponges rather inexpensive.
The study was published in the weekly peer-reviewed scientific journal “Angewandte Chemie”. |
Background: Ancient Peru was the seat of several prominent Andean civilizations, most notably that of the Incas whose empire was captured by the Spanish conquistadors in 1533. Peruvian independence was declared in 1821, and remaining Spanish forces defeated in 1824. After a dozen years of military rule, Peru returned to democratic leadership in 1980, but experienced economic problems and the growth of a violent insurgency. President Alberto FUJIMORI's election in 1990 ushered in a decade that saw a dramatic turnaround in the economy and significant progress in curtailing guerrilla activity. Nevertheless, the president's increasing reliance on authoritarian measures and an economic slump in the late 1990s generated mounting dissatisfaction with his regime, which led to his ouster in 2000. A caretaker government oversaw new elections in the spring of 2001, which ushered in Alejandro TOLEDO Manrique as the new head of government - Peru's first democratically elected president of Native American ethnicity. The presidential election of 2006 saw the return of Alan GARCIA Perez who, after a disappointing presidential term from 1985 to 1990, has overseen a robust macroeconomic performance. |
A zettabyte describes a unit of data or information that is equal to 1,000,000,000,000,000,000,000 bytes. The term is equal to a billion terabytes, and the symbol ZB represents it. The zebibyte (ZiB) is the corresponding measurement of data using powers of 1,024. A single zettabyte is equivalent to 1,000 exabytes (while one zebibyte equals 1,024 exbibytes) and precedes the yottabyte unit of measure. Due to the zettabyte’s large size, it is rarely used in industry to describe data storage or network throughput capacity.
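As a quick illustration of the decimal and binary definitions above, the short Python sketch below (not part of the original text) converts a raw byte count into zettabytes and zebibytes.

```python
# Decimal (SI) and binary (IEC) sizes of the two units, in bytes.
ZETTABYTE = 10 ** 21   # 1 ZB  = 1,000,000,000,000,000,000,000 bytes
ZEBIBYTE = 2 ** 70     # 1 ZiB = 1,180,591,620,717,411,303,424 bytes

def to_zettabytes(num_bytes: int) -> float:
    """Convert a byte count to zettabytes (decimal definition)."""
    return num_bytes / ZETTABYTE

def to_zebibytes(num_bytes: int) -> float:
    """Convert a byte count to zebibytes (binary definition)."""
    return num_bytes / ZEBIBYTE

# A billion terabytes is exactly one zettabyte, but a bit less than one zebibyte.
total = 10 ** 9 * 10 ** 12
print(f"{to_zettabytes(total):.3f} ZB")   # 1.000 ZB
print(f"{to_zebibytes(total):.3f} ZiB")   # about 0.847 ZiB
```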
Examples of Zettabyte Use
The total amount of data stored globally now exceeds a single zettabyte of information. To put the size of the term in perspective, all human speech ever spoken would occupy approximately 42 zettabytes if digitized as 16 kHz, 16-bit audio. Another way to look at the sheer volume of a zettabyte is to examine the amount of data sent on Twitter: at the past year’s data rate, it would take approximately 100 years for Twitter to generate a zettabyte of data. With the exponential growth of network throughput and data storage technology, however, consumers may come to perceive a zettabyte much as the gigabyte was perceived just 15 years ago.
Slightly smaller than the American Coot (Fulica americana), the Common Gallinule is most easily identified by its brown back, dark gray breast, and red “shield” on the forehead. Other field marks include a yellow-tipped red bill, dull green legs, and white flanks. Male and female Common Gallinules are similar to one another in all seasons. The Common Gallinule breeds in scattered locations throughout the eastern United States and southern Canada, with smaller numbers breeding in the western U.S. In winter, birds breeding in the northeast migrate south and to the coast. Most western birds, as well as those breeding along the coast of the southeastern U.S. and populations breeding further south in Mexico, Central America, South America, and the West Indies, are non-migratory. Common Gallinules breed in relatively deep freshwater or brackish marshes. This species utilizes similar habitat types in winter as in summer. Common Gallinules primarily eat seeds and other plant matter, but may also eat snails and other animal matter, especially during the warmer months. Common Gallinules may be observed feeding by picking seeds off of the surface or by submerging their heads to feed on underwater plants. This species may also be observed walking on the shore or running along the surface of the water while attempting to become airborne. Common Gallinules are primarily active during the day; however, this species does migrate at night.
Rights Holder: Unknown
Bibliographic Citation: Rumelt, Reid B. Gallinula galeata. June-July 2012. Brief natural history summary of Gallinula galeata. Smithsonian's National Museum of Natural History, Washington, D.C. |
This is an independent activity to practice and maintain maths knowledge and skills. It is a simple activity where dice are rolled and the numbers on the dice are added, subtracted, multiplied or divided together. It can be easily adjusted to suit all ability levels.
A beginning-of-the-year unit of work to find out about our class. Its main focus is statistics, with number and measurement forming part of the unit as well. It also provides opportunities for digital technology, health and PE, and literacy.
For this unit students are statisticians and researchers finding interesting things about their class. Data is collected and analysed with predictions and questions investigated and communicated. |
We are now using the Sir Linkalot app to help children to practise spellings.
Each day, pupils will have access to daily Sir Linkalot time at school. At home, they can access this too! (see below). Together, we will build up their bank of tricky words, homophones and spelling conventions. Pupils will learn mnemonics, rhymes and patterns, alongside other spelling strategies, to recall and retain these words. They also explore words at greater depth, looking at the morphology and etymology of words.
Pupils in Y5 will learn 10 words per week. They will be assessed every Friday. Your child will be told whether they are on the Jupiter or Mars spellings (available in the usual spot on our Class Page).
For LOGIN Details, see image below:
Spelling Practice Activities
How to work out tricky words
Spelling play - route to spelling interactive resource |
As families around the country shelter in place, parents of preschoolers can help build their child’s speech and language skills during everyday activities at home.
Strong speech and language skills are key to kindergarten readiness and a precursor for reading, writing, and social success. Below are some key communication skills for children ages 3–5, and suggestions for how parents can help their preschoolers:
Teach or reinforce ways to follow directions throughout the day. Get your child’s attention, make sure they are looking at you, and go over the steps you take when getting dressed, washing hands, brushing teeth, or cleaning up toys. You can even create a picture or sign with the list of steps for common daily tasks. Some easy at-home practice opportunities include the following:
Young children love music. Singing nursery rhyme songs like Row, Row, Row Your Boat and Wheels on the Bus teaches them about different sounds and words. Singing songs and hearing rhymes will help children learn to read.
The more words a child is exposed to, the more words they’ll know! Keep the conversation going all day long, regardless of your activity. Some great vocabulary-building opportunities include the following:
Set the stage for a story by naming a place, character(s), and activity. Encourage your child to create a story from those details and to make up adventures for each character. The funnier or wilder, the better.
You can also pick a familiar book and have them describe how the characters feel. Magazines and newspapers are also great for this purpose. Make up a story about a picture and describe what happens. Role-play the stories by pretending to be the characters.
Help children to express their own feelings and to talk about how others might be feeling. Some ideas include the following:
Sequencing is breaking down something (e.g., a task or story) into steps or parts—and then putting them in a logical order. Ask your child to select a favorite book. Read it together, and then talk about it. What came first? Next? Last? Have them draw a picture to show you. As you read, you can also ask them what they think will happen next—or what they think the story is about before you read by looking at the cover (this is called prediction).
Children use many tactics to get their way. Although these tactics may include crying or whining, you can help them learn to persuade with their words. Have them draw a picture of their favorite book and tell you about it—are they able to convince you to read it? Or if they want to watch a TV show or movie, ask them to persuade you—to give you good reasons why they should get to watch the show.
Schedule a call (or video chat, if you can) between your child and their grandparent, other family members, or a friend to talk about their daily activities or a book they’ve read. Can your child talk briefly about the highlights of the day or the main events in a book? |
When seizures are determined to be caused by epilepsy, the first line of treatment is usually medication. There are more than 20 different anti-seizure medicines available. Some may work better for certain types of seizures than for others, and all have side effects. (1,2)
The goal is to strike a balance between the upside of fewer seizures — and better quality of life — and the downside of bothersome medication side effects.
If medication proves ineffective at controlling seizures, other treatments may be required, such as epilepsy surgery, dietary changes, vagus nerve stimulation (VNS) therapy, or responsive neurostimulation. (3)
But before you stop taking anti-seizure medication for any reason — including continued seizures, unacceptable side effects, or any other reason — talk to your doctor about stopping the drug or changing therapies. Don’t stop an anti-seizure drug on your own.
Normally, anti-seizure drugs are tapered — taken in progressively smaller doses — before they are stopped entirely. Abruptly stopping a medication raises the risk of withdrawal seizures.
Medication for Epilepsy
Usually, a person with epilepsy will be started on one medication (monotherapy) at a low dose, and then the dosage will be gradually increased to find the proper dose for that person. This is done to try to minimize side effects. Almost half of people with epilepsy become seizure-free with monotherapy.
Side effects from anti-seizure drugs (also called anti-epileptic drugs, or AEDs) are common, often leading to a reduced quality of life in people with epilepsy. Drowsiness, dizziness, double vision (diplopia), and impaired balance are common problems with all classes of anti-seizure medication.
Other side effects are more specific to individual drugs, but common side effects can include difficulty concentrating, nausea, tremors, rash, weight gain or loss, and suicidal thoughts.
Some people are eventually able to stop anti-seizure medication, but the ability to do so varies with age and type of seizure. One study showed that 75 percent of people who had been seizure-free for three years could discontinue medication without having more seizures. (1)
For about 1 out of 3 people with epilepsy, seizures are not controlled by medication. These people are referred to as having drug-resistant or “refractory” seizures.
Because using multiple anti-seizure drugs can lead to severe side effects, other treatments are often tried for refractory seizures.
Broad-Spectrum Anti-Seizure Drugs
These drugs are used to treat a broad range of seizure types, including both focal and generalized onset seizures:
- rufinamide (Banzel)
- brivaracetam (Briviact)
- valproic acid (Depakene)
- felbamate (Felbatol)
- perampanel (Fycompa)
- levetiracetam (Keppra)
- lamotrigine (Lamictal)
- clobazam (Onfi)
- topiramate (Topamax)
- zonisamide (Zonegran)
Narrow-Spectrum Anti-Seizure Drugs for Focal Seizures
These drugs are used for focal seizures, even if they evolve to generalized seizures:
- eslicarbazepine (Aptiom)
- phenytoin (Dilantin)
- tiagabine (Gabitril)
- phenobarbital (Luminal)
- pregabalin (Lyrica)
- gabapentin (Neurontin)
- vigabatrin (Sabril)
- carbamazepine (Tegretol)
- oxcarbazepine (Trileptal)
- lacosamide (Vimpat)
Narrow-Spectrum Anti-Seizure Drugs for Generalized Absence Seizures
The drug ethosuximide (Zarontin) is used for absence seizures only.
Cannabis-Based Anti-Seizure Medication
Cannabidiol (Epidiolex), a medication made from cannabidiol (CBD), a chemical present in the Cannabis sativa (marijuana) plant, was approved in 2018 by the Food and Drug Administration (FDA).
CBD is not the chemical in marijuana that produces the “high”; that’s tetrahydrocannabinol (THC).
As of July 2018, cannabidiol has been approved for refractory seizures in patients older than 2 years caused by the childhood epilepsy conditions Dravet syndrome and Lennox-Gastaut syndrome. It is the first cannabis-based medication to be approved by the FDA, and also the first medication approved for Dravet syndrome. (4)
Surgeries for Epilepsy
Epilepsy surgery is the only option with the potential to cure refractory seizures, but some people may not be good candidates for surgery, or they may not want surgery.
Any kind of surgery carries a level of risk, and brain surgeries can cause damage to surrounding tissue that can cause changes to a person’s cognitive (thinking) ability, or even to their personality.
Brain surgeries for epilepsy are usually only considered if the person has tried and not seen improvement from at least two anti-seizure drugs, and if there is an identifiable cause of the seizures.
There are a few main types of brain surgery for epilepsy: focal resection, corpus callosotomy, multiple subpial transection, and hemispherectomy. (5,6)
Focal Resection Also known as lobectomy and lesionectomy, focal resection is the removal of the section of the brain where the seizures originate. This type of surgery is most likely to be successful at stopping seizures if doctors have identified a small and precise area of the brain where seizures originate, called the seizure focus.
Corpus Callosotomy This type of surgery involves cutting the connections between the right and left halves (hemispheres) of the brain. Because generalized seizures are often focal (partial) seizures that then spread to both hemispheres of the brain, this surgery effectively keeps the seizure in the half of the brain where it started, so that only half the body is affected. Still, some people experience a worsening of focal seizures after this procedure.
Multiple Subpial Transection In this type of surgery, multiple cuts are made into the brain tissue to disrupt the electrical transmissions that cause seizures. This kind of surgery is performed if the seizing part of the brain cannot be removed.
Hemispherectomy and Hemispherotomy In an anatomical hemispherectomy, the affected brain hemisphere is surgically removed. In a functional hemispherectomy, less brain tissue is removed, and the remaining brain is disconnected from the other hemisphere (as in corpus callosotomy). In a hemispherotomy, even less brain tissue is removed, and the affected brain is disconnected from the healthy brain.
If surgery is the appropriate course of treatment, experts recommend that it be performed sooner rather than later. Surgery can be an important and, some say, underutilized, treatment for people with drug-resistant focal epilepsy.
Dietary Changes Recommended for Epilepsy
Some dietary changes have been found helpful in reducing seizures. Most of them involve decreasing the amount of carbohydrates (sugars and starches) in the diet. It’s not yet clear how these diets help to reduce seizures.
Examples of diets tried for epilepsy include:
- The ketogenic diet
- The medium-chain triglyceride diet
- The modified Atkins diet
- Low-glycemic-index diet
Reducing carbohydrate intake causes the body to burn more fat for energy, and when the body burns fat, acids called ketones are produced. Having a higher-than-normal amount of ketones in the bloodstream is known as ketosis. As long as there’s enough insulin available, the body can use ketones for energy.
Low-carbohydrate diets are also associated with lower glucose and insulin levels in the blood.
Researchers are unsure whether seizure improvement comes about because of changes caused by ketosis, by the presence of more fatty acids in the blood, or because there are fewer fluctuations in blood glucose levels.
All of these diets are best learned under the care of a physician and nutritionist. Strict diets like the ketogenic diet may begin with a brief admission to the hospital for monitoring and teaching, followed by ongoing assessment of laboratory levels. (7)
Implanted Devices Used in Epilepsy Treatment
Implanted nerve stimulation devices represent another option for treating seizures that are not controlled with medication.
Vagus Nerve Stimulator Vagus nerve stimulators were approved by the FDA in 1997. The device is surgically implanted under the skin of the chest, and electrodes connect the device to the left vagus nerve in the neck. The device sends regular pulses of electricity to the brain to control abnormal electrical activity in the brain. Although vagus nerve stimulation may reduce seizures by 20 to 40 percent, people who use them will usually also need to keep taking medication. (3)
Responsive Neurostimulation In responsive neurostimulation, a closed-loop system analyzes brain activity patterns and then delivers a shock if it detects that a seizure is coming. One of the first such systems, the NeuroPace, was approved by the FDA in late 2013. The battery-powered device is surgically implanted in the skull, and wires connected to the device are placed on the surface of the brain or inside the brain area where seizures originate. (8)
Deep Brain Stimulation (DBS) The Medtronic DBS System for Epilepsy was approved by the FDA in April 2018. The pulse generator portion of the device is implanted in the chest, and two wires lead to a seizure focus in the brain. The device controls seizures by delivering ongoing electrical pulses to that area. The device is used in adults with focal seizures who have more than six seizures a month and who have not seen good results with at least three drugs. Certain medical procedures, including magnetic resonance imaging (MRI), cannot be performed with a DBS system in place, or permanent brain damage can occur. (9)
Editorial Sources and Fact-Checking
- The Epilepsies and Seizures: Hope Through Research. National Institute of Neurological Disorders and Stroke. July 25, 2022.
- Initial Treatment of Epilepsy in Adults. UpToDate.com. August 2022.
- Vagus Nerve Stimulation (VNS) Therapy. Epilepsy Foundation. March 12, 2018.
- FDA Approves First Drug Comprised of an Active Ingredient Derived From Marijuana to Treat Rare, Severe Forms of Epilepsy. U.S. Food and Drug Administration. June 25, 2018.
- Epilepsy Surgery. Mayo Clinic. January 8, 2021.
- Types of Epilepsy Surgery. Epilepsy Foundation. October 15, 2018.
- Ketogenic Diet. Epilepsy Society. April 2019.
- Epilepsy. UCSF Weill Institute for Neurosciences.
- FDA Approval: Medtronic Deep Brain Stimulation for Medically Refractory Epilepsy. Epilepsy Foundation. May 1, 2018. |
A Perfect Day In Spring, from Songs For EVERY Spring Assembly, celebrates the fresh beauty of springtime and is great fun to sing! This cheerful song encourages a multi-sensory appreciation of the natural world and energizes the senses.
The feel-good factor of this song goes beyond the joyful lyrics and bright melody. Singing A Perfect Day In Spring also creates a great opportunity for practising mindfulness and gratitude by taking time to notice and appreciate nature. The benefits of this are widely reported: research shows that gratitude reduces anxiety and that cultivating an ongoing attitude of thankfulness can protect and stabilize good mental health in young minds; mindfulness is the intentional act of focussing on the present in a gentle and non-judgemental way, with numerous studies showing that this can help with stress, anxiety and depression. Exercising the senses is a great way of practising mindfulness and makes this concept accessible for children of all ages.
You could use this song to introduce the idea of mindfulness and gratitude, then get the children outside for a multi-sensory exercise. Taking cues from the lyrics, children can be guided through the ways that we can experience spring through our senses:
Start with a simple breathing exercise to calm busy minds and bodies. Ask the children to draw a figure of eight (on its side) slowly with their finger. On the first loop, breathe in; on the second loop, breathe out. Repeat five times.
Ask the children to take three deep breaths and really notice what they can smell. After hearing some of their answers aloud, repeat this exercise and see if they can notice a new scent that someone else has mentioned.
Use a percussion instrument such as a triangle or miniature cymbal to indicate the start and end of a time of listening that is an achievable length for the age group you are working with. Find out what the children heard during this time and talk about these sounds.
Ask the children to find a space and look around them for a moment. Can they come up with three things that they can see in nature and describe the colours?
Find some flowers, trees or even grass that the children can touch. Ask them to describe how these things feel – are they rough or smooth, cold or warm, strong or soft? Can they spot a detail that they’ve never noticed before?
Ask the children to notice how they are feeling after this mindfulness exercise. Can they name one thing they are grateful for?
Pausing to recognize what the senses are experiencing is an exercise that children can do anywhere at any time. It equips them with a powerful tool to combat stress and anxiety – a tool that they can continue using throughout their lives. It teaches them that making time to stop and check in with their bodies, thoughts and feelings is valuable, and that connecting with the natural world around us feels good.
As an expansion activity, children could have a go at these spring-themed painted pebbles. This pebble could become a prop for mindfulness exercises. Holding a pebble provides a tangible item to focus on, bringing attention to the senses, starting with touch. Having the pebbles in the classroom or at home can also act as a physical reminder to practise mindfulness.
Even without these mindfulness exercises, simply singing A Perfect Day In Spring combines the stress-busting power of gratitude, the mood-boosting effects of nature and the feel-good factor of singing together, releasing endorphins that are sure to lift the spirits of children and adults alike! Find out more about our Songs For EVERY Spring Assembly here.
Keep up to date with all the latest from Out of the Ark Music by signing up to our eNewsletter and subscribing to our blog here. |
Unusually low temperatures in the Arctic ozone layer have recently initiated massive ozone depletion. The Arctic appears to be heading for a record loss of this trace gas that protects Earth's surface against ultraviolet radiation from the sun. This result has been found by measurements carried out by an international network of over 30 ozone sounding stations spread all over the Arctic and Subarctic and coordinated by the Potsdam Research Unit of the Alfred Wegener Institute for Polar and Marine Research in the Helmholtz Association (AWI) in Germany.
"Our measurements show that at the relevant altitudes about half of the ozone that was present above the Arctic has been destroyed over the past weeks," says AWI researcher Markus Rex, describing the current situation. "Since the conditions leading to this unusually rapid ozone depletion continue to prevail, we expect further depletion to occur." The changes observed at present may also have an impact outside the thinly populated Arctic. Air masses exposed to ozone loss above the Arctic tend to drift southwards later. Hence, due to reduced UV protection by the severely thinned ozone layer, episodes of high UV intensity may also occur in middle latitudes. "Special attention should thus be devoted to sufficient UV protection in spring this year," recommends Rex.
Ozone is lost when breakdown products of anthropogenic chlorofluorocarbons (CFCs) are turned into aggressive, ozone destroying substances during exposure to extremely cold conditions. For several years now scientists have pointed to a connection between ozone loss and climate change, and particularly to the fact that in the Arctic stratosphere at about 20 km altitude, where the ozone layer is, the coldest winters seem to have been getting colder and leading to larger ozone losses. "The current winter is a continuation of this development, which may indeed be connected to global warming," atmosphere researcher Rex explains the connection that appears paradoxical only at first glance. "To put it in a simplified manner, increasing greenhouse gas concentrations retain Earth's thermal radiation at lower layers of the atmosphere, thus heating up these layers. Less of the heat radiation reaches the stratosphere, intensifying the cooling effect there." This cooling takes place in the ozone layer and can contribute to larger ozone depletion.
"However, the complicated details of the interactions between the ozone layer and climate change haven't been completely understood yet and are the subject of current research projects," states Rex. The European Union finances this work in the RECONCILE project, a research programme supported with 3.5 million euros in which 16 research institutions from eight European countries are working towards improved understanding of the Arctic ozone layer.
In the long term the ozone layer will recover thanks to extensive environmental policy measures enacted for its protection. This winter's likely record-breaking ozone loss does not alter this expectation. "By virtue of the long-term effect of the Montreal Protocol, significant ozone destruction will no longer occur during the second half of this century," explains Rex. The Montreal Protocol is an international treaty adopted under the UN umbrella in 1987 to protect the ozone layer and for all practical purposes bans the production of ozone-depleting chlorofluorocarbons (CFCs) worldwide today. CFCs released during prior decades however, will not vanish from the atmosphere until many decades from now. Until that time the fate of the Arctic ozone layer essentially depends on the temperature in the stratosphere at an altitude of around 20 km and is thus linked to the development of earth's climate.
Dating to roughly 8200 BCE, the Olsen-Chubbuck Bison Kill Site in Cheyenne County preserves evidence of a Paleo-Indian kill of more than 190 bison. The site was named for the amateur archaeologists Jerry Chubbuck and Sigurd Olsen, who discovered and partially excavated the site in 1957–58 before turning over excavations to a University of Colorado Museum team headed by archaeologist Joe Ben Wheat. The mass kill preserved at the site demonstrates techniques that Native Americans used to hunt bison on the plains for more than 10,000 years.
Discovery and Excavation
The Olsen-Chubbuck Site is in an old arroyo about thirteen miles southwest of Cheyenne Wells and sixteen miles southeast of Kit Carson in Cheyenne County. It lay under land owned by rancher Paul Forward until the late 1950s, when erosion caused by several years of drought revealed a clear outcropping of bones. On December 8, 1957, Jerry Chubbuck noticed the outcropping while driving by. A quick investigation yielded a Paleo-Indian projectile point and an end-scraper. Chubbuck notified Joe Ben Wheat of his find, but Wheat was busy with another dig and could not immediately visit the site. In the meantime, Chubbuck and Sigurd Olsen of nearby Kit Carson began to excavate. In addition to the bone bed, which soon yielded fifty skulls, they found human artifacts including two dozen projectile points or point fragments and several stone tools.
In April 1958 Wheat visited the site, recognized its importance, and asked Chubbuck and Olsen to relinquish their digging permit. They did, allowing Wheat and the University of Colorado Museum to excavate the site thoroughly in the summers of 1958 and 1960.
As the excavation proceeded, the shape and extent of the bison bone bed gradually became clear. The bone bed occupied an old arroyo channel that cut across the locally normal drainage pattern and was probably formed from an eroded bison trail. At its narrow end it was one to three feet wide and one to three feet deep, and it grew to maximum dimensions of about fifteen feet wide and seven feet deep. The bone bed stretched for roughly 170 feet within the arroyo, with an average width of five feet and a maximum depth of six and a half feet. The bones in the arroyo had apparently created a natural dam that trapped sediments in runoff water until the bones were completely covered and the arroyo filled in.
Killing and Carving Techniques
Wheat called the arroyo “a puny trap for a bison herd,” but it seems clear that Paleo-Indians used the arroyo as a natural trap for a stampeding herd of the extinct species Bison occidentalis, which was about 25 percent to 33 percent larger than modern bison. Based on bone orientation in the arroyo, the herd was driven from northwest to southeast. When the herd hit the arroyo, many fell in and were trampled or suffocated. The Paleo-Indians would have killed any surviving bison still trapped in the arroyo. Skeletal remains indicate that the kill probably occurred in late May or early June.
At least 190 bison died in the arroyo. After the successful kill, the Paleo-Indians responsible for the stampede butchered the bodies. The butchering process was clearly organized, resulting in nine distinct piles of bones arranged in the order the animals were butchered: front-leg units on the bottom, then pelvic-girdle units, rear-leg units, and vertebral-column units, with skulls on top. Below the butchered piles were layers of complete and partially complete skeletons at the bottom of the arroyo, too deep for the Native Americans to extract.
Wheat used evidence from the kill to estimate the size of the Paleo-Indian group involved in the attack. He calculated that the animals killed and butchered would have yielded about 60,000 pounds of meat plus an additional 9,000 pounds of tallow and internal organs. To butcher the animals in a timely fashion, eat some of the meat, and carry the rest would have required a group of 150–200 people. If the group had dogs to help consume and carry the meat, its size could have been smaller, perhaps 75–100 people and 100 dogs. Wheat classified the people involved with the kill as the Firstview complex, named after a small town just north of the site. |
Throughout the 1700s, the turnpike system spread throughout Britain, charging travellers a toll (fee) at different points along its roads to pay for maintenance and improvement. This tollhouse is in Oborne, Dorset. Often situated in isolated areas of the country, the toll collectors needed a good view from these houses in case of attack from thieves or protestors.
The advent of steam hauled railways in the 1820s quickly revolutionised passenger travel and the transport of goods across Britain and the wider world. This is an early train ticket for a journey from Liverpool to Warrington.
John McAdam revolutionised road travel in the 1800s, through his ‘Macadamisation’ method. The greatest advance in road construction since Roman times, his principles are still applied to road building today.
Travelling in Europe was very popular among the British nobility, gentry, and professionals of the 1700s and 1800s. It became traditional for upper class men and women to embark on a lengthy ‘Grand Tour’ of Europe, where they would experience the languages and history of the continent while showing off their own status and wealth. It was also popular with British artists, writers and thinkers of the time, keen to broaden their experience and exchange ideas – particularly with their counterparts and the new celebrities and centres thrown up by the upheavals of revolution.
In 1789, a coal ship named Adventure ran aground at the mouth of the River Tyne during a violent storm. The sea was too rough for the local boats and nothing could be done to save the thirteen-man crew. This tragic loss prompted a competition to design a new type of boat, that could carry 24 people and was suitable for rescues in rough and stormy seas. The result was the first ‘Life-Boat’.
This is a painting of an Australian kangaroo by the artist George Stubbs. This was the first time people in Britain had seen such a creature.
This is the first steam powered railway engine to run on a public railway. It was designed by George Stephenson and sparked a transport revolution that transformed the lives and fortunes of people across Britain and the wider world.
Isambard Kingdom Brunel’s SS Great Britain is one of Britain’s most important ships. By combining size, power and innovative technology, Brunel revolutionised sea travel and paved the way for modern ship design.
In the late 1700s Captain James Cook (1729 – 1779) led three now legendary voyages to explore the Pacific Ocean in ships named Endeavour, Resolution, Adventure and Discovery. To people living in Britain at the time, the Pacific was as mysterious and unreachable as outer space is to ordinary people today. With his crew Cook voyaged further south than any European before him and brought back new knowledge to Britain of the seas, lands, peoples, plants and animals they encountered. The three voyages transformed knowledge and understanding among Europeans about the wider world and its people.
In 1805 large tracts of the continent of Africa remained unknown in Britain and Europe. Mapmakers were quick to draw on new information, newly sophisticated measuring instruments, and new consumer interest in the wider world. Their efforts produced maps that were more scientific and objective on one level, but remained deeply coloured by their biases and preconceptions. |
For a child to be happy and develop well, the role that parents play is critical. They are their children’s first heroes and friends. Children learn almost all of their first lessons and concepts of the world around them from their parents. This is why parents should understand how to put their children’s creativity and mental abilities to positive use. One of the easiest ways to do this is to encourage children to color at a young age. Children who begin coloring when young tend to have fewer mental problems than those who don’t. They are also better writers and artists, have fuller imaginations, and learn important life lessons and values more easily.
Children who have a strong and active imagination possess a powerful tool that will serve them well throughout their lives. Coloring books and coloring pages are a terrific tool for getting started, because they develop and encourage the creativity lying latent in the child. Children are able to imagine how a picture might look in different color combinations, and this simple act has the power to produce a powerful and flexible mind.
Besides helping children to develop their imaginations, parents can use the stories behind the coloring pages to teach their kids practical lessons. While the children are enjoying the process of coloring the pictures, they can be taught values that are important for them to grasp at a young age. Lessons from the real world can be taught at the same time. When parents take the time to tell their children stories as they color, the child’s imagination is further strengthened and enhanced.
Many parents already know that coloring pages help develop their children’s artistic abilities. They may not be aware of the ways in which coloring pages can also help their little ones improve their writing skills. As children practice coloring, their ability to stay inside the lines improves over time. This focused capacity to control pencils or crayons precisely is an important step toward being able to hold a writing instrument still and steady when it is time to draw the letters of the alphabet. Artistic abilities are cultivated and strengthened in children who do well at coloring pages. Besides this, they will find it much easier to start writing the letters of the alphabet when the time comes.
Children who use coloring pages are also likely to improve their ability to concentrate. Being able to concentrate is an important skill for children to have, and the earlier they start developing it, the better. Focusing on a drawing on a page does much for young children. They learn to be patient as they take their time applying colors to the images in the coloring book. Problems with hyperactivity and attention deficit disorders, including attention deficit hyperactivity disorder (ADHD), are diminished, some psychologists believe, in children who devote a good portion of their time to coloring pages.
Finally, children who are involved with coloring pages will most likely experience fewer psychological problems when they are young. The reason for this is that the minds of children who are able to enjoy using their imaginations to create exciting worlds full of fantasy and adventure are strong and flexible. This creative outlet helps them avoid problems like childhood depression.
From a psychological viewpoint, the benefits to children of practicing on coloring pages can hardly be overstated. It is important to start children on easy images so that they experience a sense of achievement. As their coloring skill improves, they can be given more complex patterns and images to color. Giving children books and pages to color is a worthwhile way to help them become better artists and writers, to enable them to concentrate better, to lessen the likelihood that they will suffer from mental problems, and to teach them important life lessons and values. Children who begin coloring at a young age will enjoy the benefits it can bestow for years to come. |
You may encounter situations in which you have a three-dimensional solid shape and need to figure out the area of an imaginary plane inserted through the shape and having borders defined by the boundaries of the solid.
For example, if you had a cylindrical pipe running under your home measuring 20 meters (m) in length and 0.15 m across, you might want to know the cross-sectional area of the pipe.
Cross sections can be perpendicular to the orientation of the axes of the solid if any exist. In the case of a sphere, any cutting plane through the sphere regardless of orientation will result in a disk of some size.
The area of the cross-section depends on the shape of the solid determining the cross-section's boundaries and the angle between the solid's axis of symmetry (if any) and the plane that creates the cross section.
Cross-Sectional Area of a Rectangular Solid
The volume of any rectangular solid, including a cube, is the area of its base (length times width) multiplied by its height: V = l × w × h.
Therefore, if a cross section is parallel to the top or bottom of the solid, the area of the cross-section is l × w. If the cutting plane is parallel to one of the two sets of sides, the cross-sectional area is instead given by l × h or w × h.
If the cross-section is not perpendicular to any axis of symmetry, the shape created may be a triangle (if placed through a corner of the solid) or even a hexagon.
Example: Calculate the cross-sectional area of a plane perpendicular to the base of a cube with a volume of 27 m³.
Since l = w = h for a cube, any one edge of the cube must be 3 m long (since 3 × 3 × 3 = 27). A cross-section of the type described would therefore be a square 3 m on a side, giving an area of 9 m².
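For readers who like to check such figures programmatically, here is a minimal Python sketch of the rectangular-solid case (an illustration only; the function name and structure are not from the original text).

```python
def rect_cross_section(length, width, height, plane="top"):
    """Cross-sectional area of a rectangular solid for a cut made
    parallel to one of its three pairs of faces."""
    if plane == "top":      # parallel to the top or bottom: l x w
        return length * width
    if plane == "front":    # parallel to the front or back: l x h
        return length * height
    if plane == "side":     # parallel to the left or right side: w x h
        return width * height
    raise ValueError("plane must be 'top', 'front' or 'side'")

# Cube of volume 27 m^3: each edge is the cube root of 27, i.e. 3 m.
edge = 27 ** (1 / 3)
print(round(rect_cross_section(edge, edge, edge, plane="top"), 2), "m^2")  # 9.0 m^2
```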
Cross-Sectional Area of a Cylinder
A cylinder is a solid created by extending a circle through space perpendicular to its diameter. The area of a circle is given by the formula πr², where r is the radius. It therefore makes sense that the volume of a cylinder would be the area of one of the circles forming its base multiplied by its height.
If the cutting plane is perpendicular to the axis of symmetry (that is, parallel to the circular base), the cross-section is simply a circle with an area of πr². If the cutting plane is inserted at a different angle, the shape generated is an ellipse. The area uses the corresponding formula: πab (where a is the longest distance from the center of the ellipse to the edge, and b is the shortest).
Example: What is the cross-sectional area of the pipe under your home described in the introduction?
This is just πr² = π(0.15 m)² = π(0.0225) m² ≈ 0.071 m², taking the 0.15 m measurement as the pipe's radius. Note that the length of the pipe is irrelevant to this calculation.
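The two cylinder cases discussed above can be written as a couple of one-line functions; the sketch below is illustrative only and reuses the 0.15 m figure from the pipe example.

```python
import math

def circle_cross_section(radius):
    """Cut perpendicular to the cylinder's axis: a circle of area pi * r^2."""
    return math.pi * radius ** 2

def ellipse_cross_section(semi_major, semi_minor):
    """Oblique cut through the cylinder: an ellipse of area pi * a * b."""
    return math.pi * semi_major * semi_minor

print(f"{circle_cross_section(0.15):.3f} m^2")         # 0.071 m^2, the pipe example
print(f"{ellipse_cross_section(0.20, 0.15):.3f} m^2")  # a hypothetical oblique cut
```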
Cross-Sectional Area of a Sphere
Any theoretical plane placed through a sphere will result in a circle (think about this for a few moments). If you know either the diameter or the circumference of the circle the cross-section forms, you can use the relationships C = 2πr and A = πr² to obtain a solution.
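Those two relationships can be chained in a short function; the sketch below (illustrative only, not from the original text) recovers the radius from a measured circumference and returns the area of the disk, which is exactly what the worked example that follows does by hand.

```python
import math

def sphere_cross_section_from_circumference(circumference):
    """Area of the circular cross-section of a sphere, given the
    circumference of the circle that the cutting plane produces."""
    radius = circumference / (2 * math.pi)   # from C = 2 * pi * r
    return math.pi * radius ** 2             # A = pi * r^2

print(f"{sphere_cross_section_from_circumference(10):.2f} m^2")  # about 7.96 m^2
```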
Example: A plane is rudely inserted through the Earth very close to the North Pole, removing a section of the planet 10 m around. What is the cross-sectional area of this chilly slice of Earth?
Since C = 2πr = 10 m, r = 10/(2π) = 1.59 m; A = πr² = π(1.59)² = 7.96 m². |
Blue King Crab
Did You Know?
King crabs are not true crabs like Dungeness or snow crab, but are more closely related to hermit crabs.
Blue king crab, like all king crabs are decapod or “ten-legged” crustaceans that have "tails," or abdomens, that are distinctive, being fan-shaped and tucked underneath the rear of the shell. They also have five pairs of legs; the first bears their claws or pincers, the right claw is usually the largest on the adults, the next three pairs are their walking legs, and the fifth pair of legs are small and normally tucked underneath the rear portion of their carapace (the shell covering their back). These specialized legs are used by adult females to clean their embryos (fertilized eggs) and the male uses them to transfer sperm to the female during mating.
Growth and Reproduction
Blue king crab are similar in size and appearance, except for color, to the more widespread red king crab, but are typically biennial spawners with lesser fecundity and somewhat larger sized eggs. It may not be possible for large female blue king crabs to support the energy requirements for annual ovary development, growth, and egg extrusion due to limitations imposed by their habitat, such as poor quality or low abundance of food or reduced feeding activity due to cold water. Both the large size reached by blue king crab and the generally high productivity of the Pribilof and St Matthew island areas, however, argue against such environmental constraints. Development of the fertilized embryos occurs in the egg cases attached to the pleopods beneath the abdomen of the female crab and hatching occurs February through April. After larvae are released, large female blue king crab will molt, mate, and extrude their clutches the following year in late March through mid April.
Female crabs require an average of 29 days to release larvae, and release an average of about 110,000 larvae. Larvae are pelagic and pass through four zoeal larval stages which last about 10 days each, with length of time being dependent on temperature; the colder the temperature the slower the development and vice versa. Stage I zoeae must find food within 60 hours as starvation reduces their ability to capture prey and successfully molt. Zoeae consume phytoplankton, the diatom Thalassiosira spp. in particular, and zooplankton. The fifth larval stage is the non-feeding and transitional glaucothoe stage in which the larvae take on the shape of a small crab but retain the ability to swim by using their extended abdomen as a tail. This is the stage at which the larvae searches for appropriate settling substrate, and once finding it, molts to the first juvenile stage and henceforth remains benthic. The larval stage is estimated to last for 2.5 to 4 months and larvae metamorphose and settle during July through early September.
Blue king crab molt frequently as juveniles, growing a few millimeters in size with each molt. Unlike red king crab juveniles, blue king crab juveniles are not known to form pods. Female king crabs typically reach sexual maturity at approximately five years of age while males may reach maturity one year later, at six years of age.
Longevity is unknown for the species, due to the absence of hard parts retained through molts with which to age crabs. Estimates of 20 to 30 years in age have been suggested.
Food eaten by king crabs varies by species, size, and depth inhabited. King crabs are known to eat a wide assortment of marine life including worms, clams, mussels, snails, brittle stars, sea stars, sea urchins, sand dollars, barnacles, crabs, other crustaceans, fish parts, sponges, and algae.
King crabs are eaten by a wide variety of organisms including but not limited to fishes (Pacific cod, sculpins, halibut, yellowfin sole), octopuses, king crabs (they can be cannibalistic), sea otters, and several new species of nemertean worms, which have been found to eat king crab embryos.
Adult blue king crabs exhibit nearshore to offshore (or shallow to deep) and back, annual migrations. They come to shallow water in late winter and by spring the female's embryos hatch. Adult females and some adult males molt and mate before they start their offshore feeding migration to deeper waters. Adult crabs tend to segregate by sex off the mating-molting grounds. Red, blue, and golden king crabs are seldom found co-existing with one another even though the depth ranges they live in and habitats may overlap.
Range and Habitat
Blue king crab are anomurans in the family Lithodidae which also includes the red king crab Paralithodes camtschaticus and golden or brown king crab Lithodes aequispinus in Alaska. Blue king crabs occur off Hokkaido in Japan, with disjunct populations occurring in the Sea of Okhotsk and along the Siberian coast to the Bering Straits. In North America, they are known from the Diomede Islands, Point Hope, outer Kotzebue Sound, King Island, and the outer parts of Norton Sound. In the remainder of the Bering Sea, they are found in the waters off St. Matthew Island and the Pribilof Islands. In more southerly areas as far as southeastern Alaska in the Gulf of Alaska, blue king crabs are found in widely-separated populations that are frequently associated with fjord-like bays. This disjunct, insular distribution of blue king crab relative to the similar but more broadly distributed red king crab is likely the result of post-glacial period increases in water temperature that have limited the distribution of this cold-water adapted species. Factors that may be directly responsible for limiting the distribution include the physiological requirements for reproduction, competition with the more warm-water adapted red king crab, exclusion by warm-water predators, or habitat requirements for settlement of larvae.
Status, Trends, and Threats
The two major populations of blue king crab in Alaska are in the Pribilof Islands and in St. Matthew Island areas. Since the early 1980s abundance of the Pribilof Island population has peaked during the early 1980s and the mid-1990s, but has been at a fairly low level ever since. This population is too low at present to support a directed commercial fishery. The St. Matthew Island population also experienced peaks in abundance during the early 1980s and mid-1990s, but is increasing at the present. Recent increases in this population have allowed for a directed commercial fishery in 2009 and again in 2010.
The Pribilof Island population is currently experiencing low abundance relative to peaks in abundance in the early 1980s and mid-1990s. The St. Matthew Island population is currently trending upwards after ten years of low abundance.
No known threats at this time although there is concern for the continued low abundance of Pribilof Island blue king crab, despite no directed commercial harvest since 1999.
Size: Up to 18 pounds for a mature male.
Range: Disjunct populations in the North Pacific Ocean, with major concentrations primarily in the Bering Sea.
Diet: Wide assortment of invertebrates including worms, clams, mussels, snails, brittle stars, sea stars, sea urchins, sand dollars, barnacles, crabs, other crustaceans, fish parts, sponges, and algae.
Predators: A wide variety of marine fishes, king crab, and octopus.
Reproduction: Biennial spawning, embryos held in egg cases attached to the abdomen.
Management: Bering Sea populations off of Alaska are managed jointly by the Alaska Department of Fish and Game and the National Marine Fisheries Service. |
Air, water, and soil are the three natural resources that are vital to life on earth. Most of us easily recognize the importance of air and water in our daily lives. Each time we take a breath or a sip of water the importance of both are apparent, but how often do we think of the soil beneath our feet as vital to our existence?
Soil is the top layer on the earth surface. Soil provides many important functions for plants, animals, and humans. It supports growing plants, trees, crops and millions of organisms. Most of our food depends on soil. Soil filters pollutants from our environment and from our drinking water. It regulates the flow of water through the landscape before it ends up in plant roots, aquifers and rivers. Soil is the foundation of our buildings and roads. It also protects and preserves our history and our past in archeological sites. What is soil? Soil is made up of four main components: mineral grains, organic matter, water, and air. The largest component is mixture of mineral grains from broken down rocks and sediment created by the effects of wind, rainfall, snow, freezing, and thawing. This layer forms slowly over hundreds of years. This mixture gives the soil texture which can be described as sand, loam, or clay. Organic matter forms from rotted and decomposed vegetation broken down by soil organisms. This is a very thin layer but an important one that acts like glue holding all four components together. The amount of water and air in the soil varies and is based on climate, soil texture, and water holding capacity. These four components make up a very complex ecosystem home to millions of livings creatures.
Burrowing mammals are very common in semi-arid and arid landscapes. Their extensive underground pathways and dwellings provide protection from predators and weather extremes. These creatures can be as large as a badger or as small as a shrew. They include ground squirrels, pocket gophers, prairie dogs, kangaroo rats, pocket mice, and many other mammals.
Their digging mixes subsurface materials with surface soils, litter, and feces. This helps fertilize the soil and buries carbon, which benefits many plants and soil microorganisms. Their burrows and tunnels allow water from high intensity storms to rapidly infiltrate into the soil instead of running off. Burrows carry oxygen deep into the soil, helping to aerate the soil around plant roots. Their burrowing activities help transport mycorrhizal and other fungus spores.
Some mammals, such as kangaroo rats and pocket mice, bury seeds in caches that serve as a valuable seed source for plant establishment. The burying of organic matter with the seed provides a supply of nutrients for seedling survival. Small mammals consume and help control soil arthropod populations.
Soil arthropods are invertebrates with segmented bodies and jointed legs. They can be microscopic or quite large. There are lots of different arthropods such as insects, crustaceans, arachnids, and myriapods. They are the largest animal phylum. Arthropods fly, creep, and crawl. They are commonly thought of as bugs.
Arthropods perform many different functions in the soil community. Some are shredders, others predators. Some arthropods eat plants, while others feed strictly on fungus. They aerate the soil, shred organic matter into small pieces and assist other soil organisms in the decomposition process. They help distribute beneficial microbes in the soil. Through consumption, digestion, and excretion of soil organic matter, soil arthropods help improve soil structure and change nutrients into forms available to plants.
They regulate populations of other soil organisms, like protozoa, which help maintain a healthy soil food web and control disease-causing organisms. In turn, soil arthropods are consumed by burrowing mammals, birds, and lizards.
Nematodes are tiny roundworms that are common in soils everywhere. From the freezing Arctic to dry, hot deserts, one cubic foot of soil can contain millions of them. Nematodes can be classified most easily by their feeding habits: some graze on bacteria and fungi, some feed on plant roots, others prey on tiny animals, and some will eat any of the above.
Nematodes can’t move through the soil unless a film of moisture surrounds the soil particles. Under hot, dry conditions, nematodes can become dormant, allowing them to survive long periods of drought. When water becomes available, they quickly spring back to life.
Among the thousands of species that have been identified, many are considered beneficial because they boost the nutritional status of the soil. Nematodes feed on decaying plant material, along with organisms that assist in the decomposition of organic matter (bacteria and fungi). This helps disperse both the organic matter and the decomposers in the soil. Increased organic matter concentration and decomposition boost nitrogen and phosphorus levels.
Because some nematodes prey on other animals, they can be useful for control of pest insects. Nematodes aren’t all good. Some damage the roots of domestic crops, costing U.S. farmers an estimated $8 billion a year.
Protozoa are tiny single-celled animals that mainly feed on bacteria. They are microscopic, and a pinch of soil can contain thousands. All protozoa need water to move through soil; however, they need only a thin film surrounding the soil particles to get around. Protozoa are found in soils everywhere, even in very dry deserts. However, they are most abundant near plant roots, because that is where both bacteria and organic matter are concentrated in the soil.
Protozoa play an important role: they eat bacteria and release nitrogen and other nutrients in their waste. Since protozoa are concentrated near plant roots, the plant can benefit from this supply of nutrients. Protozoa can stimulate the rate of decomposition by maximizing bacterial activity. Protozoa are in turn consumed by nematodes and microarthropods. Not all protozoa are beneficial. Some protozoa attack roots and cause disease in rangeland plants. However, other protozoa feed on root pathogens, thus reducing plant disease.
Bacteria are minuscule one-celled organisms that can only be seen with a powerful light or electron microscope. They can be so numerous that a pinch of soil can contain millions of organisms. Bacteria are tough: they occur everywhere on Earth and have even been found more than a mile below the surface. Bacteria are common throughout the soil but tend to be most abundant in or adjacent to plant roots, an important food source.
Bacteria are important in the carbon cycle. They contribute carbon to the system by fixation (photosynthesis) and decomposition. Actinomycetes are particularly effective at breaking down tough substances like cellulose (which makes up the cell walls of plants) and chitin (which makes up the cell walls of fungi) even under harsh conditions, such as high soil pH.
Bacteria are particularly important in nitrogen cycling. Free-living bacteria fix atmospheric nitrogen, adding it to the soil nitrogen pool. Other nitrogen-fixing bacteria form associations with the roots of plants and fix nitrogen, which is then available to both the host and other plants in the near vicinity. Some soil nitrogen is unusable by plants until bacteria convert it to forms that can be easily assimilated.
Some bacteria exude a sticky substance that helps bind soil particles into small aggregates. So despite their small size, they help improve water infiltration, water-holding capacity, soil stability, and aeration.
As long as soil bacteria do not get out of balance, they suppress root disease in plants by competing with pathogenic organisms. Bacteria are also becoming increasingly important in bioremediation: they are capable of filtering and degrading a large variety of human-made pollutants in the soil and groundwater so that these are no longer toxic. The list of materials they can detoxify includes herbicides, heavy metals, and petroleum products.
Mycorrhizal fungi colonize the roots of many plants. They do not harm the plant; instead, they develop a "symbiotic" relationship that helps the plant obtain nutrients and water more efficiently. In return, the plant provides energy to the fungus in the form of sugars.
The fungus is actually a network of filaments that grow in and around the plant root cells, forming a mass that extends considerably beyond the plant’s root system. This essentially extends the plant’s reach to water and nutrients, allowing it to utilize more of the soil’s resources. This makes the plant stronger, especially during drought periods. A stronger individual plant means that the entire community is more resilient to disturbance. Some mycorrhizae may even protect their host plant against unwanted pathogens.
Not all fungi are mycorrhizal. There are also fungi that help decompose the organic matter in litter and soil. However, they play a lesser role than bacteria in this important process in semi-arid and arid soils.
Some plants, such as the soaptree yucca, are "mycorrhizal-obligate," meaning that they cannot survive to maturity without their fungal associate. Mycorrhizae are particularly important in assisting the host plant with the uptake of phosphorus and nitrogen, two nutrients vital to plant growth.
Biological crust is a complex community of living organisms—algae, cyanobacteria, bacteria, lichen, mosses, liverworts, and fungi—that grow on or just below the soil surface. Biological soil crusts are common worldwide in arid and semi-arid shrublands, grasslands, and woodlands.
They are highly variable in appearance. Which organisms dominate the crust is determined by several factors, including soil chemical and physical characteristics and weather patterns. Biological soil crusts are distinguishable from bare soil by a bumpy appearance, forming sort of a mini-landscape on the soil surface complete with hills and valleys. They tend to be dark in color, especially when dry.
Biological soil crusts are known by many names, such as microbiotic, cryptogamic, cryptobiotic, microphytic, and microfloral crusts, but all of these terms refer to the same thing.
Biological soil crusts stabilize the soil. Some of the organisms secrete sticky substances (polysaccharides), which hold soil particles together. The crusts make the soil more fertile. Most of the organisms associated with the biological soil crust are photosynthetic, particularly during cold, wet seasons when most plants are dormant. This means that the biological soil crust increases the length of the time during which organic carbon is added to topsoil. Biological soil crusts can make other nutrients more available for use by grasses, forbs, and shrubs, as nutrients adhere to the sticky substances, and are prevented from leaching. Biological soil crusts may help the soil to retain more moisture depending on both the composition of the crust and soil characteristics.
Cyanobacteria, once known as blue-green algae, make up the largest part of the biological soil crust here at White Sands. Cyanobacteria combined with fungi form a bumpy lichen crust that protects the surface of the interdunes from erosion. This crust enables plants to grow in nutrient-poor sand by building up soil layers and by stabilizing the ever-shifting sand.
Cyanobacteria take nitrogen from the air and make it available to plants in the interdune area. They also provide firm footing for plants to take hold and root, and they feed a host of tiny mites that live in the soil and in the leaves of plants and plant litter that accumulate under shrubs. Other mites graze on the fungi and transport fungal spores to places where they will grow.
Cyanobacteria can be found in almost every terrestrial and aquatic habitat. They are among the oldest known fossils, dating back more than 3.5 billion years, and they are one of the largest and most important groups of bacteria on Earth.
Learn more: Soil Food Web
Spread of Angles (red) and Saxons (yellow) around 500 AD
Regions with significant populations: Schleswig, Holstein, Jutland, Frisia, Heptarchy (England)
Language: Old English
Religion: Originally Germanic and Anglo-Saxon paganism, later Christianity
Related ethnic groups: Anglo-Saxons, Saxons, Frisii, Jutes
The Angles (Old English: Ængle, Engle; Latin: Angli; German: Angeln) were one of the main Germanic peoples who settled in Great Britain in the post-Roman period. They founded a number of kingdoms of Anglo-Saxon England, and their name is the root of the name England ("land of Ængle"). According to Tacitus, before their move to Britain, Angles lived alongside Langobardi and Semnones in historical regions of Schleswig and Holstein, which are today part of northern Germany (Schleswig-Holstein).
The name of the Angles may have been first recorded in Latinised form, as Anglii, in the Germania of Tacitus. It is thought to derive from the name of the area they originally inhabited, the Anglia Peninsula (Angeln in modern German, Angel in Danish). This name has been hypothesised to originate from the Germanic root for "narrow" (compare German and Dutch eng = "narrow"), meaning "the Narrow [Water]", i.e., the Schlei estuary; the root would be *h₂enǵʰ, "tight". Another theory is that the name meant "hook" (as in angling for fish), in reference to the shape of the peninsula; Indo-European linguist Julius Pokorny derives it from Proto-Indo-European *h₂enk-, "bend" (see ankle).
During the fifth century, all Germanic tribes who invaded Britain were referred to as Englisc, Ængle, or Engle, and all were speakers of Old English (which was known as Englisc, Ænglisc, or Anglisc). Englisc and its descendant, English, also go back to Proto-Indo-European *h₂enǵʰ-, meaning "narrow". In any case, the Angles may have been called such because they were a fishing people or were originally descended from such, so England would mean "land of the fishermen", and English would be "the fishermen's language".
Gregory the Great, in an epistle, simplified the Latinised name Anglii to Angli, the latter form developing into the preferred form of the word. The country remained Anglia in Latin. Alfred the Great's translation of Orosius's history of the world uses Angelcynn (-kin) to describe the English people; Bede used Angelfolc (-folk); also such forms as Engel, Englan (the people), Englaland, and Englisc occur, all showing i-mutation.
The earliest known mention of the Angles may be in chapter 40 of Tacitus's Germania written around AD 98. Tacitus describes the "Anglii" as one of the more remote Suebic tribes compared to the Semnones and Langobardi, who lived on the Elbe and were better known to the Romans. He grouped the Angles with several other tribes in that region, the Reudigni, Aviones, Varini, Eudoses, Suarini, and Nuitones. These were all living behind ramparts of rivers and woods, and therefore inaccessible to attack.
He gives no precise indication of their geographical situation, but states that, together with the six other tribes, they worshiped Nerthus, or Mother Earth, whose sanctuary was located on "an island in the Ocean". The Eudoses are the Jutes; these names probably refer to localities in Jutland or on the Baltic coast. The coast contains sufficient estuaries, inlets, rivers, islands, swamps, and marshes to have been then inaccessible to those not familiar with the terrain, such as the Romans, who considered it unknown, inaccessible, with a small population and of little economic interest.
The majority of scholars believe that the Anglii lived on the coasts of the Baltic Sea, probably in the southern part of the Jutish peninsula. This view is based partly on Old English and Danish traditions regarding persons and events of the fourth century, and partly because striking affinities to the cult of Nerthus as described by Tacitus are to be found in pre-Christian Scandinavian religion.
Ptolemy, writing in around 150 AD, in his atlas Geography (2.10), describes the Sueboi Angeilloi, Latinised to Suevi Angili, further south, living in a stretch of land between the northern Rhine and central Elbe, but apparently not touching either river, with the Suebic Langobardi on the Rhine to their west, and the Suebic Semnones on the Elbe stretching to their east.
These Suevi Angili would have been in Lower Saxony or near it, but they are not coastal. The three Suebic peoples are separated from the coastal Chauci (between the Ems and the Elbe), and Saxones (east of the Elbe mouth), by a series of tribes including, between the Weser and Elbe, the Angrivarii, "Laccobardi" (probably another reference to the Langobardi, but taken by Ptolemy from another source), and the Dulgubnii. South of the Saxons, and east of the Elbe, Ptolemy lists the "Ouirounoi" (Latinised as Viruni, and probably the Varini) and Teutonoari, which either denotes "the Teuton men", or else it denotes people living in the area where the Teutons had previously lived (whom Ptolemy attests as still living to the east of the Teutonoari). Ptolemy describes the coast to the east of the Saxons as inhabited by the Farodini, a name not known from any other sources.
Owing to the uncertainty of this passage, much speculation existed regarding the original home of the Anglii. One theory is that they or part of them dwelt or moved among other coastal people, perhaps confederated up to the basin of the Saale (in the neighbourhood of the ancient canton of Engilin) on the Unstrut valleys below the Kyffhäuserkreis, from which region the Lex Anglorum et Werinorum hoc est Thuringorum is believed by many to have come. The ethnic names of Frisians and Warines are also attested in these Saxon districts.
A second possible solution is that these Angles of Ptolemy are not those of Schleswig at all. According to Julius Pokorny, the Angri- in Angrivarii, the -angr in Hardanger and the Angl- in Anglii all come from the same root meaning "bend", but in different senses. In other words, the similarity of the names is strictly coincidental and does not reflect any ethnic unity beyond Germanic.
However, Gudmund Schütte, in his analysis of Ptolemy, believes that the Angles have simply been moved by an error coming from Ptolemy's use of imperfect sources. He points out that Angles are placed correctly just to the northeast of the Langobardi, but that these have been duplicated, so that they appear once, correctly, on the lower Elbe, and a second time, incorrectly, at the northern Rhine.
Bede states that the Anglii, before coming to Great Britain, dwelt in a land called Angulus, "which lies between the province of the Jutes and the Saxons, and remains unpopulated to this day." Similar evidence is given by the Historia Brittonum. King Alfred the Great and the chronicler Æthelweard identified this place with Anglia, in the province of Schleswig (Slesvig) (though it may then have been of greater extent), and this identification agrees with the indications given by Bede.
In the Norwegian seafarer Ohthere of Hålogaland's account of a two-day voyage from the Oslo fjord to Schleswig, he reported the lands on his starboard bow, and Alfred appended the note "on these islands dwelt the Engle before they came hither".[n 1] Confirmation is afforded by English and Danish traditions relating to two kings named Wermund and Offa of Angel, from whom the Mercian royal family claimed descent and whose exploits are connected with Anglia, Schleswig, and Rendsburg. Danish tradition has preserved record of two governors of Schleswig, father and son, in their service, Frowinus (Freawine) and Wigo (Wig), from whom the royal family of Wessex claimed descent. During the fifth century, the Anglii invaded Great Britain, after which time their name does not recur on the continent except in the title of the legal code issued to the Thuringians: Lex Anglorum et Werinorum hoc est Thuringorum.
The Angles are the subject of a legend about Pope Gregory I, who happened to see a group of Angle children from Deira for sale as slaves in the Roman market. As the story would later be told by the Anglo-Saxon monk and historian Bede, Gregory was struck by the unusual appearance of the slaves and asked about their background. When told they were called "Anglii" (Angles), he replied with a Latin pun that translates well into English: “Bene, nam et angelicam habent faciem, et tales angelorum in caelis decet esse coheredes” ("It is well, for they have an angelic face, and such people ought to be co-heirs of the angels in heaven"). Supposedly, this encounter inspired the pope to launch a mission to bring Christianity to their countrymen.
The province of Schleswig has proved rich in prehistoric antiquities that date apparently from the fourth and fifth centuries. A large cremation cemetery has been found at Borgstedt, between Rendsburg and Eckernförde, and it has yielded many urns and brooches closely resembling those found in pagan graves in England. Of still greater importance are the great deposits at Thorsberg moor (in Anglia) and Nydam, which contained large quantities of arms, ornaments, articles of clothing, agricultural implements, etc., and in Nydam, even ships. By the help of these discoveries, Angle culture in the age preceding the invasion of Britannia can be pieced together.
Anglian kingdoms in England
According to sources such as the History of Bede, after the invasion of Britannia, the Angles split up and founded the kingdoms of Northumbria, East Anglia, and Mercia. H.R. Loyn has observed in this context that "a sea voyage is perilous to tribal institutions", and the apparently tribe-based kingdoms were formed in England. Early times had two northern kingdoms (Bernicia and Deira) and two midland ones (Middle Anglia and Mercia), which had by the seventh century resolved themselves into two Angle kingdoms, viz., Northumbria and Mercia. Northumbria held suzerainty amidst the Teutonic presence in the British Isles in the seventh century, but was eclipsed by the rise of Mercia in the eighth century. Both kingdoms fell in the great assaults of the Danish Viking armies in the 9th century. Their royal houses were effectively destroyed in the fighting, and their Angle populations came under the Danelaw. Further south, the Saxon kings of Wessex withstood the Danish assaults. Then in the late 9th and early 10th centuries, the kings of Wessex defeated the Danes and liberated the Angles from the Danelaw. They united their house in marriage with the surviving Angle royalty, and were accepted by the Angles as their kings. This marked the passing of the old Anglo-Saxon world and the dawn of the "English" as a new people. The regions of East Anglia and Northumbria are still known by their original titles. Northumbria once stretched as far north as what is now southeast Scotland, including Edinburgh, and as far south as the Humber Estuary.
The rest of that people stayed at the centre of the Angle homeland in the northeastern portion of the modern German Bundesland of Schleswig-Holstein, on the Jutland Peninsula. There, a small peninsular area is still called Anglia today and is formed as a triangle drawn roughly from modern Flensburg on the Flensburger Fjord to the City of Schleswig and then to Maasholm, on the Schlei inlet.
Tacitus, De origine et situ Germanorum, XL, 1
- Pyles, Thomas and John Algeo 1993. Origins and development of the English language. 4th edition. (New York: Harcourt, Brace, Jovanovich).
- Barber, Charles, Joan C. Beal and Philip A. Shaw 2009. The English language: A historical introduction. Second edition of Barber (1993). Cambridge: Cambridge University Press. (Notes that other Indo-European languages have derivatives of the PIE roots sten-, lepto-, or dolikho- as root words for "narrow".)
- Baugh, Albert C. and Thomas Cable 1993 A history of the English language. 4th edition. (Englewood Cliffs: Prentice Hall).
- Gregory said Non Angli, sed angeli, si forent Christiani ("They are not Angles, but angels, if they were Christian") after receiving a response to his query regarding the identity of a group of fair-haired Angles, slave children whom he had observed in the marketplace. See p. 117 of Zuckermann, Ghil'ad (2003), Language Contact and Lexical Enrichment in Israeli Hebrew. Palgrave Macmillan. ISBN 9781403917232 / ISBN 9781403938695
- Fennell, Barbara 1998. A history of English. A sociolinguistic approach. Oxford: Blackwell.
- Tacitus & 98, Cap. XL.
- Church (1868), Ch. XL.
- Chadwick 1911, pp. 18–19.
- "Lex Anglorum et Werinorum hoc est Thuringorum - Wikisource". la.wikisource.org (in Latin). Retrieved 6 September 2017.
- Schütte (1917), p. 34 & 118.
- Sweet (1883), p. 19.
- Loyn (1991), p. 24.
- Bede (731), Lib. II.
- Jane (1903), Vol. II.
- Loyn (1991), p. 25.
- Beda (731), Historia ecclesiastica gentis Anglorum [The Ecclesiastical History of the English People]. (in Latin)
- Bede (1907) [Reprinting Jane's 1903 translation for J.M. Dent & Co.'s 1903 The Ecclesiastical History of the English Nation], Bede's Ecclesiastical History of England: A Revised Translation, London: George Bell & Sons.
- Cornelius Tacitus, Publius, De origine et situ Germanorum [On the Origin & Situation of the Germans]. (in Latin)
- Cornelius Tacitus, Publius (1942) [First published in 1928, reprinting Church and Brodribb's translations for Macmillan & Co.'s 1868 The Agricola and Germany of Tacitus], in Hadas, Moses; Cerrato, Lisa (eds.), The Complete Works of Tacitus, New York: Random House.
- Schütte, Gudmund (1917), Ptolemy's Maps of Northern Europe: A Reconstruction of the Prototypes, Copenhagen: Græbe for H. Hagerup for the Royal Danish Geographical Society
- Sweet, Henry (1883), King Alfred's Orosius, Oxford: E. Pickard Hall & J.H. Stacy for N. Trübner & Co. for the Early English Text Society
- Loyn, Henry Royston (1991), A Social and Economic History of England: Anglo-Saxon England and the Norman Conquest, 2nd ed., London: Longman Group, ISBN 978-0582072978
A SOURCE of methane gas has been identified by scientists probing the atmosphere of Mars, showing for the first time a possible location for life on the red planet.
The gas, thought to be produced by underground colonies of microbes, has been detected at high levels in Meridiani Planum, a low-lying region near the equator thought to have been covered once by an ocean.
Scientists say the only other possible source of the gas would be volcanoes but, while this has not been ruled out, most evidence suggests there has been no volcanic activity on Mars for millions of years.
Jim Garvin, head of Mars exploration at the National Aeronautics and Space Administration (Nasa), said the results were “very impressive”. He added: “This has sharply raised the chances of finding life on Mars. It is too early to be sure but the most likely source of this gas is life.”
He made the announcement of the find last week at an international astronomy conference in Iceland attended by more than 200 scientists.
The significance of methane lies in the fact that it is quickly destroyed by radiation from the Sun. This means it could exist on Mars only if there was an active and constant source to replenish what was being lost.
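The reasoning here is essentially a steady-state argument: if sunlight destroys methane on a characteristic timescale, then whatever methane is observed today must be replaced at roughly the rate of the amount present divided by that lifetime. The short sketch below runs that arithmetic with illustrative numbers only: a mixing ratio of about 10 parts per billion, a photochemical lifetime of about 300 years, and a Martian atmosphere of roughly 2.5 × 10^16 kg. These values are assumptions chosen for the example, not figures reported in the article.

```python
# Minimal sketch of the steady-state argument: source rate ~ inventory / lifetime.
# All input numbers are illustrative assumptions, not values from the article.

MARS_ATMOSPHERE_KG = 2.5e16      # approximate total mass of the Martian atmosphere
MEAN_MOLAR_MASS_G = 43.3         # mostly CO2, grams per mole (approximate)
CH4_MOLAR_MASS_G = 16.0          # grams per mole of methane
CH4_MIXING_RATIO = 10e-9         # ~10 parts per billion by volume (assumed)
CH4_LIFETIME_YEARS = 300.0       # assumed photochemical lifetime against solar UV

def required_source_tonnes_per_year() -> float:
    total_moles = (MARS_ATMOSPHERE_KG * 1000.0) / MEAN_MOLAR_MASS_G
    ch4_moles = total_moles * CH4_MIXING_RATIO            # mixing ratio is by mole/volume
    ch4_mass_kg = ch4_moles * CH4_MOLAR_MASS_G / 1000.0   # total methane inventory
    return ch4_mass_kg / CH4_LIFETIME_YEARS / 1000.0      # tonnes per year to hold steady

if __name__ == "__main__":
    rate = required_source_tonnes_per_year()
    print(f"Roughly {rate:,.0f} tonnes of methane per year would be needed "
          f"to maintain a {CH4_MIXING_RATIO * 1e9:.0f} ppbv atmosphere.")
```

Under these assumptions the answer comes out at a few hundred tonnes per year, which is tiny on planetary terms but still demands a continuous local source, whether biological or geological.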
The only likely place for life on Mars is underground, where bacteria could be protected from solar radiation. On Earth, methane is produced by volcanic activity but the main sources are biological, especially bacteria that break down organic matter.
The discovery is the latest of several indicators suggesting that Mars may have — or have had — the ability to support life. In March, Nasa’s Opportunity rover confirmed earlier evidence that oceans may have covered much of Mars.
This was also supported by the European Space Agency’s Mars Express, from which Britain’s ill-fated Beagle 2 rover was launched. Such evidence suggests that water, probably in the form of ice, is still hidden underground all over the planet.
Meridiani Planum is thought to be one of the most likely sites for life to have evolved because of its wet past and because it may also have large reserves of underground ice. Nasa’s orbiting Odyssey spacecraft has sent back spectacular pictures from the surface of Mars showing channels and scouring probably caused by water flows.
Nasa is already exploring the Meridiani area with Opportunity, but the rover has no instruments capable of measuring methane levels.
The emissions were measured from Earth using telescopes equipped with an infrared spectrometer which can analyse light reflected from Mars and pick out the unique signature of methane molecules. Nasa has found methane in all parts of the Martian atmosphere but with particularly high levels around Meridiani.
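To give a feel for what "picking out the unique signature of methane" means in practice, the toy sketch below simulates a reflected-light spectrum with an absorption dip near methane's 3.3-micrometre band and measures the depth of that dip against the surrounding continuum. It is a simplified illustration of the general idea only, not the researchers' actual instrument pipeline; the band centre, line width, and noise level are all assumed for the example.

```python
# Toy illustration of spotting a methane absorption feature in a spectrum.
# Not the real pipeline: the band parameters and noise level are assumed values.
import numpy as np

rng = np.random.default_rng(0)

wavelength_um = np.linspace(3.20, 3.40, 400)        # window around the 3.3 um CH4 band
continuum = np.ones_like(wavelength_um)             # flat continuum for simplicity
band_center, band_width, band_depth = 3.30, 0.01, 0.05
absorption = band_depth * np.exp(-0.5 * ((wavelength_um - band_center) / band_width) ** 2)
spectrum = continuum - absorption + rng.normal(0.0, 0.005, wavelength_um.size)

# Estimate the continuum from regions well away from the band, then measure the dip.
off_band = np.abs(wavelength_um - band_center) > 5 * band_width
continuum_level = spectrum[off_band].mean()
dip_depth = continuum_level - spectrum[~off_band].min()
noise = spectrum[off_band].std()

print(f"Measured dip depth: {dip_depth:.3f} (noise level {noise:.3f})")
print("Feature detected" if dip_depth > 3 * noise else "No significant feature")
```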
Professor Michael Mumma, who heads Nasa’s astrobiology research programme at Goddard Space Flight Center in Maryland, oversaw the project. He said that frequent strong winds on Mars meant gases in its atmosphere tended to be swiftly mixed together.
“It means that if we find relatively high concentrations, then there must be a local source,” he said.
“The levels we are finding around Meridiani are powerful evidence that there is something emitting it at a steady rate. There is no evidence of vulcanism or tectonic plate movements, so it does favour the conclusion that this methane is of biological origin.”
The location of the discovery fits in with theories about Martian evolution, which suggest that the red planet may once have been lush and green and that its history has been marked by huge climatic swings.
As the climate enters a warmer phase, the underground ice may melt and emerge. Meridiani Planum is thought to have once been covered by a sea hundreds of metres deep.
Methane has also been detected on Mars by other researchers. Earlier this year scientists at the European Space Agency announced that its Mars Express probe had spotted the telltale signature of methane in the atmosphere.
Now both agencies plan to return to Mars with spacecraft able to drill into the soil and analyse the atmosphere directly. Mumma said: “The clear implication is that these emissions are a sign of life on Mars but we need to go there again to find out for sure.”
Researchers think it highly likely that if evidence of bacteria is found in one place then they must exist elsewhere, too, so the scientists are planning to survey the whole planet.

Nasa spacecraft may have carried bacteria to the Moon, Mars and across the solar system, a senior official with the organisation has admitted.
The lunar Surveyor spacecraft, which landed on the Moon in 1967, was examined by astronauts from the subsequent Apollo 12 mission in 1969. The samples they brought back were contaminated with streptococcus bacteria, said John Rummel, Nasa’s planetary protection officer.
At the Iceland conference last week he also confirmed that the Nasa craft now surveying Mars, including the rovers, had not been sterilised and were carrying microbes.
Glossary of Terms
Most frequently used terms
Collaborative Action Research (CAR) uses research to critically examine current arrangements, make changes based on evidence and monitor the impact of those changes.
At its core, CAR should involve two or more organisations working together, to share ideas and perspectives, for the accomplishment of a shared goal.
In education, CAR can help to improve student learning and individual and wider professional practice, and combat professional isolation.
It has three steps:
- Defining the problem (what are we trying to accomplish?)
- Detailing the anticipated outcome (how will we know that a change is an improvement?)
- Developing sustainable models of change (what change can we then make that will result in improvement?)
The Scottish Attainment Challenge is about achieving equity in education, ensuring every child has the same opportunity to succeed.
It focuses on improvement activity in literacy, numeracy and health and wellbeing in specific areas of Scotland:
- West Dunbartonshire
- North Ayrshire
- North Lanarkshire
- East Ayrshire
A full explanation is available on the Scottish Government website.
The Pupil Equity Fund (PEF) is aimed at closing the poverty-related attainment gap.
It is given directly to schools and is spent at the discretion of the head teacher working in partnership with their local authority. Some 95 percent of schools in Scotland have been given funding for pupils in Primary 1 to S3 known to be eligible for free school meals.
Regional Improvement Collaboratives (RICs) are 'virtual' bodies, formed with the aim of improving education and closing the poverty-related attainment gap in the schools in their areas.
They work together to give advice and support to schools, and to share examples of good work across local authority borders. There are six RICs in Scotland:
- Forth Valley and West Lothian Collaborative
- Northern Alliance
- South East Improvement Collaborative
- South West Collaborative
- Tayside Regional Improvement Collaborative
- West Partnership.
Those living in poverty often face the greatest challenge to reach their full potential.
Thanks to the Child Poverty (Scotland) Act 2017, Scotland has statutory targets to reduce the number of children experiencing the damaging effects of poverty by 2030.
On June 15, 1669, a French explorer by the name of Louis Jolliet became the first documented European to find and explore Lake Erie.

Long before the summer Louis Jolliet found the lake, it was inhabited by a few small Native American tribes that lived off various parts of the lake. The Erie tribe and the Attawandaron tribe were two of the peaceful, or "neutral," tribes in the area. The Europeans were very familiar with both tribes and knew they refused to practice violence. Quite some time later, the lake was named "Lake Erie" after the Erie tribe. "Erie" comes from a shortened version of "erielhonan," an Iroquoian word that means "long tail."

Though Jolliet was the first known European to explore Lake Erie, his efforts were limited and many others would come to explore the region over the years.

After his explorations on Lake Erie, Jolliet headed south for his next adventure, exploring the Mississippi River. Over the years he continued to explore North America and eventually spent his later years further north in Canada.
The United States Geological Survey (USGS) has released a ten-year science strategy, “USGS Science in the Decade 2007-2017.” As noted in the executive summary, “This report is the first comprehensive science strategy since the early 1990s to examine major USGS science goals and priorities.” Six tactical guidelines were given in the plan, including the following:
1) Understanding Ecosystems and Predicting Ecosystem Change: Ensuring the Nation’s Economic and Environmental Future;
2) Climate Variability and Change: Clarifying the Record and Assessing Consequences;
3) Energy and Minerals for America’s Future: Providing a Scientific Foundation for Resource Security, Environmental Health, Economic Vitality, and Land Management;
4) A National Hazards, Risk, and Resilience Assessment Program: Ensuring the Long-Term Health and Wealth of the Nation;
5) The Role of Environment and Wildlife in Human Health: A System that Identifies Environmental Risk to Public Health in America; and,
6) A Water Census of the United States: Quantifying, Forecasting, and Securing Freshwater for America’s Future.
The USGS plays a pivotal role in safeguarding the public against natural hazards and providing the science needed to manage water, biological, energy, and mineral resources and defend public health from contamination, pollution, and emerging diseases. With the potential for increased natural hazards due to climate change, it is fundamental that the USGS be adaptive in their approaches to providing scientific data for policy makers, resource managers, and the public at large. The USGS Science in the Decade 2007–2017 is available at http://pubs.usgs.gov/circ/2007/1309/.
Thinking about cancer or dealing with cancer risk can be scary or overwhelming, but we believe that receiving information and resources is comforting, empowering, and lifesaving.
Today’s oncology arsenal utilizes both preventive and treatment vaccines. Cancer prevention vaccines target viruses that are known to cause cancer. Some forms of the human papilloma virus (HPV), for example, have been linked to cervical and other cancers. HPV vaccines help to prevent cervical and other cancers associated with HPV infection.
No vaccines are currently approved to prevent hereditary cancers. One of the challenges involved is that the cancer develops from a person’s own cells, so the vaccine must be able to recognize the difference between a healthy cell and one that is evolving into a cancer cell. This is very different from the way other traditional vaccines that target viruses work. Because this research is still in very early stages it will likely be many years before vaccines to prevent hereditary cancers are available.
Cancer treatment vaccines are molecules that are introduced into the body to start an immune response against cancer cells; they are different from vaccines that work as prevention against viruses such as chicken pox. Instead of preventing disease, cancer treatment vaccines encourage the immune system to attack an existing disease. These vaccines are sometimes made from a patient's own tumor cells. Other types of vaccines target substances produced by tumor cells to try to prompt the immune system to mount an attack against cancer cells in the body. One example is sipuleucel-T, a cancer treatment vaccine that is used to treat some men with metastatic prostate cancer. Other cancer treatment vaccines are being tested in clinical trials to treat a range of cancers, including breast cancer.
Cancer vaccine side effects include pain, swelling, redness, bruising or itching at the site of injection. Rare side effects include fever, headache, dizziness, fatigue, nausea, vomiting and diarrhea, sleep problems, runny or stuffy nose, sore throat, cough, tooth pain, or joint or muscle pain.