Leonardo’s anatomical studies were among the most significant achievements of Renaissance science. However, during his lifetime, Leonardo’s medical investigations remained private. He did not consider himself a professional in the field of anatomy, and he neither taught nor published his findings.
Dr Abe V Rotor
Leonardo made many important discoveries. For instance, he produced the first accurate depiction of the human spine, while his notes documenting his dissection of the Florentine centenarian contain the earliest known description of cirrhosis of the liver. Had he published his treatise, he would be considered more important than the Belgian anatomist Andreas Vesalius, whose influential textbook On the Fabric of the Human Body appeared in 1543. But he never did.
Leonardo discovered several extraordinary things about the heart. Before his time, and for long after it, the heart was believed to be a two-chambered structure; Leonardo drew it with four chambers. Moreover, he discovered that the atria, or filling chambers, contract together while the pumping chambers, or ventricles, are relaxing, and vice versa. He also observed the heart’s rotational movement: the heart is a complex cone that empties itself with a twisting motion, wringing itself out like a towel. In heart failure the heart loses this twist, which Leonardo may have understood.
Not only an artist and anatomist, Leonardo also experimented as a scientist, with methods comparable to modern technique. Perhaps most impressive of all were Leonardo’s observations about the aortic valve, which he made while experimenting with an ox’s heart.
This is how Alastair Sooke* describes Leonardo's finding.
"Intrigued by the way the aortic valve opens and closes to ensure blood flows in one direction, Leonardo set about constructing a model by filling a bovine heart with wax. Once the wax had hardened, he recreated the structure in glass, and then pumped a mixture of grass seeds suspended in water through it. This allowed him to observe little vortices as the seeds swirled around in the widening at the root of the aorta. As a result, Leonardo correctly posited that these vortices helped to close the aortic valve. Yet because he never published his far-sighted research, this remained unknown for centuries."
Alastair Sooke looks through the ultimate Renaissance man’s anatomical sketchbooks – scientific masterpieces full of lucid insights into the functioning of the human body.
Long before the Renaissance, in China around 2700 BC, people rightly recognized goiters as a problem, but they did not understand their source; goiters are enlarged thyroid glands, yet the thyroid itself went unmentioned. Da Vinci created the very first depiction of the normal thyroid, and in so doing recognized it as an anatomical organ and not simply a pathological aberration.
Leonardo's study of the human skeleton, and a dissection of the human fetus.
Another contribution of Leonardo is to the development of artificial limbs and synthetic organs. His studies of how limbs and organs work have influenced scientists today to create “replacements” for body parts so that people can function normally.
To think that the contact lens is a recent invention is not accurate: Leonardo was the first to think up the idea of the contact lens. He drew the model in his notebook in 1508, which led to the invention of the contact lens in 1808. Today, millions of people use contact lenses for their convenience in activities such as sports, fashion, and study.
Leonardo was the first to describe atherosclerosis and hepatic cirrhosis. He used molten wax to define the anatomical cerebral ventricles, and made a model glass aorta to study the flow of blood across the aortic valve, using water containing grass seeds to observe patterns of flow. He described the coronary sinuses almost 200 years before Valsalva gave them his name, and, 120 years before Harvey, was surely only a heartbeat away from grasping the idea of the circulation of the blood.
Leonardo compiled a series of 18 mostly double-sided sheets exploding with more than 240 individual drawings and over 13,000 words of notes. He dissected some 30 corpses for his anatomical drawings and studies.
References and Readings
Medical Impact - Leonardo Da Vinci
Leonardo da Vinci's groundbreaking anatomical sketches - BBC
History of Medicine: Leonardo Da Vinci and the Elusive Thyroid ...
Jones, R. (2012). Leonardo da Vinci: anatomist. PubMed Central (PMC), National Institutes of Health. https://www.ncbi.nlm.nih.gov |
Construct an equilateral triangle. Then construct one of its altitudes. If the sides each have length 1, how long is the altitude?
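A quick numeric check of the altitude construction, sketched in Python (assuming unit side length as stated):

```python
import math

def equilateral_altitude(side):
    # The altitude bisects the base, leaving a right triangle with
    # hypotenuse `side` and one leg `side / 2`; Pythagoras gives the rest.
    return math.sqrt(side ** 2 - (side / 2) ** 2)

print(equilateral_altitude(1))  # → 0.8660254037844386, i.e. sqrt(3)/2
```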
1. Write a proof. AB*BE = CB*BD. Prove triangle ABC is congruent to triangle DBE. 2. A parent group wants to double the area of a playground. The measurements of the playground are width is 2W and the length is 2L. They ask you to comment. What would you say? 3. Find the length of the altitude drawn to the hypotenuse.
Prove that an interior angle bisector of any triangle divides the side of the triangle opposite the angle into segments proportional to the adjacent sides.
Solve the triangle in which a = 36, b = 24, and B = 25° (this is an ambiguous case).
Please solve. See the attached file for diagrams. Question 1 The preferred seating area at the Music Theatre is the shape of a parallelogram. Its base is 34 yd and its height is 39.6 yd. Find the area. Question 2 The diagonal of a small pasture measures square root 12,617 feet in length. Find the le
Write a brief description of Lobachevsky's geometry.
Solve. A triangle has three angles, R, S, and T. Angle T is 40° greater than angle S. Angle R is 8 times angle S. What is the measure of each angle?
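The angle problem above reduces to a single linear equation; a minimal sketch:

```python
# Let S be angle S in degrees; then T = S + 40 and R = 8 * S.
# The angles of a triangle sum to 180: S + (S + 40) + 8 * S = 180.
S = (180 - 40) / 10  # collect terms: 10 * S = 140
T = S + 40
R = 8 * S
print(R, S, T)  # → 112.0 14.0 54.0
```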
3 congruent circles with radius 5 are tangent to each other. The circles are enclosed in an equilateral triangle. What is the perimeter of the triangle?
A wire length L cm, is cut into 2 parts. One piece forms a rectangle whose length is twice its width and the other piece forms an equilateral triangle. How should the wire be cut so that the total area is a A) maximum B) minimum
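A numeric scan illustrates the wire problem; the wire length of 100 cm is an assumed example value, not part of the original question:

```python
import math

def total_area(w, wire_len):
    # Rectangle with width w and length 2w uses 6w of wire; the rest
    # forms an equilateral triangle of side s = (wire_len - 6w) / 3.
    s = (wire_len - 6 * w) / 3
    return 2 * w ** 2 + (math.sqrt(3) / 4) * s ** 2

wire_len = 100  # assumed length in cm for illustration
ws = [i * (wire_len / 6) / 100_000 for i in range(100_001)]
areas = [total_area(w, wire_len) for w in ws]
w_min = ws[areas.index(min(areas))]   # interior minimum
w_max = ws[areas.index(max(areas))]   # maximum at an endpoint: all rectangle
print(round(w_min, 3), round(w_max, 3))  # → 7.735 16.667
```

Calculus gives the interior minimum exactly at w = √3·L/(12 + 6√3); the maximum uses all of the wire for the rectangle.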
A) Verify the following identities: i) (sin 2theta)/(sin theta) - (cos 2theta)/(cos theta) = sec theta; ii) cos 2x = (cot^2 x - 1)/(cot^2 x + 1). And iii) use logarithms and the law of tangents to solve the triangle ABC, given that a = 21.46 ft, b = 46.28 ft, and C = 32° 28' 30"
A) Find the area of the isosceles triangle in which each of the equal sides is 14.72 in and the vertex angle is 47° 28'. B) Find the radius of the inscribed circle and the radius of the circumscribed circle for the following oblique triangle: a = 12.7, b = 21.5, and c = 28.6
Using LOGARITHMS, find the area of the following triangles : a) a = 12.7, b = 21.5, and c = 28.6 b) c = 426, A = 45° 48' 36 " , and B = 61° 2' 13 ".
Euclidean Geometry (II) Computing the Volume of a regular tetrahedron of edge length 1 Explain how to compute the volume of a regular tetrahedron of edge length 1.
Computing the Volume of a regular tetrahedron of edge length 2(alpha). Explain how to compute the volume of a regular tetrahedron of edge length 2(alpha).
If there are n points on the circumference of a "cake" and each pair of these points is joined by a line or "cut" Let Wn be the number of regions. In the attached picture W4 = 8 (points ABCD). If we add point E and join it to every other point. Consider how many regions the segment AED is split into and then triangles ABC and AB
I need help writing a proof of the formulas of the volumes of two regular polyhedra (Platonic solids): (1) a tetrahedron and (2) an octahedron. I then have to use those formulas to find the volumes given a side length of 1.
Solve the triangle, round lengths to the nearest tenth and angle measures to the nearest degree. Please show work and see attachment.
Please see attachment and show work.
See the attached file. Solve the triangle; round lengths to the nearest tenth and angle measures to the nearest degree. 1. B = 43°, C = 107°, b = 14 2. C = 110°, a = 5, b = 11 3. B = 54°, C = 112°, b = 35 Find the solution of 2 cos(theta) + 1 = 0.
Determine graphically the vertices of the triangle , the equations of whose sides are y=x; y=0; 2x+3y=10
One is the square root of 5-3, One is the square root of 5+3, One is 4
Please see the attached file for full question.
See attached file for full problem description.
Draw a diagram of a Saccheri quadrilateral ABDC, where (a) A and B are a pair of consecutive vertices (b) sides AD and BC are a pair of opposite sides (c) angles A and B are right angles (d) sides AD and BC are congruent. Then let M be the midpoint of AB, and drop a perpendicular from M to CD with foot N. Once
Please see the attached file for the fully formatted problems.
The hypotenuse of a right triangle is 12 and the area is 16√5. Find the length of the legs of the triangle.
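One way to recover the legs, as a sketch: with hypotenuse 12 and area 16√5, the legs a and b satisfy a² + b² = 144 and ab = 32√5, so (a ± b)² = 144 ± 64√5.

```python
import math

p = math.sqrt(144 + 64 * math.sqrt(5))  # a + b
q = math.sqrt(144 - 64 * math.sqrt(5))  # a - b
a, b = (p + q) / 2, (p - q) / 2
print(a, b)  # ≈ 8.944 and 8.0, i.e. the legs are 4*sqrt(5) and 8
```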
Surveying: Surveyors sometimes use similar triangles to measure inaccessible distances. A surveyor could find distance AB by setting up similar triangles ABC and EDC. Assuming all lengths may be directly measured to set up a proportion and solve for AB. 17. How does the surveyor make ABC similar to EDC? 18. Set up a prop
6. What is the volume of a regular square pyramid which has a total area of 360 if the square base is 10 on a side. 9. How long is the edge of a cube whose total area is numerically equal to its volume? 15. A cube has a cylinder inscribed inside of it. That cylinder has a sphere inscribed inside of it. What is the rati
Geometry has many practical applications in everyday life. Estimating heights of objects, finding distances, and calculating areas and volumes are commonplace. One of the most fundamental theorems in geometry, the Pythagorean Theorem, allows us to make many of these calculations. The Pythagorean Theorem states that the square of
Given triangle ABC with A = 38°, b = 25, and c = 32, find the length of side a. Given triangle ABC with a = 4, b = 5 and c = 7, find the approximate size of the angle γ. |
What is the Phonics Screening Check?
- Children in Year 1 throughout the country will all be taking part
- phonics screening check during the same week in June
- Children in Year 2 will also take the check if they did not achieve the required result when in Year 1 or they have not taken the test before.
- The phonics screening check is designed to confirm whether individual children have learnt sufficient phonic decoding and blending skills to an appropriate standard.
What Happens During the Test?
- The test contains 40 words
- Each child will sit one to one and read each word aloud to a teacher
- The test will take approximately 10 minutes per child
- The list of words the children read is a combination of 20 real words and 20 pseudo words (nonsense words).
Pseudo Words (Nonsense words)
The pseudo words will be shown to your child with a picture of an alien. This provides the children with a context for the pseudo word which is independent from any existing vocabulary they may have. Pseudo words are included because they will be new to all pupils; they do not favour children with a good vocabulary knowledge or visual memory of words. |
This is the second installment in my look at one enigmatic feature on Mars as seen by all its orbiters through the more than thirty years of spacecraft observations. The feature called "White Rock" was first spotted in Mariner 9 images in 1972, a strange, relatively light-toned splotch of something sitting within a crater named Pollack near Mars' equator. Mariner 9 seems to me to belong to the "prehistory" of Mars exploration, and I don't think it's just because Mariner 9 happened before I was born. Mariner 9 performed a useful survey of Mars, but its image data was surpassed and supplanted by the two Viking orbiters, which operated at Mars from 1976 to 1980.
Even though the Viking data is now three decades old, it is still relevant and widely used by Mars scientists. The lasting accomplishment of the Viking orbiter missions was to produce globally complete data sets that were carefully assembled into digital maps of the entire globe of Mars. The Viking mosaics provided base maps and context images for just about every subsequent study of the planet; it's only very recently that better-quality global image maps have become available, from Mars Global Surveyor's wide-angle MOC camera and Mars Odyssey's THEMIS. (You can download that data here, or wander around it using Google Mars. To wander around the Viking global maps, you can visit the PDS Map-a-Planet website.) Even with the advent of the MOC and THEMIS maps, Viking is still the only approximately true color global data set that is readily accessible.
There were two main Viking mosaics, a black-and-white map produced at 256 pixels per degree (which translates to a resolution of 231 meters per pixel at the equator) and a color map produced at 64 pixels per degree (which translates to 925 meters per pixel at the equator). Here's what Pollack crater, and White Rock, look like in the Viking global mosaics:
NASA / JPL-Caltech
'White Rock' from Viking global mosaic
These two views of the bright feature colloquially referred to as "White Rock" are cropped from global image maps created from Viking Orbiter data captured by the two spacecraft from 1976 to 1978. Both images cover areas two degrees square, the grayscale image at a resolution of 256 pixels per degree and the color image at a resolution of 64 pixels per degree.
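The pixels-per-degree figures quoted above convert to meters per pixel with simple arithmetic. A sketch, assuming the IAU mean radius for Mars (the exact radius used for the mosaics may differ slightly, which accounts for small rounding differences):

```python
import math

MARS_MEAN_RADIUS_KM = 3389.5  # assumed mean radius; the mosaic's value may differ

def meters_per_pixel(pixels_per_degree):
    # One degree of longitude at the equator spans circumference / 360.
    km_per_degree = 2 * math.pi * MARS_MEAN_RADIUS_KM / 360
    return km_per_degree * 1000 / pixels_per_degree

print(meters_per_pixel(256))  # ≈ 231 m/px
print(meters_per_pixel(64))   # ≈ 924 m/px (the caption rounds to 925)
```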
Valuable as those global data sets are, they short-change the capability of the Viking orbiters. There are many areas of Mars that Viking imaged with resolutions of 25 meters per pixel or even higher, nearly ten times better than the highest resolution offered by the global mosaic. Using the wonderful map interface hosted by Arizona State University, I dug up the four images required to make this mosaic of White Rock, captured through a red filter with a resolution of roughly 30 meters per pixel.
NASA / JPL-Caltech / Emily Lakdawalla
High-resolution view of White Rock from Viking
Viking Orbiter 1 captured the four images that compose this view of White Rock, Pollack Crater, on September 21, 1978.
NASA / JPL-Caltech / Space Science Institute
Complex bright-dark boundary on Titan
This Cassini view of Titan from October 26, 2005 shows an area about 500 kilometers across. There are linear features oriented in an east-west direction (from top left to bottom right). Scientists speculate that the orientation of the features could be wind streaks.
This high-resolution Viking image reveals much more subtle detail in the tonal variations in the crater floor surrounding White Rock than is visible in the Mariner 9 image. However, I can't say that the Viking images really improve much on Mariner 9 to illuminate what the heck White Rock is. One detail you can discern from the Viking images is that windblown deposits probably play some role in the appearance of the feature, because there appear to be windswept tails trailing off to the southeast of the little bits of bright material, especially near the top of White Rock. These remind me a lot of the similarly vague windswept features trailing from bright spots visible in Cassini images of the boundary between bright and dark areas on the surface of Titan (an example of which is at right).
There is one valuable lesson to be learned from the Viking images of White Rock which is best illuminated by the tiny little low-resolution color view in the mosaic above. The lesson: White Rock really isn't white, which is to say, it's not exactly that bright. It looks really bright because it is totally surrounded by unusually dark material that seems to fill the southern half of the floor of Pollack crater. But its apparent brightness is actually an optical illusion; if you compare the pixel values in the "White Rock" area with the pixel values in the crater floor in the northern part of Pollack, you'll see that they're pretty much the same; White Rock is no more white than the rest of Mars. This is important, because some early attempts to explain White Rock involved unusual deposits of very light-colored minerals like carbonates or salts. You don't need to invoke such unusual stuff if "White Rock" is not different in color from other nearby areas of Mars.
Here's a parting view of White Rock from Viking. Most of the Viking Orbiter views were captured pointing straight down at Mars, giving the bird's-eye view that is best for making global maps. However, there were some shots where Viking pointed significantly off-nadir. These give pretty perspective views across the surface. Here, Viking was looking toward the dawn terminator (off the image at left), so we see White Rock not long after sunrise. This image is pretty but there's also another valuable lesson contained within it. Notice how most of the craters in the image are pretty dramatically lit; the low-angle early morning sunlight beams down on their western walls, while their eastern walls are darkly shadowed. Now look at the White Rock feature itself. There is a tiny hint of shadow, but not much. That tells you that whatever topography White Rock has, it is much less dramatic than the topography found in the walls of the crater; it's a pretty flat deposit, or outcropping, or whatever it is.
NASA / JPL-Caltech
White Rock from Viking: off-nadir view
Viking Orbiter 1 took the two images that compose this mosaic of the region around "White Rock" and Pollack crater on February 25, 1978. Viking looked off-nadir to capture this view, toward its west, with the Sun lighting the scene dramatically from the east.
The low angle of the sunlight also brings much subtler topographic features into relief. All over the place, you can see evidence of low valleys carved into the landscape. They're almost invisible under high Sun, but with low Sun you can see that this landscape, although very heavily cratered (and thus very ancient), still bears the marks of a historical period marked by the action of liquid water flowing across the surface.
In the next installment, we'll fast forward through almost twenty years without a successful Mars mission, to Mars Global Surveyor.
Finally, I'd like to point to a similar (and, characteristically, much more poetical) effort to look at one spot on Mars through all the orbiters over at Cumbrian Sky. |
Juvenile Paget disease
is a very rare condition that affects bone growth. This condition causes bones to be abnormally large, misshapen, and easily broken (fractured). Signs and symptoms usually appear in infancy or early childhood. As bones grow, they become weaker and more deformed. This condition affects the entire skeleton, resulting in widespread bone and joint pain. The bones of the skull tend to grow unusually large and thick, which can lead to hearing loss. The condition also affects bones of the spine (vertebrae), leading to abnormal curvature of the spine. Additionally, weight-bearing long bones in the legs tend to bow and fracture easily, which can interfere with standing and walking. Juvenile Paget disease is caused by mutations in the TNFRSF11B gene and is inherited in an autosomal recessive fashion. Source: Genetic and Rare Diseases Information Center (GARD), supported by ORDR-NCATS and NHGRI. |
It is with sadness that we recognize the passing of Dr. Frederick Sanger. Sanger is known to molecular biologists and biochemists worldwide for his DNA sequencing technique, which won for him the 1980 Nobel prize in Chemistry.
Also noteworthy, Sanger’s laboratory accomplished the first complete genome sequence, that of a viral DNA genome more than 5,000 base pairs in length.
The 1980 prize was Sanger’s second Nobel award, his first awarded in 1958 for determining the chemical structure of proteins. In this work, Sanger elucidated not only the amino acids that comprised insulin but also the order in which the amino acids occurred.
About Sanger Sequencing
Sanger DNA sequencing is also known as the chain-termination method of sequencing. The Sanger technique uses dideoxynucleotides or ddNTPs in addition to typical deoxynucleotides (dNTPs) in the reaction. ddNTPs result in termination of the DNA strand because ddNTPs lack the 3’-OH group required for phosphodiester bond formation between nucleotides. Without this bond, the chain of nucleotides being formed is terminated.
Sanger sequencing requires a single-stranded DNA, a DNA primer (either radiolabeled or with a fluorescent tag), DNA polymerase, dNTPs and ddNTPs. Four reactions are set up, one for each nucleotide, G, A, T and C. In each reaction all four dNTPs are included, but only one ddNTP (ddATP, ddCTP, ddGTP or ddTTP) is added. The sequencing reactions are performed and the products denatured and separated by size using polyacrylamide gel electrophoresis.
This reaction mix results in fragments of various lengths representing, for instance, the location of each A nucleotide in the sequence: while there is more dATP than ddATP in the reaction, there is enough ddATP that, at each position calling for an A, some fraction of the growing strands incorporate a ddATP instead, resulting in chain termination. Separation by gel electrophoresis reveals the sizes of these ddATP-terminated fragments, and thus the locations of all A nucleotides in the sequence. The G, C and T reactions provide the same information for the other bases.
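The logic of reading one base's reaction can be sketched as a toy simulation; the template and its complement below are invented examples, not a real sequence:

```python
template = "TAGCTTACGAT"      # template strand, read 3'->5' by the polymerase
synthesized = "ATCGAATGCTA"   # newly made complementary strand, 5'->3'

def termination_lengths(strand, base):
    # In the ddATP tube, some chains terminate at every position where an A
    # is incorporated; each such position yields a fragment of that length.
    return [i + 1 for i, b in enumerate(strand) if b == base]

# Fragment sizes seen on the gel for the A reaction:
print(termination_lengths(synthesized, "A"))  # → [1, 5, 6, 11]
```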
The Maxam and Gilbert DNA sequencing method had the advantage at the time of being used with double-stranded DNA. However, this method required DNA strand separation or fractionation of the restriction enzyme fragments, resulting in a somewhat more time-consuming technique, compared to the 1977 method published by Sanger et al.
Dr. Sanger was born in Gloucestershire, U.K. in 1918, the son of a physician. Though he initially planned to follow his father into medicine, biochemistry became his life-long passion and area of research endeavor. Sanger retired at age 65, to spend more time at hobbies of gardening and boating.
Sanger, F. , Nicklen, S. and Coulson, A.R. (1977) DNA sequencing with chain-terminating inhibitors. Proc. Natl. Acad. Sci. USA 74, 5463-7.
Maxam, A.M. and Gilbert, W. (1977) A New Method for Sequencing DNA. Proc. Natl. Acad. Sci. USA
There is something special about seeing the original Sanger publication from 1977, available here as a scan.
Kari Kenefick |
Biblical principle: measurable results and rewards relate to accountability: “…reaching forth unto those things which are before, I press toward the mark for the prize of the high calling of God…”
The prime method of learning is achieved by children working through booklets (PACEs) in various subjects, reading supplied texts, filling in blanks in questions, linking words with definitions, writing sentences and short essays, doing simple science experiments, searching dictionaries and atlases, solving mathematical problems, using computers, watching videos and in many other ways.
Learning, in the ACE system, depends upon a number of interdependent factors. In order to explain these factors, the illustration of a donkey pulling a man on a cart is used and a series of ‘laws’ formulated. The following Five Laws of Learning sum up the ACE academic philosophy as illustrated by the donkey and cart:
Law #1: How heavy is the load?
The pupil must be on a level of curriculum where he can perform. Biblical principle: all children are different:”…For unto whomsoever much is given, of him shall much be required…”
Law #2: How long is the stick?
The pupil must set achievable goals he can complete in a prescribed period of time. Biblical principle: reflects good judgment: “For which of you … sitteth not down first, and counteth the cost…?”
Law #3: How effective are the controls?
The pupil must be controlled and motivated to assimilate, use, or experience the material. Biblical principle: necessity for discipline, guidance, and responsible leadership: “Train up a child in the way he should go…”
Law #4: How hungry is the donkey?
The pupil must be motivated to learn. Biblical principle: motivation is that inner desire prompted by the concerned parent-teacher: “Whatsoever ye do, do it heartily as to the Lord…” |
For Home Learning this week:
- Try to read your reading book or another text daily and sign your reading record when you do. Try spotting some of your Phase 5 sounds in your writing if you can!
- Try reading and using our word of the week - devious
- Try the Maths number bond activity saved below.
- Phonics - practise reading words with magic e. You can use phonicsbloom.com to play some Phase 5 games.
- Spellings - The spellings for next Monday are the days of the week again. They were very tricky and I think we need one more week to practise them. Remember to use a capital letter at the beginning of each day and remember that each spelling ends with 'day'. If you already know them, you could look at the months of the year. You could also try ordering the days of the week and months of the year. There are great songs and videos on YouTube to help practise too.
You could try an activity from the Summer 1 enrichment grid too.
Have a great week :) |
(PhysicsToday) Zhenda Xie, Yan-Xiao Gong, and Shi-Ning Zhu of Nanjing University in China have successfully demonstrated the transmission of an entangled state through air from one drone to another.
Using quantum encryption to send confidential information through entangled photons offers much better protection than today’s encryption, but several challenges remain. Until now, researchers have transmitted quantum information as far as a few hundred kilometers using fiber-optic cables and 1200 km using satellite arrays. But constructing a communication network connected by fiber is costly and prone to loss problems; and a network of satellites suffers from low transmission rates and is only usable at night.
Aboard one drone in this study, a pump laser shines on an inorganic crystal that is specially designed for nonlinear optical applications. The process, known as spontaneous parametric down-conversion, generates a pair of lower-energy entangled photons. Once the entangled pair is produced, it’s collimated and sent through a series of wave plates to prepare it for transmission through free space.
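Energy conservation constrains the down-converted pair: 1/λ_pump = 1/λ_signal + 1/λ_idler. A sketch with assumed illustrative wavelengths (not the values from the Nanjing experiment):

```python
def idler_wavelength_nm(pump_nm, signal_nm):
    # Photon energy is proportional to 1/wavelength, so energy conservation
    # for pump -> signal + idler gives 1/p = 1/s + 1/i.
    return 1 / (1 / pump_nm - 1 / signal_nm)

# Degenerate case: a 405 nm pump photon splits into two 810 nm photons.
print(round(idler_wavelength_nm(405, 810), 6))  # → 810.0
```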
The entangled pair is then distributed to a second drone and a ground station known as Alice. The second drone works as an optical relay for the single photons and transmits the quantum information to Alice’s recipient station, Bob.
Because the drones are inexpensive and mobile, the optical relay could comprise more than two drones to transmit information between users who are more than a few kilometers apart. |
Omphalitis is the medical term for a severe infection of the umbilical cord stump. In the United States, omphalitis is very rare, thanks to the infection control procedures in hospitals and routine standards of umbilical cord care.
Yet, these nasty infections do happen in about 1 in 200 newborns. Read on to find out who's at risk and why prompt medical attention is so important.
What causes omphalitis?
The main cause of omphalitis is exposure to bacteria during delivery, when the umbilical cord is cut after birth, or in the days afterward at home.
Here are some of the main infection-causing bacteria to keep in mind:
- Gram negative bacteria
Though tetanus can be a cause of omphalitis, it is rarely to blame for the condition in the United States. It tends to occur only when, for cultural reasons, dirt or even dung (which can be contaminated with tetanus spores) is applied to the cord during a birth ritual. Infants with neonatal tetanus are likely to acquire the cord infection from unclean delivery and cord care practices.
Proper home care after delivery can lower the risk of omphalitis, so follow your pediatrician’s guidance carefully.
Who is at risk for omphalitis?
Omphalitis strikes in the days and early weeks following delivery, but it's rarely seen outside of the neonatal period.
While research doesn’t point to babies having an increased risk of infection based on their sex, there are some things that can make a newborn more susceptible, including:
Birth outside the United States or a home birth (in low-resource countries, omphalitis affects up to 8 percent of infants born in hospitals and as many as 22 percent of infants born at home)
Low birth weight
Chorioamnionitis (a bacterial infection of the amniotic membranes and fluid that surround and protect your baby)
- Prolonged rupture of the membranes
If your baby has omphalitis, you will most likely notice the telltale signs of infection (like pus and redness) three to five days after birth in a preemie, and five to nine days in a full-term baby.
What are the signs of omphalitis?
Keep a careful eye on your little one's belly button, checking for inflammation and discoloration around the umbilical cord area during your baby's first few weeks of life.
Here are omphalitis signs and symptoms to watch for:
Pus or a fluid-filled lump on or near the umbilical cord stump
Red skin spreading from around the navel
Cloudy foul-smelling discharge from the infected region
Fever (Caution: Do not give your baby any fever medicine without approval from the pediatrician)
Bleeding around the umbilical cord stump
Irritability, lethargy and decreased activity
Since any of these symptoms can occur early on or late in the infection, it's important to see your pediatrician right away if you suspect a problem.
Sepsis is the most common complication from omphalitis and can progress to something much more serious and harmful, which is why early detection and treatment of omphalitis is so crucial.
How is omphalitis diagnosed?
In most cases, the doctors can diagnose omphalitis simply by sight when examining the baby, or in a photograph.
How is omphalitis treated?
If you suspect that your baby has developed an umbilical cord infection, call your pediatrician as soon as possible.
The standard treatment for omphalitis is hospitalization for a few days to monitor your little one and administer antibiotics that fight the bacteria, including gram negative bacteria, though surgery may be needed in some more serious cases.
As long as you remain alert and act quickly, though, your new arrival should bounce back without complications.
How can you prevent omphalitis?
Both the American Academy of Pediatrics (AAP) and the American College of Obstetricians and Gynecologists (ACOG) advise "dry cord care" (aka keeping the stump clean and letting it dry naturally, without using alcohol or ointments), and it's now common practice in American hospitals.
Prevention strategies include:
Washing hands with clean water and soap before delivery and before cutting and tying the cord
Laying the newborn on a clean surface and cutting the cord with a sterile instrument
Using a clean tie or clamp on the cord
An immunization for tetanus during pregnancy
Note: Some cultures have traditional practices of applying ash, oil or other substances to the umbilical cord, all of which can increase the baby’s risk of infection and omphalitis.
So, remember, keep your baby's belly button clean, dry and exposed to air. If you see any oozing, gently clean it away with a wet cotton swab and dry it carefully (though it's normal for the belly button to ooze a little bit after the cord falls off). Do not use rubbing alcohol. Always talk with your pediatrician if you have any questions or concerns about your baby’s umbilical cord. Thanks to telemedicine, a quick photo is often all your doctor will need to check for omphalitis. |
What is thrush?
Thrush occurs when the fungus Candida albicans grows rapidly on the mucous membranes of the oral cavity. Candida is found normally on the skin, in the mouth and elsewhere, but bacteria and other normal inhabitants of the body usually keep fungal growth in check. However, if the balance of these usually harmless organisms is altered, Candida can multiply, resulting in fungal overgrowth and several medical concerns, including oral thrush.
What are the causes of thrush?
Oral thrush and other Candida infections can occur when the immune system is compromised by disease or suppressed by medications, or when antibiotics change the normal balance of microorganisms in the body. Prolonged or frequent use of antibiotics can wipe out the “friendly” bacteria that normally keep yeast in check, resulting in thrush. Medical conditions related to the incidence of oral thrush include:
- Chronic mucocutaneous candidiasis, a group of overlapping syndromes that present a pattern of persistent, severe, and diffuse cutaneous candidal infections, usually in the skin, nails and mucous membranes
- Vaginal yeast infections
- Dry mouth
- Drugs that can wipe out intestinal flora or encourage overgrowth of yeast, such as steroids and estrogen, either in the form of birth control pills or hormone replacement therapy
Who is likely to get it?
Infection is common in infants and toddlers whose immune systems are not yet at peak strength. Nursing babies with thrush can spread the infection to their lactating mothers. Thrush is also more likely if you:
- Are elderly
- Have a weakened immune system
- Use corticosteroids, antibiotics or birth control pills
- Wear dentures
What are the symptoms of thrush?
Among the common symptoms of thrush are white lesions on your tongue and inner cheeks and occasionally, the roof of your mouth, gums and tonsils. Signs and symptoms of thrush can include:
- Discomfort when swallowing or having a hard time swallowing
- A feeling of food sticking in your throat or in the mid-chest
- Fever (if the infection moves past your esophagus)
Newborns with oral thrush usually become symptomatic in the first few weeks. They may have white lesions in their mouths, experience difficulty feeding or be cranky and irritable. Mothers whose breasts are infected with Candida may experience symptoms such as:
- Unusually red or tender nipples
- Taut, shiny skin on the areola
- Unusual discomfort when nursing or painful nipples between feedings
- Stabbing pains that feel as though they pierce the breast
How is it diagnosed?
Oral thrush is most often diagnosed by simply having a doctor or dentist examine the white lesions; occasionally a tiny sample is examined under a microscope. If thrush occurs in older children or teens who have no other risk factors, it is advisable to see a doctor, as an underlying condition such as diabetes may be involved. Thrush that colonizes the esophagus can be dangerous. If this happens, a doctor may recommend a throat culture, endoscopic examination, or an upper GI test such as a barium swallow.
What is the conventional thrush treatment?
Over-the-counter antifungal medications and topical creams are often recommended as thrush treatments, as is taking acidophilus or eating foods containing probiotics. If thrush is found in a nursing baby, keeping nipples and pacifiers well cleansed is also important. For those with HIV/AIDS, prescription antifungal medications such as amphotericin B may be used when other medications do not prove helpful.
What are the therapies Dr. Weil recommends?
In addition to oral or topical antifungal treatments, other natural therapeutic options include taking a proven probiotic product (such as Lactobacillus GG) to help restore normal gut flora, cutting back on refined sugars, avoiding dairy products, and eating one clove of garlic per day, preferably raw. In addition, take probiotics whenever you are taking antibiotics. An herb that may help is thyme, which is approved in Europe for use in upper respiratory infections and is effective against oral thrush.
How is thrush prevented?
Some simple lifestyle changes can help prevent thrush:
- See your dentist regularly and practice good oral hygiene
- Use probiotics (from yogurt or supplements) when you take antibiotics
- Treat any other yeast infections immediately
- Quit smoking
- Limit consumption of processed sugars
ae.04 Erosional Landforms
- Statement of inquiry
- Scientific and technical innovators need to understand how power affects the processes that occur within systems.
- Factual questions [Remembering facts and topics]
- What is rotational slip? | freeze-thaw weathering / frost shattering? | abrasion? | plucking? | nivation?
- How are Cirque | U Shaped Valley | Hanging Valley | Arête | Pyramidal Peak formed?
- Conceptual questions [Analysing big ideas]
- Is there such a thing as a 'textbook landform'?
- Aims of this lesson
- To have knowledge and understanding of the erosional processes involved in the formation of glacial landforms.
Task 1 - Erosional processes
- Explain each of the following processes (a diagram or series of diagrams may help):
- rotational slip
- freeze-thaw weathering / frost shattering
Note making frame
Task 2 - Landforms created by erosion - What do they look like?
cirque | U shaped valley | hanging valley | arête | pyramidal peak
- Add to your note making frame a description of each of the landforms.
Task 3 - Landforms created by erosion - What do they look like in Google Earth on web?
- Explore each of these placemarks in Google Earth for web. These locations should help you add geographical examples to your note making frame.
Task 4 - Landforms created by erosion - How are they formed?
- Use these videos (click on the Time for Geography icons) to add notes to the 'Process of formation' column in the note making frame. Try to split the whole process into three steps and have one step of the process on each line.
Adaptation of Plants to Water Scarcity, Saline & Aquatic Environments:
What is Adaptation?
It is the development of certain features in an organism in response to a particular environment which may improve its chances of survival. Organisms develop various morphological, physiological and behavioural adaptations which enable them to exploit their environment successfully, increasing their chances of survival. These special features evolve over a long period of time through the process of natural selection. The ultimate aim of adaptation is to improve the organism's chances of survival and reproduction.
Adaptation of Plants to Water Scarcity:
The plants of hot deserts are adapted to survive in dry soil conditions and high temperatures. Plants that evade dry conditions by completing their life cycle in the brief wet season are known as ephemerals, while drought-resistant plants show the following adaptations-
- They have deep tap roots that can reach up to the water table in arid climates as in Prosopis (mesquite), palms, and some species of Acacia.
- Stomata are sunken in Nerium oleander.
- Leaves are deciduous, leathery, with a waxy cuticle that reduces water loss through transpiration.
- In Cacti, leaves are reduced to spines, and stems are modified into fleshy and spongy structures. Some cacti have expandable stems for storing water and have spreading root systems in the surface layer of the soil.
Adaptation of Plants to Saline Environment:
Halophytes are plants of the saline environment that are adapted to grow in a high concentration of salt in soil or water. They occur in tidal marshes and coastal dunes, mangroves, and saline soils.
- Halophytic plants, under hot and dry conditions, may become succulent and dilute the ion concentration of salts with water which they store in cells of stems and leaves.
- Mangroves which grow in marshy conditions can excrete salts through the salt glands on the leaves. Some mangroves can expel salts from roots by pumping excess salts back into soil.
- Dunaliella species (green and halophytic algae found in hyper saline lakes) can tolerate saline conditions by accumulating glycerol in the cells, which help in osmoregulation.
- Avicennia and Rhizophora (red mangrove) have special adaptations like pneumatophores, prop and stilt roots, and vivipary (seeds germinate while on the tree). The presence of pneumatophores (the respiratory roots) helps to take up oxygen from the atmosphere and transport it to the main roots. Prop and stilt roots in many species of mangroves give support to the plants in the wet substratum. Vivipary permits plants to escape the effect of salinity on seed germination.
Adaptation of Plants to Aquatic Environment:
Plants which remain permanently immersed in water are called hydrophytes. They may be submerged or partly submerged and have the following adaptations-
- They have the presence of Aerenchyma (larger air spaces) in the leaves and petioles which help to transport oxygen produced during photosynthesis and permit its free diffusion to other parts including roots located in anaerobic soil. This tissue also provides buoyancy to the plants.
- Eichhornia (water hyacinth) has the presence of inflated petioles which keep the plants floating on the surface of water.
- Roots are poorly developed or absent in free-floating hydrophytes like Wolffia, Salvinia, and Ceratophyllum.
- Various emergent hydrophytes (having leaves projecting above the water surface) have a continuous system of air passage which helps the submerged plant organs to exchange gases from the atmosphere through the stomata in the emergent organs.
- Ecological Succession or Biotic Succession
- Difference between Mitosis and Meiosis
- What kinds of organisms are grouped under Kingdom Protista? Would you consider this kingdom a natural one?
- Mendelism or Mendel’s Principles of Inheritance
- Origin and Evolution of Life
Cross-Origin Resource Sharing (CORS)
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell the browser whether a web application running at one origin has permission to access resources on a server at a different origin. A web application makes a cross-origin HTTP request whenever the requested resource has a different origin (domain, protocol, or port) than its own. In other words, CORS allows a restricted resource to be made accessible from another domain, with the server describing the permitted access in headers that the browser reads.
Same-Origin Policy
By default, resources from one domain are allowed to interact with resources of the same domain, but not with resources from a different domain; for example, XMLHttpRequest and the Fetch API follow this same-origin policy. For security reasons, browsers restrict cross-origin HTTP requests.
Sending the request and response:
The browser sends an OPTIONS request that includes an Origin header whose value is the domain of the requesting page. When a page at example.com wants to fetch data from service.example.com, the following request header is sent to service.example.com.
And the server at service.example.com responds with:
Access-Control-Allow-Origin: http://www.example.com, which allows that particular domain.
If the server responds with Access-Control-Allow-Origin: " * ", all domains are allowed access. This is the wildcard policy.
If the server does not allow the cross-origin request, the browser will show an error.
A wildcard policy is appropriate when the page and its APIs are intended to be accessible to everyone.
When one domain wants to interact with another domain's resources, the requesting side sends the following request headers:
Origin: The browser sends the OPTIONS request with the Origin header, identifying the requesting page's origin.
Access-Control-Request-Method: Tells the server which HTTP method will be used when the actual request is made, such as GET, POST, or DELETE.
Syntax for Access-Control-Request-Method:
Access-Control-Request-Method: <method>
Access-Control-Request-Headers: This request header is sent as part of a preflight request to list the headers the actual request will use.
Syntax for Access-Control-Request-Headers:
Access-Control-Request-Headers: <header-name>, <header-name>, ………
When the second domain responds to the first domain, it adds the following response headers:
Access-Control-Allow-Origin: The value will be a particular domain or *, as mentioned above, allowing that particular domain or all domains respectively.
Access-Control-Allow-Headers: Used in the response to a preflight request to indicate which headers can be used in the actual request. Typical values include Accept, Accept-Language, Content-Type, and X-PingOther.
The syntax for Access-Control-Allow-Headers:
Access-Control-Allow-Headers: <header-name>[, <header-name>]*
Access-Control-Max-Age: Gives the time in seconds for which the response to a preflight request can be cached without sending another preflight request.
Access-Control-Allow-Methods: The methods allowed when accessing the resource, such as GET, POST, OPTIONS, DELETE. For example:
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Expose-Headers: Indicates which headers can be exposed as part of the response.
How a cross-origin request works:
The user opens a web page that references resources on another domain.
The user's browser creates a connection to the second domain, adding the Origin header to the request.
The second domain responds with the Access-Control-Allow-Origin HTTP header.
If the response allows the first domain, the browser lets the page read the response.
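The exchange above can be sketched as plain functions. The snippet below is a minimal illustration, not a real web framework: the allowlist, max-age value, and function name are all made-up assumptions for the example.

```python
# Sketch of a server deciding how to answer a CORS preflight (OPTIONS) request.
# ALLOWED_ORIGINS and the max-age are hypothetical values, not a real service.

ALLOWED_ORIGINS = {"http://www.example.com"}   # hypothetical allowlist
ALLOWED_METHODS = "GET, POST, OPTIONS"

def preflight_response(request_headers):
    """Given preflight request headers, return the CORS response headers,
    or an empty dict when the origin is not allowed (the browser then
    blocks the cross-origin call)."""
    origin = request_headers.get("Origin", "")
    if origin not in ALLOWED_ORIGINS:
        return {}
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ALLOWED_METHODS,
        "Access-Control-Allow-Headers": request_headers.get(
            "Access-Control-Request-Headers", ""),
        "Access-Control-Max-Age": "600",  # cache preflight for 10 minutes
    }

# Browser side: headers a preflight for a POST with custom headers carries.
preflight = {
    "Origin": "http://www.example.com",
    "Access-Control-Request-Method": "POST",
    "Access-Control-Request-Headers": "Content-Type, X-PingOther",
}
print(preflight_response(preflight)["Access-Control-Allow-Origin"])
# → http://www.example.com
print(preflight_response({"Origin": "http://evil.example"}))
# → {} (no CORS headers, so the browser shows an error)
```

Note how the server echoes the specific Origin back rather than always using the wildcard, which is the safer pattern when only certain domains should have access.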
In this news summary, we will discuss why hairstyle—as in hair length—matters to people; herein, we draw from work by Tanaka (2018) to discuss why a woman’s hair length can be ‘a big deal’ from a scientific perspective. Without conducting a survey or searching the literature, a consensus we can all come to is that hair does matter, regardless of whether or not it should. Even medicine attests to this fact: one can argue that the medical specialty of dermatology exists partly because physicians in this branch of medicine treat conditions that are technically non-pathological, such as baldness. Similarly, the significance of hair is evident in the field of commerce; across numerous cultures, a person’s attractiveness is in part determined by their hair, hence the mass production of various hair products.
The human face can be described in terms of external and internal facial features. External facial features include the ears and the person’s hairstyle, while internal features of the face include the eyes, nose and mouth. Face perception can be defined as the ability to recognize an individual based on their facial features, and this ability is also known as facial recognition (1). As trivial as remembering or recognizing a person’s face may seem, the underlying neural mechanisms involved in the seemingly easy task are far from trivial; there are complex neuro-biological processes that underlie face perception.
An event-related potential refers to the minuscule voltages produced from the brain in response to sensory, cognitive or motor stimuli. For instance, the presentation of a picture, such as a human face, is exemplary of a sensory stimulus, and an event-related potential would be generated from merely seeing the picture. Electroencephalography is a widely used technique that can quantify an event-related potential, and many studies use this tool to investigate neural activity; moreover, this technique is non-invasive. In electroencephalography, a visual record of an event-related potential can be produced on an electroencephalogram (EEG), which essentially depicts the waveform for an event-related potential; a waveform can be thought of as a graph wherein voltage (in microvolts) is plotted against time (in milliseconds) (2–4). In Figure 1, a waveform for an event-related potential is presented.
Figure 1. Schematic of a waveform for an event-related potential (ERP); this schematic is also essentially an electroencephalogram (EEG). An ERP can be characterized by latency and amplitude; latency simply refers to duration, while amplitude refers to height (for instance, if the P1 in this schematic were to have a larger amplitude, the peak would be taller than shown). The vertical and horizontal axes correspond to the amplitude in microvolts (µV) and time in milliseconds (ms), respectively. The P1 component of an ERP is of a positive voltage and occurs at around 100 ms after exposure to a stimulus; the N170 component is of a negative voltage and occurs at around 170 ms after exposure to a stimulus. This schematic was modified from the following source: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.458.6801&rep1&type=pdf.
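To make the latency and amplitude terminology concrete, here is a toy sketch. This is not real EEG analysis code: the waveform is a made-up Gaussian bump standing in for a P1-like component, and the function name is our own.

```python
# Toy illustration of ERP "latency" and "amplitude": given a sampled
# waveform (voltage in µV at 1 ms steps after stimulus onset), a positive
# component's latency is the time of its peak and its amplitude the height.

def peak_latency_and_amplitude(waveform_uv, window_ms):
    """Find the most positive peak within a (start, end) time window in ms.
    waveform_uv[t] is the voltage t milliseconds after stimulus onset."""
    start, end = window_ms
    t_peak = max(range(start, end), key=lambda t: waveform_uv[t])
    return t_peak, waveform_uv[t_peak]

# Made-up waveform: a 5 µV Gaussian bump centered at 102 ms (P1-like).
wave = [5.0 * 2.718 ** (-((t - 102) ** 2) / 200.0) for t in range(300)]

latency, amplitude = peak_latency_and_amplitude(wave, (80, 140))
print(latency)              # → 102 (ms), the P1-like latency
print(round(amplitude, 1))  # → 5.0 (µV), the P1-like amplitude
```

A real analysis would average many trials and search a negative-going window for N170, but the peak-picking idea is the same.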
Previous studies have shown that both the P1 and N170 components of an event-related potential (ERP) are sensitive to facial recognition. The P1 component is believed to be responsive to low-level visual characteristics such as colour and contrast, which human faces possess. The N170 component, by contrast, is specifically sensitive to facial features (5). Furthermore, previous studies have suggested that the N170 component may be more sensitive to external facial features than to internal ones. Thus, the objective of Tanaka (2018) was to investigate whether the latency (i.e., duration) and amplitude of the P1 and N170 components of an event-related potential are affected by hair length during face perception.
Twenty-one study participants were exposed to long, medium, and short hairstyles on identical female faces in blocks that were ‘variable-size’ and ‘fixed-size’. Both the faces and the study subjects were Japanese; all the faces were presented in grey-scale and were unfamiliar to any of the subjects. In Figure 2, the six kinds of visual stimuli the subjects were exposed to are shown. The variable-size blocks corresponded to the top row, while the bottom row represents the fixed-size blocks. In the top row, the size of each face was identical, which made the overall size of the stimuli (i.e., face with hairstyle) different among the three faces (Figure 2). In the fixed-size blocks, the size of the overall stimuli was the same. Participants were exposed to the six kinds of stimuli because Tanaka (2018) wanted to investigate whether the sensitivity of the respective ERP components is evoked merely by hair length, or by the overall size of the stimuli. For instance, if the N170 component showed more sensitivity to long hair than to short hair, it would be unclear whether that response is due to hair length or overall stimulus size, as the overall stimulus size of the long-hair face in the variable-size block is larger than that of the short-hair face in the same type of block. Alternatively, if N170 was more responsive to the long-hair face in the fixed-size blocks than to the short-hair face of the same block type, it would be more logical to attribute the responsiveness to length of hair and not overall size of the stimulus, since the sizes of stimuli in the fixed-size blocks are the same.
Figure 2. Six representative stimuli that were used in the research by Tanaka (2018).
Each of the six stimuli was presented to each subject for 1000 milliseconds (ms); right after stimuli presentation, each study participant was given one second to judge whether each face had long, medium or short hair. EEGs for the three hairstyles in the fixed- and variable-size blocks were produced; latency and amplitude of the P1 and N170 components were compared among the six types of stimuli.
Tanaka (2018) showed that the latency of P1 was significantly affected by hair length, as the peak of P1 occurred significantly later for short hair in variable-size blocks compared to the other five types of stimuli. The amplitude of P1 was not significantly different among the six types of stimuli.
Similarly, the latency of N170 was significantly affected by hair length as the peak of N170 occurred earlier for long hair in variable-size blocks than for short hair in the same block type. For short hair, the peak of N170 occurred significantly earlier in the fixed-size blocks than in the variable-size blocks. No significant differences were observed among the six types of stimuli in terms of N170 amplitude.
Reflections from Tanaka (2018)
In the study, long hair attracted more attention than short hair, as the long-hair face was processed faster; the speedier processing corresponded to the shorter latency of the N170 peak for long hair than for short hair. Given that humans are social creatures, it is not surprising that people care about how they look to others, and hairstyle is a major item in the description of a person’s appearance. Wearing long hair can be flattering as it could draw others’ attention; this could be a reason why, cross-culturally, the use of hair extensions and wigs is commonplace in today’s society. This could also explain why hair loss problems that are essentially benign are among the common concerns brought to a dermatologist; while one may argue that such concerns pertain to non-pathological conditions, they are arguably “pathological” to the complainer’s state of mind. As in medicine and commerce, the non-triviality of hair is also evident in the legal world; typing ‘haircut lawsuit’ into a Google search engine would explain it all. While hair length may not necessarily be a ‘deal-breaker’ in everyone’s adjudication of a woman’s attractiveness, many men and women prefer long hair on women, and discoveries from studies like Tanaka (2018) tell us that there may be a neuro-biological explanation for this preference.
- Rhodes G, Jeffery L. Face Perception. In: Encyclopedia of the Mind. Thousand Oaks: SAGE Publications, Inc.; 2013.
- Sur S, Sinha V. Event-related potential: An overview. Ind Psychiatry J. 2009;
- Woodman GF. A brief introduction to the use of event-related potentials in studies of perception and attention. Atten Percept Psychophys. 2010 Nov;72(8):2031–46.
- Saavedra C, Bougrain L. Processing Stages of Visual Stimuli and Event-Related Potentials. Bordeaux, France; 2012.
- Tanaka H. Length of Hair Affects P1 and N170 Latencies for Perception of Women’s Faces. Percept Mot Skills. 2018;
Superconducting devices such as SQUIDs (Superconducting Quantum Interference Devices) can perform ultra-sensitive measurements of magnetic fields. Leiden physicists invented a method to 3D-print these and other superconducting devices in minutes.
‘Fabricating superconducting devices on a computer chip is a multi-step and demanding procedure, requiring dedicated facilities’, says Kaveh Lahabi, a physicist at Leiden University. ‘It usually takes days to complete.’
Lahabi and co-authors have developed a new approach, in which Josephson junctions, essential parts of SQUIDS, can be printed on almost any surface in mere minutes, within an electron microscope.
In this video, Lahabi and co-author Tycho Blom demonstrate their technique and discuss their recent article in ACS Nano.
Anatomy of a Drilled Bedrock Well
Using a drill rig, well drillers begin by drilling a hole about 9" in diameter through the overburden sediment overlying bedrock. When bedrock is encountered, drilling continues until competent bedrock is reached, generally between 10 and 20 feet. Steel casing is then installed in this hole and sealed to the bedrock. This casing seals the well from potential contaminants from surface infiltration. Drilling continues through the bottom of the casing until water-bearing fractures are encountered. Ground water fills the well to a level based on local geologic conditions. A submersible pump is then lowered into the well to bring water to the surface. The well casing protrudes out of the ground surface and is covered with a sanitary cap to prevent contamination. The water in the well above the pump is in storage and is available to be pumped out when needed. A bedrock well with low yield can still provide enough water for household use if the well boring itself holds enough water in storage to meet periods of peak demand.
Last updated on October 6, 2005
I need help creating a thesis and an outline on Biography of El Pipila. Prepare this assignment according to the guidelines found in the APA Style Guide. An abstract is required.
Biography of El Pipila. Juan Jose de los Reyes Martinez Amoro is better known to history by his nickname “Pipila” (Reding 61). He was a local hero of the city of Guanajuato in Mexico, who played a major role in Mexican independence. He was a “miner” in a small mining city who later emerged as a great fearless warrior and inspired many others to participate actively in the “war of independence” (61). During the Mexicans' war against Spain, the Spaniards barricaded themselves in the most fortified building of the city, the city’s grain store, with solid stone walls and enough space for archers to repel attackers. The Spaniards were safe and secure in that building, as the Mexicans were reluctant to attack, intimidated by the prospect of defeat echoed through the building's powerful walls.
However, El Pipila, regardless of his personal safety, rose to the occasion and answered his call of duty head-on. He strapped a large stone to his back, held a flaming torch in his hand and set out on his mission to wreak havoc on the cruel oppressors. After he reached the building, he crawled towards its “wooden doors,” the only known weak spot of the building, and set them on fire, allowing the insurgent army to enter the building and claim their first victory of the struggle for independence (61). Mexico was one of the richest and most important colonies of Spain’s kingdom during that time. Being part of the territory of New Spain, Mexico was the main source of gold and silver for the Spaniards and contributed greatly to the emergence of Spain as a great power.
During the Spanish rule, the people of Mexico were treated as slaves and third-class individuals. The condition of the people influenced a number of progressive personalities to think about gaining independence from Spain and the “need for political reform(s)” to found a new free country (66). Amongst these people was a non-traditional Catholic priest, “Miguel Hidalgo,” who did a lot of work for the betterment of the native population (61). It was in the year 1810 that the struggle for independence was initiated. Hidalgo gathered people, mostly peasants and a few trained officials, to form the “Insurgent Army,” which marched for a number of days through the region gaining more people and resources for the struggle (61).
Once they were all set, they planned to attack the biggest and most crucial Spanish post in the zone, the world-famous mining town of Guanajuato, the city of the great El Pipila. The famous Mexican independence hero bravely helped his fellow members of the insurgent army gain their first victory in their struggle for independence. On the streets of the colonial city of Guanajuato stands the statue of El Pipila, built in honor of this great Mexican who played a crucial role in the attainment of independence. With his “torch held high,” Pipila’s statue is “Mexico’s ‘Statue of Liberty,’” a token of history that resonates with the glorious triumph of Mexicans claiming their rightful freedom from the Spaniards (61).
Reding, Andrew. “The Next Mexican Revolution.” World Policy Journal 13.3 (1996): 61–70. Web. 24 February 2013.
This also means that blowing out candles in space is four times easier than on Earth — a finding that more than justifies the Shuttle flights.
Here is the explanation from some smart people:
[A] candle can burn in zero gravity. However, the flame is quite a bit different. Fire behaves differently in space and microgravity than on Earth.
A microgravity flame forms a sphere surrounding the wick. Diffusion feeds the flame with oxygen and allows carbon dioxide to move away from the point of combustion, so the rate of burning is slowed. The flame of a candle burned in microgravity is an almost invisible blue color (video cameras on Mir could not detect the blue color). Experiments on Skylab and Mir indicate the temperature of the flame is too low for the yellow color seen on Earth.
By Menno Lauret (in collaboration with Gert Witvoet, Federico Felici, Tim Goodman, Oliver Sauter, Gerd Vandersteen, Marco de Baar and the TCV team)
Increasing energy demands and depleting resources are among the major challenges that humanity has to face. Moreover, the waste products of energy plants, such as CO2 or long-term radioactive material, lay a heavy burden on future generations.
Among the many alternatives to coal and nuclear fission plants, nuclear fusion is a unique approach to generating clean and reliable energy. Essentially, fusion energy uses the same physical mechanism that makes the sun scorching hot. If small and very hot sun-like plasmas (ionized gas, with a temperature of around 100 million degrees Kelvin) could be reliably made on earth, they could be used for the generation of clean and reliable energy.
Physicists working on fusion energy have made great progress in the last 50 years. However, there are still some processes in the plasma that need to be solved before energy can be produced using this method.
The Control Systems Technology group at the Eindhoven University of Technology and the Dutch plasma fusion institute FOM DIFFER are actively involved in developing controllers for magnetohydrodynamic modes in the plasma. One of these is the sawtooth instability, which leads to a periodic reorganisation of the core of the fusion plasma and could affect the stability of the plasma as a whole.
The sawtooth period can be interpreted as the plasma’s heartbeat, and is considered essential for a lot of processes in the plasma. In current fusion experiments (so-called Tokamaks) the sawtooth beats around 100 times per second. To ensure the functionality of future fusion energy plants, it will be necessary to be able to regulate this heartbeat: to make it go faster or slower.
A surprisingly simple and new concept for controlling the sawtooth period mimics the way a pacemaker regulates the human heartbeat. A traditional pacemaker gives a periodic electrical pulse to the heart, to which the heart synchronizes. To translate this idea to a Tokamak experiment, we could not use small electrical pulses, but a microwave source of several megawatts was available.
Joint experiments at the TCV experiment in Lausanne, Switzerland (in collaboration with EPFL, CRPP and VUB) showed that this simple idea works better than we could have imagined. The sawtooth period almost instantaneously changes to the period of the power pulse, and we were able to make the sawtooth period four times slower. These experiments and simulations confirm that there is a surprisingly simple method to control the heartbeat of the plasma. Nuclear fusion plasmas have very complex behavior, just as many biological systems do, and biology can be a source of inspiration to make progress in fusion and hopefully to solve other related plasma problems.
The work is explained in detail in two articles, Demonstration of sawtooth period locking with power modulation in TCV plasmas and Numerical demonstration of injection locking of the sawtooth period by means of modulated EC current drive.
The Museum’s educational efforts seek to promote sustainable teaching and learning about the Holocaust and genocide in a region where accurate information about the Holocaust is generally difficult to find and often politicized. The Museum works with partners to reach educators, schools, universities, and religious and civil society leaders with programming in several languages. Learn more about the Museum’s work in the Middle East and North Africa below.
Download poster sets, watch films, and discover more ways to learn about the Holocaust and antisemitism.
Developing Educational Workshops
Mimouna is a non-governmental organization started by primarily Muslim university students dedicated to preserving Moroccan Jewish heritage and culture. It has organized several workshops with the Museum in Morocco for students, educators, and social media influencers. These workshops provide participants with information on Nazi racial ideology, propaganda, and totalitarianism. The Museum and Mimouna produced the first Arabic-language guide on Morocco and the Holocaust and have jointly promoted a Museum-produced short animated film in Arabic on Nazi racial ideology.
The Museum and the Laboratory of Heritage at Manouba University, and other institutions in Tunisia, have developed a project on the history of Nazi propaganda linked to the exhibition State of Deception: The Power of Nazi Propaganda, initially created by the Museum. The project also addresses the history of Tunisia during the time of the German occupation and the history of the Tunisian Jewish community through exhibitions, film screenings and book presentations, as well as workshops for educators and students.
The Museum partners with Iranian-Canadian journalist and filmmaker Maziar Bahari and IranWire.com to create and distribute Persian-language multimedia content online and through social media. These efforts seek to counteract official Holocaust denial, antisemitic rhetoric, and restrictions on accurate information about the Holocaust. Recent projects include:
- Abdol Hossein Sardari: An Iranian Hero of the Holocaust: A short animated film about Abdol Hossein Sardari, an Iranian diplomat who saved Jews while based in Paris during World War II. It is available in English and Persian.
- Crime and Denial: A 30-minute documentary introducing Iranians to Holocaust history in light of official denial in Iran. It is available in English and Persian.
The Museum has also been a leading voice in opposition to Iranian government efforts to deny, distort and mock the Holocaust.
Creating Resources for Educators
The Museum partnered with SEHAK to strengthen the resources available for Turkish educators on the Holocaust, including an online resource in Turkish, a Holocaust timeline activity, and a translation of the Museum’s Justice and Accountability video.
The Four Stages of Learning is a theory modeling how we learn a new skill. The stages are:
- Unconscious incompetence – you don’t yet know what you can’t do
- Conscious incompetence – you know what you can’t do
- Conscious competence – you can do it, but only with full concentration
- Unconscious competence – you can do it without thinking about it
Usually, students struggle to attain the fourth stage. It’s the difference between being able to do something the right way, and having that thing so ingrained that you can’t do it wrong. I explain it to my students this way: “Play it until I could come over to your house at two in the morning, wake you up, and you could still play it for me!” The main way to elevate any new technique is through mindful repetition. At first, your child will need to fully focus on the skill, but eventually they should be able to focus on the feeling of playing.
This idea is most useful for small, easily attainable goals. Each goal should be focused on one at a time, and every repetition should be precise! This way, only the right skills get to the fourth level.
- Shannon Jansma, published in the May 2016 issue of the Ann Arbor Suzuki Institute newsletter
Sharks bite roughly 70 people each year worldwide, with perhaps 5-10 fatalities, according to data compiled in the International Shark Attack File (ISAF). Although shark bites get a lot of attention, this is far less than the number of people injured each year by elephants, bees, crocodiles, lightning or many other natural dangers. On the other side of the ledger, we kill somewhere between 20-100 million sharks every year through fishing activities.
Of the 500 or so shark species, about 80% grow to less than 1.6 m and are either unable to hurt people or rarely encounter them. Only 32 species have been documented biting humans, and an additional 36 species are considered potentially dangerous.
Almost any shark 1.8 m or longer is a potential danger, but three species have been identified repeatedly in fatal bites: great whites, tigers, and bull sharks. All three are found worldwide, reach large sizes and eat large prey such as marine mammals or sea turtles. More bites on swimmers, free divers, scuba divers, surfers and boats have been reported for the great white shark than for any other species. However, some 80% of all shark bites probably occur in the tropics and subtropics, where other shark species dominate and great white sharks are relatively rare.
An estimated 50-80% of all life on earth is found under the ocean surface and the oceans contain 99% of the living space on the planet. Less than 10% of that space has been explored by humans. 85% of the area and 90% of the volume constitute the dark, cold environment we call the deep sea. The average depth of the ocean is 3,795 m. The average height of the land is 840 m.
“Currently, scientists have named and successfully classified around 1.5 million species. It is estimated that there are as little as 2 million to as many as 50 million more species that have not yet been found and/or have been incorrectly classified.”
According to World Register of Marine Species (WoRMS) there are currently at least 226,408 named marine species (9/24/2014).
So, there are at least 226,408 marine species but there are most likely at least 750,000 marine species (50% of 1.5 million species) and possibly as many as 25 million marine species (50% of 50 million species).
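The arithmetic behind these bounds is simple enough to write down explicitly. The snippet below just reproduces the article's own assumption that roughly half of all species are marine:

```python
# Bounds on marine species count, using the article's figures and its
# assumption that about 50% of all species live in the sea.
named_total = 1_500_000        # species named and classified so far
unnamed_high = 50_000_000      # upper estimate of undiscovered species
marine_fraction = 0.5          # assumption used in the text

low = int(marine_fraction * named_total)    # floor: half of named species
high = int(marine_fraction * unnamed_high)  # ceiling: half of upper estimate
print(f"likely at least {low:,}, possibly up to {high:,} marine species")
```

This yields the 750,000 to 25 million range quoted above; the true number is of course unknown, which is the point of the exercise.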
The oceans cover 71% (and rising) of the Earth’s surface and contain 97% of the Earth’s water. Less than 1% is fresh water, and 2-3% is contained in glaciers and ice caps (and is decreasing).
90% of all volcanic activity occurs in the oceans.
The speed of sound in water is 1,435 m/sec – more than four times faster than the speed of sound in air.
The highest tides in the world are at the Bay of Fundy, which separates New Brunswick from Nova Scotia. At some times of the year the difference between high and low tide is 16.3 m, taller than a three-story building.
Earth’s longest mountain range is the Mid-Ocean Ridge more than 50,000 km in length, which winds around the globe from the Arctic Ocean to the Atlantic, skirting Africa, Asia and Australia, and crossing the Pacific to the west coast of North America. It is four times longer than the Andes, Rockies, and Himalayas combined.
The pressure at the deepest point in the ocean is more than 11,318 tons/sq m, or the equivalent of one person trying to support 50 jumbo jets.
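The quoted figure can be cross-checked with the hydrostatic formula P = ρgh, using this article's own depth for the Challenger Deep (11,034 m) and an assumed average sea-water density; the result lands within about half a percent of the 11,318 tons/sq m quoted above.

```python
# Hydrostatic pressure at the bottom of the Challenger Deep.
rho = 1025.0      # sea-water density, kg/m^3 (assumed average)
g = 9.81          # gravitational acceleration, m/s^2
depth = 11_034.0  # Challenger Deep depth as quoted in this article, m

pressure_pa = rho * g * depth              # P = rho * g * h, in pascals
tonnes_per_m2 = pressure_pa / (g * 1000)   # convert to tonnes-force per m^2
print(f"{pressure_pa:.3g} Pa ≈ {tonnes_per_m2:,.0f} t/m²")
```

The small discrepancy with the quoted number comes entirely from the assumed density; deep water is slightly compressed and denser than the surface value used here.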
The top ten feet of the ocean holds as much heat as the entire atmosphere.
The lowest known point on Earth, called the Challenger Deep, is 11,034 m deep, in the Marianas Trench in the western Pacific. To get an idea of how deep that is, if you could take Mt. Everest and place it at the bottom of the trench there would still be over a mile of ocean above it. The Dead Sea is the Earth’s lowest land point with an elevation of 396 m below sea level.
Undersea earthquakes, volcanoes and landslides can cause tsunamis (a Japanese word meaning “harbor wave”), or seismic sea waves. The largest recorded tsunami measured 60 m above sea level; it was caused by an 8.9-magnitude earthquake in the Gulf of Alaska in 1899 and travelled at hundreds of km/hr.
The average depth of the Atlantic Ocean, with its adjacent seas, is 3,332 m; without them it is 3,926 m. The greatest depth, 8,381 m, is in the Puerto Rico Trench.
The Pacific Ocean, the world’s largest water body, occupies a third of the Earth’s surface. The Pacific contains about 25,000 islands (more than the total number in the rest of the world’s oceans combined), almost all of which are found south of the equator. The Pacific covers an area of 179.7 million sq km.
The Kuroshio Current, off the shores of Japan, is the largest current. It can travel 40-121 km/day at 1.6-4.8 kph, and extends some 1,006 m deep. The Gulf Stream, a well-known current of warm water in the Atlantic Ocean, is close to the Kuroshio in speed. At a speed of 97 km/day, the Gulf Stream moves 100 times as much water as all the rivers on earth and flows at a rate 300 times faster than the Amazon, the world’s largest river.
A given area in an ocean upwelling zone or deep estuary is as productive as the same area in rain forests, most crops and intensive agriculture. They all produce between 150-500 grams of Carbon per square meter per year.
The sea level has risen by an average of 10-25 cm over the past 100 years, and scientists expect this rate to increase. Sea levels will continue rising even if the climate stabilizes, because the ocean reacts slowly to changes. 10,000 years ago the ocean level was about 110 m lower than it is now. If all the world’s ice melted, the oceans would rise 66 m.
Sea water becomes denser as it cools, right down to its freezing point of -1.9°C, unlike fresh water, which is densest at 4°C, well above its freezing point of 0°C. The average temperature of all ocean water is about 3.5°C.
Antarctica has as much ice as the Atlantic Ocean has water.
The Arctic produces 10,000-50,000 icebergs annually. The amount produced in the Antarctic regions is inestimable. Icebergs normally have a four-year life-span; they begin entering shipping lanes after about three years.
Air pollution is responsible for 33% of the toxic contaminants that end up in oceans and coastal waters. About 44% of the toxic contaminants come from runoff via rivers and streams.
Each year, three times as much rubbish is dumped into the world’s oceans as the weight of fish caught.
Oil is one of the ocean’s “greatest” resources. Nearly one-third of the world’s oil comes from offshore fields in our oceans. Areas most popular for oil drilling are the Arabian Gulf, the North Sea and the Gulf of Mexico.
Refined oil is also responsible for polluting the ocean. More oil reaches the oceans each year as a result of leaking automobiles and other non-point sources than the oil spilled in Prince William Sound by the Exxon Valdez or even in the Gulf of Mexico during the Deepwater Horizon/BP oil spill.
The record for the deepest free dive is held by Jacques Mayol. He dove to an astounding depth of 86 m without any breathing equipment.
A mouthful of seawater may contain millions of bacterial cells, hundreds of thousands of phytoplankton and tens of thousands of zooplankton.
The Great Barrier Reef, measuring 2,300 km in length covering an area more extensive than Britain, is the largest living structure on Earth and can be seen from space. Its reefs are made up of 400 species of coral, supporting well over 2,000 different fish, 4,000 species of mollusc and countless other invertebrates. It should really be named ‘Great Barrier of Reefs’, as it is not one long solid structure but made up of nearly 3,000 individual reefs and 1,000 islands. Other huge barrier reefs include the barrier reefs of New Caledonia, the Mesoamerican (Belize) barrier reef, and the large barrier reefs of Fiji. The largest coral atoll complexes occur in the Maldive-Lakshadweep ecoregion of the central Indian Ocean and in Micronesia.
Fish supply the greatest percentage of the world’s protein consumed by humans and most of the world’s major fisheries are being fished at levels above their maximum sustainable yield; some regions are severely overfished.
More than 90% of the trade between countries is carried by ships and about half the communications between nations use underwater cables.
Swordfish and marlin are the fastest fish in the ocean reaching speeds up to 121 kph in quick bursts; bluefin tuna (Thunnus thynnus) may reach sustained speeds up to 90 kph.
Blue whales are the largest animals on our planet ever (exceeding the size of the greatest known dinosaurs) and have hearts the size of small cars.
Oarfish (Regalecus glesne), are the longest bony fish in the world. They have a snakelike body sporting a magnificent red fin and can grow up to 17 m in length! They have a distinctive horse-like face and blue gills, and are thought to account for many sea-serpent sightings.
Many fish can change sex during the course of their lives. Others, especially rare deep-sea fish, have both male and female sex organs.
One study of a deep-sea community revealed 898 species from more than 100 families and a dozen phyla in an area about half the size of a tennis court. More than half of these were new to science.
Life began in the ocean 3.1 billion to 3.4 billion years ago. Land dwellers appeared approximately 400 million years ago, relatively recently in geologic time.
Because the architecture and chemistry of coral is so similar to human bone, coral has been used to replace bone grafts in helping human bone to heal quickly and cleanly.
Haniwa are the unglazed terracotta rings, cylinders, and figures of people, animals, and houses which were deposited at Japanese tombs during the Kofun and Asuka Periods (c. 250-710 CE). The exact purpose of these offerings is not known, although it seems likely they were examples of conspicuous consumption of the societal elite or performed some protective function. Many haniwa are particularly detailed in their execution and thus provide a valuable insight into the culture of the period. Standing over one metre in height, the mysterious figures are a striking example of early Japanese sculpture.
These ritual objects were buried with the deceased interred in kofun (tumuli) tombs throughout the Yamato Period of ancient Japan from the 3rd century CE to the early 8th century CE. The period is often subdivided into the Kofun Period (c. 250-538 CE) and the Asuka Period (538-710 CE). Early tombs were more modest, but by the 5th century CE, they had developed into huge earth mounds. The practice spread throughout Japan so that the islands boast some 20,000 burial mounds known today. They follow a similar design with a keyhole-shaped inner tomb covered by an earth mound and surrounded by a moat. The biggest is the tomb of Emperor Nintoku which is 823 metres long. The earliest examples of haniwa discovered are concentrated in the Nara region. Unusually for votive offerings, haniwa were not placed within the tomb itself but near it or on top of it in either a circle or rows.
The precise significance of these offerings is unclear, but in all probability, they substituted for real objects and displayed the wealth and status of the individual interred in the large tumuli, which were themselves reserved for society's elite, such was the labour needed to build them. The Nihon Shoki ('Chronicle of Japan', also known as the Nihongi), written in 720 CE, suggests that haniwa were replacements for the chieftain's attendants, it being a common feature of ancient societies to bury the slaves of a ruler with their master. This does not, however, explain the haniwa which are simply rings or cylinders, which constitute the majority, although it may be that the cylinders were originally meant as stands for the more elaborate figures.
Another theory is that haniwa acted as protective spirits for the deceased and ensured the tomb was not disturbed. Certainly, Korean tumuli in the contemporary Silla kingdom, which may well have influenced Japanese culture, employed stone sculptures of animal signs for just that purpose. Finally, the haniwa may have protected not the dead but the living from the spirit of the deceased chief, ancestor worship and a reverence for spirits or kami being long-held beliefs in ancient Japan.
Shapes & Forms
The name haniwa means 'clay ring' but the coarse red terracotta objects today given that label represent a wide range of figurines of people and animals, and besides the simpler ring types and common cylinder forms, there are also representations of houses, fishing boats, and trading ships. Amongst the most intricate haniwa are human figures representing female shamans with elaborate headdresses or holding a mirror, helmeted warriors wearing armour, horseriding warriors with bows and arrows, women carrying babies on their backs or water vessels on their heads, farmers wielding hoes, musicians playing a drum or zither, and falconers with their hawk. Another common type is riderless horses with intricate saddles and bridles. Other animals include birds, dogs, deer, monkeys, rabbits, and sheep. Most haniwa are just under one metre in height but some are over 1.5 metres tall.
This content was made possible with generous support from the Great Britain Sasakawa Foundation.
Fuelled by big data, scientists are training AI-based models that help provide recommendations to human decision makers in a variety of domains. These AI-driven decision-making processes include credit decisions and employee hiring, as well as more familiar domains such as personalized shopping and music recommendation. When it comes to raw intelligence, AI is a great social equalizer, as machine intelligence is becoming a commodity accessible to everybody. No longer do individuals require a high-level academic degree to access, analyse, and digest huge volumes of data. The competitive advantage that has historically been tied to high IQ levels has decreased in importance as AI has levelled the intellectual playing field. As the relative importance of IQ has declined, creativity and imagination have gained value and esteem. A 2016 World Economic Forum report concluded that creativity would be the 3rd most important skill in the workplace in 2020, rising in importance from 10th position in 2015. Creativity is clearly the skill of the future.
We are born with creativity
Children are highly creative, and they naturally express their creative talents, even though they cannot yet use those talents for something productive and useful that contributes to the development of human civilization. Dr. George Land, senior fellow of the University of Minnesota and fellow of the New York Academy of Sciences, who died in 2016, developed a test to assess the creativity of 1,600 children ranging in age from three to five years old. He later re-tested the same children at 10 years, and again at 15 years of age. The pass rates on the creativity test declined sharply with age:
- 98% of children passed at age 5
- 30% of children passed at age 10
- 12% of children passed at age 15
- 2% of 280,000 adults passed the test
“What we have concluded,” George Land wrote, “is that adults are ‘grown-up children’ who have lost their creativity because creativity has been buried by rules and regulations. Our educational system was designed during the Industrial Revolution over 200 years ago, training us to be good workers and to follow instructions. Since education is still based on the principles of the industrial age, the outcomes are not suitable for our 21st-century world, where rules and recommendations are increasingly processed by intelligent machines.” Our society needs individuals who are highly creative and who can innovate. While life-long learning is an accepted necessity for sustaining the workforce, few discuss the reasons for the lack of creativity in adults. To get there we need a fundamental shift in the positioning of creativity vis-à-vis intelligent machines, revamping our educational system from kindergarten all the way to the university level.
The impact of creativity
In the age of AI, our great distinguishing capacity vis-à-vis intelligent machines is creativity. A recent PwC report explains, “The rise of artificial intelligence is driving a new shift in value creation focused on sentiments more intrinsic to the human experience: thinking, creativity and problem-solving.” Creative expert and anthropologist Dr. Michael Bloomfield envisions tomorrow’s new reality as follows: “The typical CV or resumé five years from now might look quite different compared to today’s CV. Your experience and competence will count for a whole lot less if a computer program, just released, can do most of your work at a fraction of the cost. You might want to state where you got your creativity training, what level you reached and what creative ideas you have had as a result”. Creativity is our best defence against obsolescence by AI. As we are incentivized and rewarded for creativity, it is inevitable that we will increase our efforts to enhance our own imaginative and original thought processes. Given that research has shown that creativity can be learned, it is only a matter of time before we see soaring levels of creativity transpire. While AI has enormous potential to help us in supporting creativity, 75% of people believe they are not living up to their creative potential. We have been focusing on whether AI will overtake humans. Instead, we ought to focus on determining how to work together with AI. With the right perspective, AI will amplify and augment our creative potential and as a result lead to better decisions.
Creativity in a corporate setting
Today – to live up to the challenges of a global society – we need creative leaders, creative employees, and creative entrepreneurs. Many organizations are trying to address this challenge, but a more comprehensive approach to shaping and leveraging creativity is needed. The very first step is to take for granted that everyone is creative. However, not everyone is creative in the same way. Every person has an innate inclination to be creative in one area, while they might not want to be creative in another. The challenge for leaders and managers is to discover each person's creative potential and help them develop it further. At the same time, we need to let go of the image of the lonely creator hiding in an ivory tower of genius. While there are still pockets of creativity where this might be the case, most creative endeavours need a creative team collaborating to solve the problem at hand. As much as it is vital to develop individual creativity, it is just as vital to advance collaborative skills in every person. Only the combination of the two increases the chances for true innovation. Creativity is a skill that can be developed and a process that can be managed. It begins with a foundation of knowledge, learning a discipline and mastering a way of thinking. We learn to be creative by experimenting, exploring, questioning assumptions, using imagination and synthesizing information.
The process to enhance creativity
The creativity process considers alternative options based on a well-defined purpose, stepping through the following procedure:
- Problem definition: Figuring out the problem one is trying to solve. Many assume this stage to be simple – we think we know the problem. Research shows that successful creative people spend a significant amount of time in defining the problem which, as a result, creates better solutions.
- Ideation: Putting forward various ideas for solutions to a problem at hand. Having many ideas does not mean that they are all good. The crucial concept of ideation is convergent thinking, being able to select the good ideas from the bad ones.
- Validation: Testing and validating the result of the creative process. How do you evaluate ideas that are utterly new? One way might be to first expose the idea to a group of early adopters who, by their sheer urge to look for novelty, are more apt to judge the potential of an innovative idea.
Adding AI to the creative decision-making process
Decision-making often involves anything but logic, leading to bad decisions with no coherent framework to follow. Adding creativity as a way to improve decision-making involves three components: expertise, creative thinking and motivation (Source: T.M. Amabile, Harvard Business Review, Oct. 1998).
While expertise and extrinsic motivation can be supported by the application of AI, creative thinking, sometimes also referred to as ‘design thinking’, implicates a human quality of creativity which so far cannot be replaced by intelligent machines. Applying creativity in decision making is a fundamental human process as we interact with the world. One way to characterize decisions is to differentiate between structured, semi-structured and unstructured decisions. Structured decisions have a known, precise solution and do not require AI-based decision support. Semi-structured problems have some agreed-upon decision parameters and yet require human preferences for a decision. In this case, optimally balancing machine intelligence with human creativity is likely to produce the best result. Unstructured decision problems have no agreed-upon criteria or solution and rely on the preferences of the decision maker. There is no specific problem to be solved, hence the full potential of human creativity can be used to uncover new insights so far untouched by existing knowledge or experience. Artists, for example, create their works with this mindset. Some painters or musicians resort to AI as a tool to express their creative ideas. There is an ongoing debate over whether the results should really be considered art as an expression of human creativity.
Fuelled by continuing advances in computational neuroscience and AI research, the debate comparing machine creativity with human creativity is likely to accelerate. Creativity is one of the defining features of human beings. The capacity for genuine creativity, the kind of creativity that updates our understanding of the nature of being, that changes the way we understand what it is to be beautiful or good or true – that capacity is at the heart of what it is to be human. However, with respect to the practical implications of this debate, one also needs to consider the socioeconomic positioning of creativity. In the early 20th century, Harvard professor Joseph Schumpeter introduced the economic theory of creative destruction to describe the way in which old ways of doing things are endogenously destroyed and replaced by the new. Creativity and the associated decision making play a vital role in the recombination of elements to produce new technologies, products and services that lead to economic growth. The rise of start-ups and spin-offs is just one indicator of a paradigm shift that is driven by creativity and innovation.
While astrophysical jets are often powered by black holes, high-speed plasma flows are also ejected by solar flares and can even arise in the Earth’s magnetosphere. The four Cluster spacecraft have been lucky to observe one of the latter events from inside the plasma flow and witness jet-braking and plasma-heating processes.
The story of the Cluster mission to study the Earth’s magnetosphere and its environment in three dimensions is long and tumultuous. First proposed to the European Space Agency (ESA) in November 1982, the four identical satellites, to be flown in a tetrahedral configuration, should have benefited from a “free” launch on the first test flight of the Ariane-5 rocket. Unfortunately, this flight lasted just 37 s and ended abruptly when the rocket broke up during launch on 4 June 1996. To recover at least part of the 10-year development effort, ESA decided to build one additional Cluster satellite, Phoenix, named after the mythical bird reborn out of its ashes. It soon became apparent that the scientific objectives would not be met by Phoenix alone and that a second Ariane-5 launch would be too expensive. Eventually, in the summer of 2000, all of the obstacles had been overcome and four new Cluster satellites were successfully carried into space, two at a time, by Russian Soyuz rockets.
Eleven years after the launch, the Cluster mission is still operating, providing insights into the physical processes involved in the interaction between the solar wind and the magnetosphere of the Earth. These interactions often send electrons and ions to the Earth’s magnetic poles, where they hit neutral gas in the atmosphere and produce aurorae. This occurs either by direct entry of solar-wind particles through the polar cusps or by plasma acceleration in the magnetotail during substorms. The magnetotail is located on the night side of the Earth, where the planet’s magnetic field is drawn out into a long tail by the solar wind. It hosts in its centre the plasma sheet, a large reservoir of particles with ion temperatures of about 50 million degrees. When magnetic reconnection occurs in the magnetotail, the plasma sheet is energized and jets are created.
On 3 September 2006, the four Cluster satellites happened to fly through the magnetotail at an altitude of roughly a quarter of the Earth–Moon distance, just in time to witness the sudden rearrangement of the magnetic field leading to the explosive release into the plasma of much of the stored magnetic energy. The instruments aboard the four Cluster satellites monitored the flux of energetic particles focused along the magnetic field lines into a jet pointing towards the Earth. These observations and their implications are now published in Physical Review Letters by a team from the Swedish Institute of Space Physics, Uppsala, and the Mullard Space Science Laboratory, University College London.
The data indicate that the original, fairly “cold” jet was subsequently heated by a separate mechanism similar to friction. At first, the flow’s interaction with other particles and the enhanced magnetic field closer to Earth caused the front of the jet to slow down. This led to a pile up of the magnetic field in the plasma and to further heating and acceleration of the electrons. The process is called betatron acceleration in reference to the particle accelerators developed in the early 1940s, which used a variable electromagnetic field to accelerate electrons circling in a toroidal vacuum chamber. As Yuri Khotyaintsev, the lead author of the study, points out, this process is likely to occur in other types of astrophysical jets whenever they are interacting with the local environment and braking. So, not only shocks but also the pile-up of the magnetic field at the jet front can result in particle acceleration and heating.
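The essence of betatron heating can be captured in one line of physics: for a charged particle gyrating in a slowly varying magnetic field, the first adiabatic invariant μ = W⊥/B is conserved, so when the field piles up at the braking jet front, the perpendicular energy of the electrons grows in proportion. The field and energy values below are purely illustrative, not the Cluster measurements.

```python
# Betatron heating via conservation of the first adiabatic invariant
# mu = W_perp / B: perpendicular energy scales linearly with field strength.
B_initial = 10e-9     # tail field before pile-up, tesla (~10 nT, assumed)
B_front = 40e-9       # compressed field at the jet front, tesla (assumed)
W_perp_initial = 1.0  # initial perpendicular electron energy, keV (illustrative)

mu = W_perp_initial / B_initial  # conserved magnetic moment
W_perp_front = mu * B_front      # energy after field compression
print(round(W_perp_front, 6))    # -> 4.0  (a fourfold field gives 4x the energy)
```

A fourfold compression of the field thus quadruples the perpendicular energy, which is why a piling-up field at a decelerating jet front is such an efficient heater.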
Friday, 20 August, 2010
Road signs and traffic signals on DNA
Physical model describes the distribution of nucleosomes
The DNA genomes of organisms whose cells possess nuclei are packaged in a highly characteristic fashion. Most of the DNA is tightly wrapped around protein particles called nucleosomes, which are connected to each other by flexible DNA segments, like pearls on a necklace. This arrangement plays a major role in deciding which genes are actively expressed, and thus which proteins can be synthesized in a given cell. The LMU Munich biophysicists Professor Ulrich Gerland and Wolfram Möbius have recently developed a model which explains the distribution of nucleosomes around the functionally crucial transcription start sites. Transcription is the first step in the process that converts genetic information into proteins. At the transcription start sites the DNA must be free of nucleosomes. The two researchers discovered that distinct stop signals positioned on either side of these zones must actively prevent the formation and sliding of nucleosomes. “Our model provides a useful tool for dissecting the so-called chromatin code, which determines how the DNA is packed and selectively made accessible for transcription”, says Gerland. (PLoS Computational Biology, 19 August 2010)
Press information from LMU (German)
Press information LMU (english)
Publication "Quantitative test of the barrier nucleosome model for statistical positioning of nucleosomes up- and downstream of transcription start sites"
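At its core, the barrier model described above treats nucleosomes as hard rods on a one-dimensional DNA lattice, confined by fixed "stop signals" that act as walls: the first nucleosome packs tightly against the barrier, and positional order decays with distance. The following Monte Carlo sketch illustrates that generic mechanism; all parameters (lattice length, footprint, counts) are illustrative, not taken from the paper.

```python
import random

# Minimal Monte Carlo sketch of statistical nucleosome positioning next to a
# fixed barrier: hard rods of footprint k diffusing on a 1-D lattice [0, L).
L, k, n_nuc, sweeps = 600, 147, 3, 20_000  # bp, footprint, rod count, MC steps
random.seed(1)

pos = [i * (k + 20) for i in range(n_nuc)]  # initial non-overlapping placement
occupancy = [0] * L                          # histogram of rod start positions

def overlaps(p, idx):
    """True if a rod starting at p would clash with the barrier walls or others."""
    if p < 0 or p + k > L:
        return True
    return any(abs(p - q) < k for j, q in enumerate(pos) if j != idx)

for _ in range(sweeps):
    i = random.randrange(n_nuc)
    trial = pos[i] + random.choice((-1, 1)) * random.randint(1, 10)
    if not overlaps(trial, i):
        pos[i] = trial               # accept the diffusive move (hard-core only)
    for p in pos:                    # accumulate the positional histogram
        occupancy[p] += 1

# The histogram shows the statistical-positioning signature: the rod nearest
# the barrier is sharply localized, and the spread grows with distance.
```

Plotting `occupancy` would show the decaying oscillations in nucleosome density downstream of the barrier that the model predicts around transcription start sites.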
Technology tools are a dime a dozen, and there are a lot of websites that showcase technology tools that fit the SAMR model. However, it is not the technology tool that defines the SAMR model; rather, it is how the teacher uses the tool in a lesson to promote student ownership. For example, watch the iPad commercial where the teacher assigns gravity for homework. The students are at the modification level of the SAMR model. By allowing them to take ownership of how they would like to present the content, the teacher enabled the students to use technology to redesign the homework task.
Another point that I like to share with teachers is that the SAMR model does not have to be a ladder that you climb; instead, think of the SAMR model as a swimming pool. Depending on the task, the amount of time, and the technology tool that the learner picks, the technology integration might move from the shallow end to the deep end of the SAMR model.
Examples of the SAMR Model
Let's start by talking about Substitution. Using technology at the substitution stage is better than the devices staying in the cart or never leaving the student's backpack. Yes, it is true: I have been in schools that have gone 1:1 with MacBooks or iPads, and within a 45-minute class period the students never touched their devices. For the following examples, I am going to talk about how to use the devices for taking notes.
How does substitution look in a blended learning classroom? Let's look at J. Moran's sixth-grade science classroom. The blended learning lesson has four main activities that the students rotate through during the class period.
More and more teachers are moving to, or starting at, the Augmentation stage of the SAMR model because most schools are using either Google Suite or OneNote. Now, instead of just typing a paper in Pages or Word, students can type the document in a Google Doc or OneNote and receive instant feedback on the paper from their peers and teachers.
What does augmentation look like in a blended learning classroom? The video below showcases two social studies classrooms working together to write a paper on a given topic and to build projects around a shared theme. H. Grunenberg's eighth-grade social studies class from Kirtland Middle School joined with a seventh-grade social studies class to complete a common themed unit.
Mini Lesson - with one of the teachers
Independent practice - the students are collaborating on writing a paper on a given topic
Digital Content - researching, watching, and learning more about the topic
Future Ready Skills - developing project that goes along with the given topic
Mini Lesson - with the other teacher on where they are and where they need to go next
Now we are moving above the line in the SAMR model or, as I like to put it, toward the deep end of the swimming pool. Modification allows the learner to redesign the task by using technology in a new form. The tech tools listed below are sample note-taking tools that move the outcome of a project to the modification stage.
The blended learning lesson below is an example from a ninth-grade ELA classroom where I was a co-teacher. Together, the teacher and I had the students create their own websites, which will become portfolios of artifacts from class projects throughout their four years of high school. The students had a checklist of different learning activities to complete at their own pace, place, and path, all relating to the building of the portfolio website. (The 10 minutes listed on the checklist is a suggested time frame.)
According to the Educational Technology and Mobile Learning blog, at the redefinition stage of the SAMR model technology is used to create new learning tasks that were previously inconceivable. When explaining this level to educators, I often talk about breaking down the walls of the classroom through Mystery Skype, Zoom, blogs, YouTube, and even podcasts.
The blended learning example of technology being used at the redefinition level is showcased below from the Mentor High School Fine Arts classroom of H. Ambrus, who uses Seesaw as a portfolio tool to show the progress of artwork and then invites outside professional artists, other students, and teachers to comment on that progress. The feedback gives students a guide for improving the artwork before the final piece is posted.
Marcia Kish - Blended and Personalized Learning coach who designed the Three Phases of Blended Learning |
In this module, students study how an author develops point of view and how an author’s perspective, based on his or her geographic location, is evident in his or her writing. Students consider point of view as they learn about ocean conservation and the impact of human activities on life in the oceans. Through close reading, students will learn multiple strategies for acquiring and using academic vocabulary. In Unit 1, students read the first five chapters of Mark Kurlansky’s World without Fish, a literary nonfiction text about fish depletion in the world’s oceans. They analyze how point of view and perspective are conveyed in excerpts of the text and trace the idea of fish depletion in both the main text and the graphic novel at the end of each chapter to describe how the idea is introduced, illustrated, and elaborated on in the text.
In Unit 2, students read Carl Hiaasen’s Flush (830L), a high-interest novel about a casino boat that is polluting the ocean and the effort of a family to stop it. As they read the novel, students also will read excerpts of an interview with Carl Hiaasen to determine how his geographic location in Florida shaped his perspective and how his perspective is evident in his novel Flush. At the end of Unit 2, having read the novel, students will write a short, on-demand response explaining how living in Florida affected Carl Hiaasen’s perspective of the ocean and ocean conservation, supported by details from Flush that show evidence of Hiaasen’s perspective. In Unit 3, students return to World without Fish and pursue further research about overfishing to write an informative consumer guide about buying fish to be put in a grocery store. This task addresses ELA standards W.6.2, W.6.6 (optional), W.6.7, L.6.2, L.6.2a, L.6.2b, L.6.3, L.6.3a, and L.6.3b. |
Vocabulary Development Resources from TeachersFirst
This collection of reviewed tools and resources from TeachersFirst promotes vocabulary development and skills for students to improve and master daily vocabulary, subject matter terms, speech/language vocabulary, and ESL/ELL language. Some are tools for study and practice of terms in any class--including world languages, while others focus specifically on written and spoken English. Be sure to check "In the classroom" for ways each resource might help in your teaching area. Share your favorites on a free TeachersFirst public page for students and parents to use for vocabulary practice outside of class.
Grades 2 to 12
Important technical note: Lingro cannot "see" words included in Flash interactives such as the "What's New" rotating content on the TeachersFirst home page. If you RIGHT click on an area of text and see "About Adobe Flash Player...," this means that the text is displayed in Flash and not "legible" to Lingro. Often pages offer a non-Flash version as an alternative.
In the Classroom: When your ESL/ELL, learning support, or weaker readers do Internet research on sites above their independent reading level, have them open Lingro first and then enter the URL (web address) they wish to read. Use Lingro for vocabulary development in any subject. Mark this site as a favorite on your classroom computer or on your teacher web page so that ESL/ELL, world language students, or weaker readers can use the definition and translation feature and benefit from instantly-created word lists. If your school permits individual student accounts on web tools, this is a good one. If not, create a single teacher account to compile class word lists.
Grades 1 to 6
In the Classroom: Use this site to share vocabulary by category, using pictures, audio, and written words with your ESL/ELL students, primary students, special ed students, or speech/language students. Include this link in a newsletter that goes home with ESL/ELL students. Mark it as a Favorite on your classroom computer. Demonstrate how to use this website on an interactive whiteboard or projector. Then have students work alone (or with a partner) at their current speaking level. This website could also be used in a regular education class with emerging readers. The five difficulty levels allow teachers the flexibility to differentiate the instruction. Note: small type fonts and some advertising may make this site difficult for some younger students to use. Preview and decide what your class can handle.
Grades 6 to 12
In the Classroom: Mark this site in Favorites on your classroom computers for ESL and ELL students. Provide information about this site to foreign language teachers in your school. This is a wonderful site to list in your class newsletter (if applicable) or on your class website.
Grades 3 to 8
In the Classroom: Be sure to include this link on your teacher web page and newsletter so the students can easily access this site as an online dictionary and pronunciation reference.
Grades 6 to 12
In the Classroom: Share these activities with individual students as an assignment or independent practice on your classroom computer and as a link from your web site. The reading and activities are easy to work on independently because of the listening feature and the available dictionary. Don't forget to provide headphones. Provide this link for the families of ESL/ELL students to read (or listen) to the stories together.
Grades K to 4
In the Classroom: Use this site with beginning readers, beginning writers, and ESL students to reinforce the skill you are teaching and to show connections between reading and writing. Make it available for your active writers to choose their own prompts, too, or for parents to use at home during breaks. Special ed teachers will appreciate these prompts as a way to promote language development. Use the pictures to record students' vocabulary on the lines below as they "tell you about the picture."
Grades 9 to 12
In the Classroom: This is a great way to get students interested in reading Marquez and also interested in reading biographies and autobiographies of great writers. Be sure to print out the poster and hang it where students can see it. Use your interactive whiteboard or projection screen to share this website with your students. Then, ask them to pick the quote from this short selection that means the most to them and have them explain why in a short writing exercise.
Grades 1 to 8
In the Classroom: This is a good site for ESL students who are more visual learners to practice concepts. Special Ed teachers may find some games helpful for vocabulary development and basic grammar, as well. Many of the drag and drops would work well on an interactive whiteboard or as a learning center on a single classroom computer.
Grades 1 to 12
In the Classroom: Use this site as a pronunciation backup when you do not have a native speaker teaching foreign language. If you have access to a lab or individual laptops, assign students to practice pronunciation as they learn new vocabulary. Be sure to share the link from your teacher web page in your world language class. As you study world cultures or geography, some students may want to learn simple language selections, as well. Gifted students --especially younger ones curious about languages -- will enjoy trying to learn independently. ESL students may also use this site to hear authentic pronunciation. Speech and language and special ed teachers working on vocabulary development will want to use this site with students, as well.
Grades K to 1
In the Classroom: Save this website to your favorites, and then use it as a learning center. Ask children to explain why things go where they have placed them. Speech and language teachers can use this activity as part of vocabulary development, as well.
Grades K to 5
In the Classroom: Use this for a center with vocabulary review activities in any primary classroom or with speech and language or special ed students for vocabulary development. Using it in ESL classes will also be great, even on an interactive whiteboard with a small group. Students can also use the games on their own to practice vocabulary outside of class, so be sure to include the link on your teacher web page.
Grades 1 to 6
In the Classroom: Plan a kite day in the fall or spring and use all or part of these plans to learn new words, build kites, and even fly them before you write about them. This would be a terrific activity to include parents at school year's end.
Grades 5 to 12
tag(s): greek (41)
In the Classroom: Use this site to reinforce and support vocabulary as you study Mythology. Share the word puzzles on an interactive whiteboard or projector. Have students create their own word activities from the same vocabulary list, such as matching or ranking challenges for their peers to try on the interactive whiteboard.
Grades 6 to 12
tag(s): vocabulary (307)
In the Classroom: Share the puzzles on your interactive whiteboard or projector. Have students work with a partner to try out the puzzles on their own. Have students (or groups) create their own word puzzles to share as a class challenge as a student-run interactive whiteboard activity or share them on a class wiki.
Grades K to 4
In the Classroom: Try an interactive whiteboard for the first two activities. If you choose to make the final activity a class activity, project the questions onto a screen or whiteboard and challenge the students to answer the three questions independently. Used as a simple drag and drop, this site can help with vocabulary development for children with speech/language deficits.
Grades 1 to 5
In the Classroom: This site could be used in a variety of ways in English and other subjects. Ask the younger children to simply fill in 1-2 word phrases, while the older students can be challenged to write more complex statements. Speech and language teachers can use it for vocabulary development, as well.
Grades 6 to 12
In the Classroom: Include this link on your teacher web page or share it on your interactive whiteboard or screen as students enter class each day. During your unit on word roots and affixes, challenge students to find the root of today's word. Invite students to create their own personal vocabulary journal using this and other vocabulary tools found from TeachersFirst's vocabulary development listings. Maybe even create a class vocabulary wiki with individual or student group pages and allow students to creatively define and illustrate their new-found words.
Grades 9 to 12
tag(s): john knowles (4) |
Asthma is characterized by recurrent episodes of wheezing, shortness of breath, chest tightness, and coughing. Sputum may be produced from the lung by coughing but is often hard to bring up. During recovery from an attack, it may appear pus-like due to high levels of white blood cells called eosinophils. Symptoms are usually worse at night and in the early morning or in response to exercise or cold air. Some people with asthma rarely experience symptoms, usually in response to triggers, whereas others may have marked and persistent symptoms.
Asthma is caused by a combination of complex and incompletely understood environmental and genetic interactions. These factors influence both its severity and its responsiveness to treatment. It is believed that the recent increased rates of asthma are due to changing epigenetics (heritable factors other than those related to the DNA sequence) and a changing living environment. Onset before age 12 is more likely due to genetic influence, while onset after 12 is more likely due to environmental influence. |
Students will work in pairs to create sentences and turn them into a game for another pair to play; each pair will, in turn, play the game it receives. Students need to have a good understanding of the eight parts of speech: nouns, verbs, adjectives, adverbs, pronouns, prepositions, conjunctions, and interjections. If students are not familiar with the eight parts of speech, try one of the other lessons in this series or review them in a grammar reference book.
Directions for Sentence Parts of Speech Sort
Materials needed: scissors, glue and three pieces of white paper for each team
Step 1 — Place students in pairs.
Step 2 — Explain the game. Students will work in pairs to make sentences and a key of all of the parts of speech used in the five sentences. Then, another pair of students will sort the five sentences into the eight parts of speech on another piece of paper.
Example Sorting of Eight Parts of Speech
Example of sentence: Wow, my big and hungry dog quickly ate his bowl of food in five seconds.
Example of sort:
- interjection — wow
- noun — dog, bowl, food, seconds
- adjective — big, hungry, five
- adverb — quickly
- verb — ate
- preposition — of, in
- pronouns — my, his
- conjunction — and
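For teachers who want a quick digital version of this sort (for instance, to check a team's answer key), the grouping can be sketched in a few lines of Python. The word-to-part-of-speech key below is simply the hand-built example key above; it is not a real part-of-speech tagger:

```python
from collections import defaultdict

# Hand-built key for the example sentence (matches the sort shown above).
key = {
    "wow": "interjection",
    "dog": "noun", "bowl": "noun", "food": "noun", "seconds": "noun",
    "big": "adjective", "hungry": "adjective", "five": "adjective",
    "quickly": "adverb",
    "ate": "verb",
    "of": "preposition", "in": "preposition",
    "my": "pronoun", "his": "pronoun",
    "and": "conjunction",
}

def sort_words(sentence, key):
    """Group the words of a sentence into columns by part of speech."""
    columns = defaultdict(list)
    for word in sentence.lower().replace(",", "").replace(".", "").split():
        if word in key:
            columns[key[word]].append(word)
    return dict(columns)

sentence = "Wow, my big and hungry dog quickly ate his bowl of food in five seconds."
print(sort_words(sentence, key))
```

Each of the eight columns comes back as a list, mirroring the eight-column sorting sheet the teams make in Step 4.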
Step 3 — Direct students to create five sentences, written neatly. Within the five sentences, they need to use at least two examples of each of the eight parts of speech; it is best if they try to use most of the eight parts of speech in each sentence. The sentences must be written so another team can cut them apart: students should leave two blank lines before and after each sentence, and large spaces between words, for ease of cutting.
After the sentences are created, the students need to create a key of all of the parts of speech used. The teacher may allow students to use a grammar reference book to help students identify the parts of speech.
Students Work in Teams to Sort and Label
Step 4 — Each team needs to make eight columns on a sheet of paper and label with the eight parts of speech.
Step 5 — Trade sentences with another team. Teams need to work quickly, but sorting the words correctly is more important.
Step 6 — When the students are done, tell them to give their parts of speech sort to the team that created the sentences to be "graded." This can count for a grade, or treats can be given to the teams that sorted the words correctly.
This is a fun way for students to learn to identify the eight parts of speech: they are learning both while creating the sentences and while sorting them.
This post is part of the series: Parts of Speech Lessons
- Lesson Plan for the Eight Parts of Speech Game
- Eight Parts of Speech Sentence Sort
- Identifying Parts of Speech Review Game
- Identify Noun Case Activity
- Grammar Lesson: Action Verbs and Verbs of Being |
Hydrogen is the first element in the periodic table as its atomic number is 1.
Hydrogen has properties similar to those of both alkali metals and halogens.
Hydrogen was discovered by Henry Cavendish. It can be prepared by the displacement reactions of metals; in the laboratory, hydrogen is prepared by the action of dilute sulphuric acid on zinc.
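The laboratory preparation just described is a simple single-displacement reaction. Balanced, it reads:

```latex
\[
\mathrm{Zn} \;+\; \mathrm{H_2SO_4}\,(\text{dilute}) \;\longrightarrow\; \mathrm{ZnSO_4} \;+\; \mathrm{H_2}\uparrow
\]
```

Zinc displaces hydrogen from the acid because it lies above hydrogen in the reactivity series.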
We'll learn more about the chemical properties of hydrogen: combustibility, action on litmus, action with oxygen, chlorine gas, and metallic oxides, as well as its uses and the tests for hydrogen.
A Smart Test on Study of First Element Hydrogen
Learnhive Lesson on Study of first Element Hydrogen |
Tonsil infections are a common childhood malady, bringing pain and discomfort to many children between the ages of five and 15. The result of inflamed tonsils, the condition – known as tonsillitis – is most often caused by a viral or bacterial infection.
While you are probably familiar with tonsillitis, did you know your child’s adenoids can also become infected? Adenoid infections typically only affect children; this is because the tissues begin to shrink around the age of 5 or 6, and disappear completely in most people by the time they reach their teens.
The tonsils are a pair of oval-shaped tissues in the back of the throat and the adenoids are a pair of soft tissue masses located behind the nose and roof of the mouth. Both work to protect the body from infection by trapping bacteria and germs, preventing them from entering the airways. In addition, they produce antibodies to fight infection. As the immune system’s first line of defense, the tonsils and adenoids come into frequent contact with germs, making them prone to infection themselves.
Viruses and bacteria, especially the Streptococcus bacterium (responsible for strep throat), are the most common causes of infection. Other causes include adenoviruses, influenza, Epstein-Barr virus, enteroviruses and herpes simplex virus.
Sore throat and swollen, inflamed tonsils that may appear red with a white or yellow coating are the most recognizable symptoms of tonsil infection. Other signs include blisters on the throat, swollen glands in the neck or jaw, difficulty swallowing, fever, headache, chills, fatigue, ear pain and bad breath.
Enlarged adenoids can block airflow through your child’s nose, which can lead to mouth breathing, snoring and a dry and sore throat. Yellow or green discharge from the nose can also occur. In addition to swollen adenoids, infected adenoids can lead to middle ear infections, sinusitis and a chest infection.
Diagnosing a tonsil and adenoid infection requires a physical examination and an in-depth exam of the throat and ears with an otoscope. Your child is likely to be given a throat swab to test for the presence of strep, as well.
In the past, the treatment method of choice was surgical removal of the tonsils and adenoids. Known as a tonsillectomy and an adenoidectomy (T&A if performed at the same time), these procedures are now reserved for chronic cases that do not respond to other forms of medical treatment.
Instead, home remedies are usually recommended for infections caused by a virus. Your child should get plenty of rest and stay hydrated with fluids. Warm broth or tea, and cold Popsicles, are particularly effective at soothing pain and discomfort. Pain and fever can be controlled with over-the-counter medications like ibuprofen and acetaminophen (but avoid aspirin, which can be harmful in children). Throat lozenges or cough drops can be given to children over the age of four.
Contact Sacramento Ear, Nose & Throat for more information or to schedule an appointment. |
Wildfires in Oregon
- Oregon Smoke Information Blog
Get current local air quality information from Department of Environmental Quality (DEQ) and learn if there is a health advisory in your community.
Health Threats from Wildfire Smoke
Smoke from wildfires is a mixture of gases and fine particles from burning trees and other plant materials. Smoke can hurt your eyes, irritate your respiratory system, and worsen chronic heart and lung diseases.
Know if you are at risk
- If you have heart or lung disease, such as congestive heart failure, angina, COPD, emphysema or asthma, you are at higher risk of having health problems from smoke.
- Older adults are more likely to be affected by smoke, possibly because they are more likely to have heart or lung diseases than younger people.
- Children are more likely to be affected by health threats from smoke because their airways are still developing and because they breathe more air per pound of body weight than adults. Children also are more likely to be active outdoors.
Recommendations for people with chronic diseases
- Have an adequate supply of medication (more than five days' worth).
- If you have asthma, make sure you have a written asthma management plan.
- If you have heart disease, check with your health care providers about precautions to take during smoke events.
- If you plan to use a portable air cleaner, select a high efficiency particulate air (HEPA) filter or an electro-static precipitator (ESP). Buy one that matches the room size specified by the manufacturer.
- Call your health care provider if your condition gets worse when you are exposed to smoke.
Recommendations for everyone: Limit your exposure to smoke
- Pay attention to local air quality reports.
Listen and watch for news or health warnings about smoke. Find out if your community provides reports about the Environmental Protection Agency's Air Quality Index (AQI). Also pay attention to public health messages about taking additional safety measures.
- Refer to visibility guides if they are available.
Not every community has a monitor that measures the amount of particles that are in the air. In the Western part of the United States, some communities have guidelines to help people estimate the Air Quality Index (AQI) based on how far they can see.
- If you are advised to stay indoors, keep indoor air as clean as possible.
Keep windows and doors closed unless it is extremely hot outside. Run an air conditioner if you have one, but keep the fresh air intake closed and the filter clean to prevent outdoor smoke from getting inside. Running a high efficiency particulate air (HEPA) filter or an electro-static precipitator (ESP) can also help you keep your indoor air clean. If you do not have an air conditioner and it is too warm to stay inside with the windows closed, seek shelter elsewhere.
- Do not add to indoor pollution.
When smoke levels are high, do not use anything that burns, such as candles, fireplaces, or gas stoves. Do not vacuum, because vacuuming stirs up particles already inside your home. Do not smoke, because smoking puts even more pollution into the air.
- Do not rely on masks for protection.
Paper "comfort" or "dust" masks commonly found at hardware stores are designed to trap large particles, such as sawdust. These masks will not protect your lungs from smoke. There are also specially designed air filters worn on the face called respirators. These must be fitted, tested and properly worn to protect against wildfire smoke. People who do not properly wear their respirator may gain a false sense of security. If you choose to wear a respirator, select an “N95” respirator, and make sure you find someone who has been trained to help you select the right size, test the seal and teach you how to use it. It may offer some protection if used correctly. For more information about effective masks, see the Respirator Fact Sheet provided by CDC’s National Institute for Occupational Safety and Health.
For the public
Is your air quality hazardous to your health?
Fact Sheet: Hazy, smoky air: Do you know what to do?
Frequently Asked Questions: Wildfire Smoke and Your Health
Public health guidance for school outdoor activities during wildfire events
For more information, schools should contact their local health department.
Please contact Oregon OSHA for employer resources.
For pregnant women and infants
Information for pregnant women and parents of young infants
For public health, health care and providers
See Also: Clean Air at Home |
Learn more about one of our history levels:
With Curiosity Chronicles your child will:
Learn to discuss complex ideas
Our books are written as a back-and-forth dialogue between our main characters, Ted and Mona. The dialogue format allows us to present alternative views and competing historical facts. It allows us to discuss the role of often underrepresented people who are ignored by narrative-heavy history. The dialogue also allows Ted and Mona to break down complex ideas piece by piece. This not only makes complex ideas accessible but it also models for students how to discuss complex ideas.
Develop critical thinking skills
Students will learn not only what happened in history but also why those events happened. Students will learn the building blocks of human societies and how one change often leads to another. With Ted and Mona as their guides, students will learn to analyze events and apply those skills to new situations.
Engage in hands-on learning
The learning doesn't stop when the chapter is over. Our activity guides include map work, timelines, supplemental reading, art projects, science projects, research projects, recipes, Minecraft building prompts, review activities, and more to ensure your student will be able to expand and build upon their learning in a meaningful way. We also have lapbooks and interactive notebooks for even more expanded learning.
Find tangential learning
Our chapters often open the door to tangential and cross-subject learning. Is your child learning about place value in math? Our chapter about the Maya in Snapshots of Ancient History explores exactly what place value is and why it's so important. Learning about germs in science class? Our chapter on the Islamic Golden Age in Snapshots of Medieval History discusses the origins of germ theory and why it mattered. Looking for even more learning? Each chapter is followed by a "Want to know more?" box with suggested topics for further study.
Jessica, The Waldock Way
I grew up hating history and was not looking forward to teaching it in our homeschool. But, Curiosity Chronicles has changed that. The dialogue format makes it fun for me and my daughter to read it together and really bring history to life. My daughter is obsessed with the Minecraft aspect of it too and looks forward to each chapter because of it. Thank you for such an in-depth, well-thought-out program.
Elizabeth, Homeschool Mom
My kids love the Minecraft building challenge for each chapter—they often request history because they know they will get some game time at the end.
Arlene, Homeschooler By Design
When you have kiddos with special needs it can be difficult to find a program that addresses their various learning styles...Curiosity Chronicles does just that.
The multi-sensory approach takes vastly different avenues and includes activities that explore history through an expansive worldview, truly connecting kids with an enriching history education. They are not only able to see, hear, and learn from the deeper focus on innovation and discovery...but the author is also conscious of today's kids' main interests and meets them where they are. For example, every chapter includes Minecraft challenges! Such combinations are truly unheard of, and it has become a center point of our homeschool days and rhythm.
Rebecca, Homeschool Mom
Wow! It's incredible. I looked for ages for exactly this kind of history curriculum. I cannot say enough wonderful things about it.
Eileen, Homeschool Mom
I really, really love how this curriculum focuses on the achievements and competence of past civilizations... It is diverse, covering civilizations from all around the globe... The information is interesting and the depth and content age-appropriate...By far the best elementary history curriculum available at the moment!
Sarah, Homeschool Mom
I didn't know what to expect when buying this curriculum and my son is LOVING it. I have a gifted 9 year old who gets bored easily if something is too easy but also doesn't like dry. This is challenging and engaging. |
Sometime in the 2030s, the not-too-distant future, NASA hopes to send the first manned mission to Mars. That means the space agency has about two decades to solve the many, many problems with a hypothetical mission.
Back in 2013, Wired listed the challenges we’ll have to overcome to bring people to the Red Planet:
“… we can’t properly store the necessary fuel long enough for a Mars trip, we don’t yet have a vehicle capable of landing people on the Martian surface, and we aren’t entirely sure what it will take to keep them alive once there.”
So in looking for solutions, NASA is casting a wide net—including reaching out to citizen scientists. To that end, the agency invited the public to participate in a challenge to help solve one problem: how to protect astronauts in deep space.
In November of last year, NASA and crowdsourcing company Innocentive announced the challenge and the promise of a cash prize. The agency laid out the problem:
“Galactic Cosmic Rays (GCR) permeate the universe and exposure to them is inevitable during space exploration… long duration exposure to GCR will exceed allowable career radiation exposure limits during any meaningful deep space mission.”
And the necessary solution:
“Protection is needed to allow for safe and successful long duration human missions such as going to Mars. NASA is seeking to identify key solutions that will protect astronauts from GCR, specifically a way to reduce exposure.”
Contestants had just over a month to submit ideas for consideration, and $5,000 in prize money was at stake.
In April, the winner was announced: Experimental nuclear physicist Dr. George Hitt, an assistant professor of Physics and Nuclear Engineering at Khalifa University in the United Arab Emirates. In an email to Fusion, Hitt described his $5,000 idea as a play on traditional methods of shielding astronauts from the harmful rays.
According to Hitt, “The best known method of [radiation] protection is to surround travellers with several meters of material that slows or absorbs the radiation.” In essence, Hitt wants to protect space travelers from radiation by building an interplanetary transit system:
“The shield and its orbit function like a city bus. At regular intervals, it will come close enough to Earth for a spacecraft to fly up to it and hitch a ride inside of it. After riding to Mars orbit, it would detach and leave the shield material behind, to return back to Earth. When the travellers want to return to Earth, they would again wait for the right time, rendezvous with the shield and ride inside of it back to Earth's orbit.”
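The mass savings behind this "city bus" scheme can be made concrete with a rough estimate. The sketch below computes the mass of a water shield "several meters" thick wrapped around a small crew volume; every number is an illustrative assumption, not a figure from Hitt's proposal:

```python
import math

# Illustrative assumptions (not from the article):
inner_radius_m = 2.0      # radius of the habitable volume around the crew
shield_thickness_m = 2.0  # "several meters of material"
water_density = 1000.0    # kg/m^3; water is a commonly discussed shield material

# Volume of the spherical shell of shielding material
outer_radius_m = inner_radius_m + shield_thickness_m
shell_volume_m3 = (4.0 / 3.0) * math.pi * (outer_radius_m**3 - inner_radius_m**3)

# Mass in metric tonnes
shield_mass_t = shell_volume_m3 * water_density / 1000.0
print(f"Shield mass: about {shield_mass_t:.0f} tonnes")
```

Even this modest geometry yields a shield of roughly 230 tonnes, more than any single launch vehicle has ever placed in orbit, which is why launching the shield once and reusing it across missions is attractive.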
Hitt says it didn’t take him very long to come up with the solution. “Actually, the idea for separating shield and ship came to me while playing LEGOS with my children one Saturday afternoon. I wrote and submitted the solution by the same evening!”
Not too shabby for an after-playtime activity. Still, the winning ideas didn't solve the ultimate problem, though they do bring us closer to a solution. NASA said in a statement:
“While the five winners selected in the first challenge did not identify a solution that ultimately solves the problem of GCR risk to human crews, the first place idea did provide a novel approach to using and configuring known methods of protection to save substantial launch mass and lower launch costs over multiple missions.”
NASA’s Kerry Lee, the challenge’s technical lead, confirmed the sentiment in an email to Fusion: “We didn’t receive any breakthrough solutions that brought us to an 'ah ha' moment.” Of 136 submissions, he added, the ideas ranged "from absolutely silly and some absurd with no physical possibility of working all the way to well thought out concepts of operation in dealing with the radiation problem.”
Markus Novak, who tied for fourth place and walked away from the challenge with $1,000, based his solution on basic science. Novak, an Electrical Engineering PhD student at the Ohio State University, told Fusion in an email that “cosmic radiation wasn't something I had ever given much thought to before. But it ultimately boiled down to high school physics.”
Novak proposed a safe space for astronauts to travel through: “In a nutshell, I had found some previous NASA research that used magnets to deflect the rays—but the thing is, cosmic rays are coming in with absurd amounts of energy… so it takes quite a lot of power to turn these things back. So instead, I developed a lens, which would alter the trajectory just enough to miss the spacecraft.”
Now, the challenge doors are again open, and the stakes are higher. “The follow-on challenge offers an award of up to $30,000 for design ideas to protect the crew on long-duration space missions,” said NASA. The agency is accepting applications through June 29.
Danielle Wiener-Bronner is a news reporter.
Poland and reform in Eastern Europe
- Countries in Eastern Europe responded to the secret speech by pushing for reform.
Unrest in Poland 1956
- Following the speech the people of Poland began to challenge communist rule.
- The first serious uprising took place at Poznan and focused on:
- Food shortages
- Lack of consumer goods
- Poor housing
- Khrushchev, alarmed at the situation and under pressure from hardliners, led a delegation to Warsaw to reassert control.
- It had to deal with Wladyslaw Gomulka - Poland's communist leader and a very shrewd politician.
- He made it clear to the Soviet delegation that the people of Poland were demanding reform; however, he emphasised that Poland's reforms would not affect its relationship with the USSR.
- He had no intention of:
- abandoning communism
- leaving the Warsaw Pact
- Radio Free Europe broadcast what had been achieved in Poland and Gomulka became a hero to the student community in Budapest which began to demand reform in Hungary.
Events in Hungary 1956
- Since the end of WW2 Hungary had been ruled by a hard-line Stalinist - Rakosi.
- The new, moderate…
Syncope is the medical term for fainting. It happens when your brain doesn't get enough blood flow and you lose consciousness. Usually a slow heart rate causes a drop in blood pressure, which reduces blood flow to the brain. In most cases, you recover within seconds or minutes. A small number of people, mostly the elderly, have episodes of fainting.
If you have slurred speech, or have trouble moving an arm or a leg after fainting, call for emergency help right away. This may be a sign of stroke.
Signs and Symptoms
Signs and symptoms that you might have before you faint include:
- Feeling warm
- Blurred vision
- Heaviness in your legs
- Nausea and sometimes vomiting
In addition to losing consciousness when you faint, you may also:
- Turn very pale
- Fall down or slump
- Have spasmodic jerks of your body
- Have a weak pulse
- Experience a drop in your blood pressure
What Causes It?
Fainting often happens due to a simple, nonmedical cause, such as:
- Standing up for long periods of time
- Feeling emotionally distressed
- Seeing something upsetting or disturbing, such as the sight of blood
Rarely, it may be the result of a health condition, such as:
- Heart disease (decreased blood flow to the heart or irregular heart rhythm)
- Low blood sugar
- Panic attack
- Problems regulating blood pressure
- Severe blood loss
Who Is Most At Risk?
Certain conditions or characteristics may put you at risk for fainting, such as:
- Being over 65 years of age
- Having heart disease, diabetes, or high blood pressure
- Using recreational drugs
- Taking certain medications, such as blood pressure medication, insulin, oral diabetes medications, diuretics (water pills), medications to control heart rhythm, or blood thinners
What to Expect at Your Doctor's Office
You should see your doctor after fainting. Your doctor will:
- Ask questions about what you were doing before you fainted.
- Ask how you felt afterward.
- Do a physical exam.
- Conduct other tests, such as blood tests and electrocardiogram (ECG).
- Conduct imaging of the brain, such as magnetic resonance imaging (MRI).
- Focus on medications you take.
- Consider any pre-existing medical conditions you might have.
- Compare your most recent fainting spell with similar episodes you had in the past.
This will help your doctor pinpoint why you fainted and rule out certain health conditions. If seizures are suspected, your doctor may also do a test called an electroencephalogram (EEG).
To avoid fainting:
- Avoid fatigue, hunger, and stress. DO NOT skip meals.
- Drink plenty of fluids.
- Avoid changing positions quickly, especially when you get up from a sitting or lying down position.
- Sleep with the foot of your bed raised.
- DO NOT stand for long periods of time.
- Wear elastic stockings if needed to keep blood from pooling in your legs, which may reduce blood flow to the brain.
- Diuretics and other medicines (both prescription and non-prescription) can contribute to the problem. So check with your doctor.
- Avoid wearing anything tight around your neck.
- Turn your whole body, not just your head, when looking around.
- To prevent injuries, cover floors with thick carpeting, and avoid driving or using mechanical equipment.
- Avoid caffeine and alcohol.
If you feel like you are going to faint, lie down and raise your legs to keep blood flowing to your brain. If you can't lie down, sit down and put your head between your knees, or stand with your legs crossed and thighs pressed together. This can also help keep blood from pooling in your legs.
Any serious underlying health condition should be treated. When a person faints:
- Raise the legs to help increase blood flow to the brain.
- Loosen any tight clothing.
- Apply cold water to the person's face.
- Turn the person's head to the side to prevent vomiting or choking.
A pregnant woman should lie on her left side to relieve pressure on the heart.
When an irregular heartbeat causes fainting, your doctor may prescribe medications such as beta-blockers or antiarrhythmics. Your doctor may also prescribe steroids (such as fludrocortisone) or salt tablets to help you control the amount of sodium and fluids in your body.
Surgical and Other Procedures
If fainting is caused by a heart condition, such as a slow or rapid heartbeat, you may need a pacemaker.
Complementary and Alternative Therapies
Although there are no specific treatments for fainting, a number of alternative therapies can help protect the heart and blood vessels. Fainting may be caused by a serious underlying health condition. So check with your doctor before taking any herbs or supplements. Always tell your doctor about the herbs and supplements you are using or considering using.
You may have warning signs before fainting. Hypnosis, deep breathing, relaxation techniques, and biofeedback may help you avoid fainting. These techniques may also help you control fainting related to regulation of your blood pressure.
Nutrition and Supplements
To stay healthy and avoid fainting:
- DO NOT skip meals. Eat a healthy diet, with plenty of fruits and vegetables, whole grains, healthy protein, and good fats.
- Avoid caffeine, alcohol, and tobacco.
- Drink plenty of fluids.
These supplements may promote heart health:
- Omega-3 fatty acids. Fish oil, for example, may help reduce inflammation and improve heart health. Cold-water fish, such as salmon or halibut, are good sources. Omega-3 fatty acids may increase the risk for bleeding, especially if you also take blood thinners, such as warfarin (Coumadin), clopidogrel (Plavix), or aspirin.
- Coenzyme Q10 (CoQ10). An antioxidant that may be good for heart health. DO NOT take CoQ10 if you take blood thinners, such as warfarin (Coumadin), clopidogrel (Plavix), or aspirin, because CoQ10 can make these drugs less effective.
- Alpha-lipoic acid. An antioxidant that may be good for heart health. People who take thyroid hormone should ask their doctors before taking alpha-lipoic acid. People who have low levels of thiamine should not take alpha-lipoic acid.
- L-arginine. An antioxidant that may help promote good circulation. Be sure to ask your doctor before taking L-arginine because it may interfere with other treatments and may not be right for you. People who have a history of a heart attack, heart disease, low blood pressure, or circulatory issues should speak to their doctors before taking L-arginine. People who take medication for circulation, including medications for erectile dysfunction, should also take caution when taking L-arginine. It can also cause problems with blood pressure, as well as make herpes infections worse. Some people may be allergic to L-arginine.
The use of herbs is a time-honored approach to strengthening the body and treating disease. However, herbs can trigger side effects and interact with other herbs, supplements, or medications. For these reasons, take herbs with care and under the supervision of a health care provider.
- Green tea (Camellia sinensis). An antioxidant and anti-inflammatory that may be good for heart health. Use caffeine-free products. You may also make teas from the leaf of this herb.
- Bilberry (Vaccinium myrtillus). An antioxidant that helps promote good circulation. Bilberry may increase the risk for bleeding, especially if you also take blood thinners, such as warfarin (Coumadin), clopidogrel (Plavix), or aspirin. People with low blood pressure, heart disease, diabetes, or blood clots should not take bilberry without first talking to their doctors. DO NOT take bilberry if you are pregnant or breastfeeding.
- Ginkgo (Ginkgo biloba). An antioxidant that may be good for heart health. Ginkgo interacts with many medications, including blood thinners, such as warfarin (Coumadin) and clopidogrel (Plavix). People with diabetes, fertility problems, a history of seizures, and bleeding disorders may not be able to take ginkgo. Because of the potential for many interactions, DO NOT take ginkgo without your doctor's supervision.
Sometimes, fainting may be due to drops in a hormone called cortisol. Ask your doctor about testing for low cortisol. Some doctors may prescribe cortisol hormone supplements or use nutrients and herbs to get cortisol levels back to normal.
Homeopathy
Before prescribing a remedy, homeopaths take into account a person's constitutional type, which includes your physical, emotional, and intellectual makeup. An experienced and certified homeopath will assess your individual constitution and symptoms, and then recommend remedies. Below are common remedies used for fainting or pre-fainting symptoms:
- Carbo vegetabilis. Used for fainting or lightheadedness after rising in the morning, from loss of fluids, or from becoming overheated.
- Opium. Used for fainting due to excitement or fright.
- Sepia. Used for fainting following prolonged standing, exercise, or fluid loss due to fever.
Acupuncture may help treat fainting. A clinical analysis of 102 serious cases of loss of consciousness reported that acupuncture helped in a large number of these cases.
Acupuncture does not often cause side effects or complications. Some people may faint during acupuncture treatments, although it is not considered a serious complication.
In most people, fainting is not a sign of a life-threatening disease, particularly if it only happens once. The elderly have a higher risk for injury after a fainting episode, especially from fractures. People who faint due to heart disease tend to have a poorer prognosis than those who have heart disease without fainting.
Many people who faint, especially the elderly and those who have heart disease, may be hospitalized to look for a cause. Continuous ECG monitoring can help spot an irregular heartbeat as a cause of fainting, especially in people who faint more than once.
Ahlemeyer B, Krieglstein J. Neuroprotective effects of Ginkgo biloba extract. Cell Mol Life Sci. 2003;60(9):1779-1792.
Alboni P, Dinelli M, Gianfranchi L, Pacchioni F. Current treatment of recurrent vasovagal syncope: between evidence-based therapy and common sense. J Cardiovasc Med (Hagerstown). 2007 Oct;8(10):835-839.
Basu HN, Liepa GU. Arginine: a clinical perspective. Nutr Clin Pract. 2002;17(4):218-225.
Bast A, Haenen GR. Lipoic acid: a multifunctional antioxidant. Biofactors. 2003;17(1-4):207-213.
Beers MH, Porter RS, et al. The Merck Manual of Diagnosis and Therapy. 18th ed. Whitehouse Station, NJ: Merck Research Laboratories; 2006:584-588.
Bell DR, Gochenaur K. Direct vasoactive and vasoprotective properties of anthocyanin-rich extracts. J Appl Physiol. 2006;100(4):1164-1170.
Cabrera C, Artacho R, Gimenez R. Beneficial effects of green tea -- a review. J Am Coll Nutr. 2006;25(2):79-99.
Carillo-Vico A, Reiter RJ, Lardone PJ, et al., The modulatory role of melatonin on immune responsiveness. Curr Opin Investig Drugs. 2006;7(5):423-431.
Ferri FF, ed. Ferri's Practical Guide: Fast Facts for Patient Care. 9th ed. Philadelphia, PA: Elsevier Mosby; 2014.
Fontani G, Corradeschi F, Felici A, et al. Cognitive and physiological effects of Omega-3 polyunsaturated fatty acid supplementation in healthy subjects. Eur J Clin Invest. 2005;35(11):691-699.
Graf D, Schlaepfer J, Gollut E, et al. Predictive models of syncope causes in an outpatient clinic. Int J Cardiol. 2008;123(3):249-256.
Grubb BP, Karabin B. Syncope: evaluation and management in the geriatric patient. Clin Geriatr Med. 2012;28(4):717-728.
Khera S, Palaniswamy C, Aronow WS, et al. Predictors of mortality, rehospitalization for syncope, and cardiac syncope in 352 consecutive elderly patients with syncope. J Am Med Dir Assoc. 2013;14(5):326-330.
Kimura K, Ozeki M, Juneja LR, Ohira H. L-Theanine reduces psychological and physiological stress responses. Biol Psychol. 2007;74(1):39-45.
Mehlsen J, Mehlsen AB. Diagnosis and treatment of syncope. Ugeskr Laeger. 2008;170(9):718-723. Review.
No authors listed. L-theanine. Monograph. Altern Med Rev. 2005;10(2):136-138.
Ntusi NA, Coccia CB, Cupido BJ, Chin A. An approach to the clinical assessment and management of syncope in adults. S. Afr Med J. 2015;105(8):690-693.
Numeroso F, Mossini G, Lippi G, Cervellin G. Evaluation of the current prognostic role of cardiogenic syncope. Intern Emerg Med. 2013;8(1):69-73.
Ortega RM, Palencia A, Lopez-Sobaler AM. Improvement of cholesterol levels and reduction of cardiovascular risk via the consumption of phytosterols. Br J Nutr. 2006;96 Suppl 1:S89-S93.
Pandi-Perumal SR, Srinivasan V, Maestroni GJ, et al., Melatonin. FEBS J. 2006;273(13):2813-2838.
Ruwald MH, Hansen ML, Lamberts M, et al. Prognosis among healthy individuals discharged with a primary diagnosis of syncope. J Am Coll Cardiol. 2013;61(3):325-332.
Ruwald MH, Hansen ML, Lamberts M, et al. The relation between age, sex, comorbidity, and pharmacotherapy and the risk of syncope: a Danish nationwide study. Europace. 2012;14(10):1506-1514.
Ryan DJ, Harbison JA, Meaney JF, et al. Syncope causes transient focal neurological symptoms. QJM. 2015;108(9):711-718.
Simopoulos AP. Omega-3 fatty acids in inflammation and autoimmune diseases. J Am Coll Nutr. 2002;21(6):495-505.
Skibska B, Jozefowicz-Okonkwo G, Goraca A. Protective effects of early administration of alpha-lipoic acid against lipopolysaccharide-induced plasma lipid peroxidation. Pharmacol Rep. 2006;58(3):399-404.
Yeh GY, Davis RB, Phillips RS. Use of complementary therapies in patients with cardiovascular disease. Am J Cardiol. 2006;98(5):673-680.
Yoon JH, Baek SJ. Molecular targets of dietary polyphenols with anti-inflammatory properties. Yonsei Med J. 2005;46(5):585-596.
Review Date: 11/19/2016
Reviewed By: Steven D. Ehrlich, NMD, Solutions Acupuncture, a private practice specializing in complementary and alternative medicine, Phoenix, AZ. Review provided by VeriMed Healthcare Network. Also reviewed by the A.D.A.M Editorial team.
1. EQUATION: A statement of equality is called an equation.
2. Linear equation in one variable: An equation of the form ax + b = 0, where a, b are real numbers (a ≠ 0), is called a linear equation in one variable.
3. Linear equation in two variables: An equation of the form ax + by + c = 0, where a, b, c are real numbers (a ≠ 0, b ≠ 0), is called a linear equation in two variables x and y.
4. Consistent system of linear equations: A system of two linear equations in two unknowns is said to be consistent if it has at least one solution.
5. Inconsistent system of linear equations: If a system has no solution, it is called inconsistent.
The system of a pair of linear equations a1x + b1y + c1 = 0, a2x + b2y + c2 = 0
(i) has no solution if a1/a2 = b1/b2 ≠ c1/c2
(ii) has an infinite number of solutions if a1/a2 = b1/b2 = c1/c2
(iii) has exactly one solution if a1/a2 ≠ b1/b2
6. Algebraic methods: (i) Method of substitution (ii) Method of elimination by addition or subtraction (iii) Method of cross multiplication for: a1x + b1y + c1 = 0, a2x + b2y + c2 = 0
x = (b1c2 - b2c1) / (a1b2 - a2b1), y = (c1a2 - c2a1) / (a1b2 - a2b1)
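Since the notes give no programming language, here is a Python sketch (the language choice is an assumption; exact fractions avoid rounding error) that applies the consistency tests and the cross-multiplication formulas above:

```python
from fractions import Fraction

def classify_and_solve(a1, b1, c1, a2, b2, c2):
    """Classify a1*x + b1*y + c1 = 0, a2*x + b2*y + c2 = 0 and solve if unique.

    Returns ("unique", (x, y)), ("infinite", None), or ("none", None).
    """
    a1, b1, c1, a2, b2, c2 = map(Fraction, (a1, b1, c1, a2, b2, c2))
    det = a1 * b2 - a2 * b1          # a1/a2 != b1/b2  <=>  det != 0
    if det != 0:
        # Cross-multiplication formulas from the notes
        x = (b1 * c2 - b2 * c1) / det
        y = (c1 * a2 - c2 * a1) / det
        return "unique", (x, y)
    # det == 0: the lines are parallel; they coincide exactly when the
    # remaining cross products also match (infinite solutions).
    if b1 * c2 == b2 * c1 and a1 * c2 == a2 * c1:
        return "infinite", None
    return "none", None
```

For example, the pair x + y - 3 = 0 and x - y - 1 = 0 has a1/a2 = 1 ≠ b1/b2 = -1, so the function reports a unique solution, (2, 1).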
Although confined to a wheelchair for most of his life, Stephen Hawking revolutionized modern astrophysics by uncovering the mysteries of black holes – areas of space with such gravitational strength that not even light can escape them.
In his early twenties, Hawking contracted ALS (or Lou Gehrig's disease) – an illness which would slowly paralyze him over the course of his life. Despite his condition, he rose to prominence in Cambridge as a leading theoretical physicist. Inspired by the work of Roger Penrose, he began doing research on black holes and their relationship to the structure of the universe. He discovered that black holes emit radiation, now called Hawking Radiation, which causes them to gradually shrink, countering previous theories that they remain fixed.
In 1988, Hawking published the classic "A Brief History of Time: From the Big Bang to Black Holes," in which he outlined the history of cosmology and attempted to devise a unified theory of the universe, connecting general relativity and quantum mechanics. The work sold over 10 million copies and became a cornerstone of modern physics and cosmology.
Stephen Hawking died on March 14th 2018 – the 139th anniversary of Albert Einstein’s birth.
- Stephen Hawking, Wikipedia.
Aerosol spray is a type of dispensing system which creates an aerosol mist of liquid particles. It is used with a can or bottle that contains a payload and propellant under pressure. When the container's valve is opened, the payload is forced out of a small hole and emerges as an aerosol or mist. As propellant expands to drive out the payload, only some propellant evaporates inside the can to maintain a constant pressure. Outside the can, the droplets of propellant evaporate rapidly, leaving the payload suspended as very fine particles or droplets.
The concept of the aerosol probably goes back as far as 1790. The first aerosol spray can patent was granted in Oslo in 1927 to Erik Rotheim, a Norwegian chemical engineer, and a United States patent was granted for the invention in 1931. The patent rights were sold to a United States company for 100,000 Norwegian kroner. The Norwegian Postal Service, Posten Norge, celebrated the invention by issuing a stamp in 1998.
In 1939, American Julian S. Kahn received a patent for a disposable spray can, but the product remained largely undeveloped. Kahn's idea was to mix cream and a propellant from two sources to make whipped cream at home—not a true aerosol in that sense. Moreover, in 1949, he disclaimed his first four claims, which were the foundation of his following patent claims.
It was not until 1941 that the aerosol spray can was first put to good use by Americans Lyle Goodhue and William Sullivan of the United States Bureau of Entomology and Plant Quarantine, who are credited as the inventors of the modern spray can. Their design of a refillable spray can dubbed the aerosol bomb or bug bomb, is the ancestor of many popular commercial spray products. It was a hand-sized steel can charged with a liquefied gas under 75 pounds of pressure and a product to be expelled as a mist or a foam. A public-service patent was issued on the invention and assigned to the Secretary of Agriculture for the free use of the people of the United States. Pressurized by liquefied gas, which gave it propellant qualities, the small, portable can enabled soldiers to defend against malaria-carrying mosquitoes by spraying inside tents and airplanes in the Pacific during World War II. Goodhue and Sullivan received the first Erik Rotheim Gold Medal from the Federation of European Aerosol Associations on August 28, 1970, in Oslo, Norway in recognition of their early patents and subsequent pioneering work with aerosols.
In 1948, three companies were granted licenses by the United States government to manufacture aerosols. Two of the three companies, Chase Products Company and Claire Manufacturing, still manufacture aerosols to this day. The "crimp-on valve", used to control the spray in low-pressure aerosols was developed in 1949 by Bronx machine shop proprietor Robert H. Abplanalp.
In 1974, Drs. Frank Sherwood Rowland and Mario J. Molina proposed that chlorofluorocarbons, used as propellants in aerosol sprays, contributed to the depletion of Earth's ozone layer. In response to this theory, the U.S. Congress passed amendments to the Clean Air Act in 1977 authorizing the Environmental Protection Agency to regulate the presence of CFCs in the atmosphere. The United Nations Environment Programme called for ozone layer research that same year, and, in 1981, authorized a global framework convention on ozone layer protection. In 1985, Joe Farman, Brian G. Gardiner, and Jon Shanklin published the first scientific paper detailing the hole in the ozone layer. That same year, the Vienna Convention was signed in response to the UN's authorization. Two years later, the Montreal Protocol, which regulated the production of CFCs was formally signed. It came into effect in 1989. The U.S. formally phased out CFCs in 1995.
If aerosol cans were simply filled with compressed gas, the gas would either need to be at a dangerously high pressure, requiring special pressure-vessel design (as in gas cylinders), or the amount of payload in the can would be small and would rapidly deplete. Usually the gas is the vapor of a liquid with a boiling point slightly lower than room temperature. This means that inside the pressurized can, the vapor can exist in equilibrium with its bulk liquid at a pressure that is higher than atmospheric pressure (and able to expel the payload), but not dangerously high. As gas escapes, it is immediately replaced by evaporating liquid. Since the propellant exists in liquid form in the can, it should be miscible with the payload or dissolved in the payload. In gas dusters and freeze sprays, the propellant itself acts as the payload. The propellant in a gas duster can is not "compressed air", as sometimes assumed, but usually a haloalkane.
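The liquid–vapor equilibrium described above can be roughly quantified with the Clausius–Clapeyron relation. The sketch below estimates the room-temperature vapor pressure of propane, a common propellant; the boiling point and enthalpy of vaporization are approximate textbook values, and the single-term Clausius–Clapeyron form is itself only an approximation:

```python
import math

# Approximate textbook values for propane (assumptions, not measured data):
R = 8.314        # gas constant, J/(mol*K)
Tb = 231.0       # normal boiling point, K (about -42 C, at 1 atm)
dHvap = 18800.0  # enthalpy of vaporization near the boiling point, J/mol
T = 298.0        # room temperature, K

# Clausius-Clapeyron: ln(P / 1 atm) = -(dHvap / R) * (1/T - 1/Tb)
p_atm = math.exp(-(dHvap / R) * (1.0 / T - 1.0 / Tb))
print(f"Estimated propane vapor pressure at 25 C: about {p_atm:.1f} atm")
```

The estimate lands near 9 atm: comfortably above atmospheric pressure, so the can self-pressurizes and expels its payload, yet far below the hundreds of atmospheres found in compressed-gas cylinders.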
Chlorofluorocarbons (CFCs) were once often used as propellants, but since the Montreal Protocol came into force in 1989, they have been replaced in nearly every country due to the negative effects CFCs have on Earth's ozone layer. The most common replacements of CFCs are mixtures of volatile hydrocarbons, typically propane, n-butane and isobutane. Dimethyl ether (DME) and methyl ethyl ether are also used. All these have the disadvantage of being flammable. Nitrous oxide and carbon dioxide are also used as propellants to deliver foodstuffs (for example, whipped cream and cooking spray). Medicinal aerosols such as asthma inhalers use hydrofluoroalkanes (HFA): either HFA 134a (1,1,1,2-tetrafluoroethane) or HFA 227 (1,1,1,2,3,3,3-heptafluoropropane) or combinations of the two. More recently, liquid hydrofluoroolefin (HFO) propellants have become more widely adopted in aerosol systems due to their relatively low vapor pressure, low global warming potential (GWP), and non-flammability. Manual pump sprays can be used as an alternative to a stored propellant.
Liquid aerosol propellant filling machines require additional precautions, such as being mounted externally to the production warehouse in a gas house. Liquid aerosol propellant machines are typically constructed to comply with ATEX Zone II/2G regulations (classification Zone 1).
Modern aerosol spray products have three major parts: the can, the valve and the actuator or button. The can is most commonly lacquered tinplate (steel with a layer of tin) and may be made of two or three pieces of metal crimped together. Aluminium cans are also common and are generally used for more expensive products or products intended to have a more premium appearance, such as personal care products. The valve is crimped to the inside rim of the can, and the design of this component is important in determining the spray rate. The actuator is depressed by the user to open the valve; a spring closes the valve again when it is released. The shape and size of the nozzle in the actuator controls the aerosolized particle size and the spread of the aerosol spray.
Non-propellant packaging alternatives
Packaging that uses a piston barrier system by CCL Industries or EarthSafe by Crown Holdings is often selected for highly viscous products such as post-foaming hair gels, thick creams and lotions, food spreads and industrial products and sealants. The main benefit of this system is that it eliminates gas permeation and assures separation of the product from the propellant, maintaining the purity and integrity of the formulation throughout its consumer lifespan. The piston barrier system also provides a consistent flow rate with minimal product retention.
Another type of dispensing system is the bag-in-can (or BOV, bag-on-valve technology) system where the product is separated from the pressurizing agent with a hermetically sealed, multi-layered laminated pouch, which maintains complete formulation integrity so only pure product is dispensed. Among its many benefits, the bag-in-can system extends a product's shelf life, is suitable for all-attitude, (360-degree) dispensing, quiet and non-chilling discharge. This bag-in-can system is used in the packaging of pharmaceutical, industrial, household, pet care and other products that require complete separation between the product and the propellant.
A new development is the 2K (two-component) aerosol. A 2K aerosol device has a main component stored in the main chamber and a second component stored in an accessory container. When the applicator activates the 2K aerosol by breaking the accessory container, the two components mix. The 2K aerosol has the advantage of delivering reactive mixtures. For example, a 2K reactive mixture can combine low-molecular-weight monomers, oligomers, and functionalized low-molecular-weight polymers to form a final cross-linked, high-molecular-weight polymer. 2K aerosols can increase solids content and deliver high-performance polymer products, such as curable paints, foams, and adhesives.
There are three main areas of health concern linked to aerosol cans:
- Aerosol contents may be deliberately inhaled to achieve intoxication from the propellant (known as inhalant abuse or "huffing"). Calling them "canned air" or "cans of compressed air" can mislead people into thinking they are harmless; in fact, deaths have resulted from such misuse.
- Aerosol burn injuries can be caused by the spraying of aerosol directly onto the skin, in a practice sometimes called "frosting". Adiabatic expansion causes the aerosol contents to cool rapidly on exiting the can.
- The propellants in aerosol cans are typically combinations of ignitable gases and have been known to cause fires and explosions. However, non-flammable compressed gases such as nitrogen and nitrous oxide have been widely adopted into a number of aerosol systems (such as air fresheners and aerosolized whipped cream) as have non-flammable liquid propellants.
- Bellis, Mary The History of Aerosol Spray Cans
- Norwegian Patent No. 46613, issued on November 23, 1926
- U.S. Patent 1,800,156 — Method and Means for the Atomizing or Distribution of Liquid or Semiliquid Materials, issued April 7, 1931
- Kvilesjø, Svend Ole (17 February 2003). "Sprayboksens far er norsk". Aftenposten (in Norwegian). Archived from the original on 30 June 2008. Retrieved 6 February 2009.
- U.S. Patent 2,170,531 — Appratus for Mixing a Liquid With a Gas, granted August 22, 1939.
- Carlisle, Rodney (2004). Scientific American Inventions and Discoveries, p.402. John Wiley & Songs, Inc., New Jersey. ISBN 0-471-24410-4.
- U.S. Patent 2,331,117, filed October 3, 1941, and granted October 5, 1943. Patent No. 2,331,117 (Serial No. 413,474) for an aerosol “dispensing apparatus”, filed by Lyle D. Goodhue and William N. Sullivan (including dispenser drawing)
- Kimberley A. McGrath and Bridget E. Travers, eds. World of Invention. Detroit: Thomson Gale. ISBN 0-7876-2759-3.
- Article “Aerosol Bomb”, by The Golden Home and High School Encyclopedia, Golden Press, New York, 1961.
- Article "Aerosols and Insects", by W.N. Sullivan, "The Yearbook of Agriculture - Insects", United States Department of Agriculture, 1952
- Core, Jim, Rosalie Marion Bliss, and Alfredo Flores. (September 2005) "ARS Partners With Defense Department To Protect Troops From Insect Vectors". Agricultural Research MagazineVol. 53, No. 9 .
- U.S. Patent 2,631,814 — Valve Mechanism for Dispensing Gases and Liquids Under Pressure; application September 28, 1949, issued March 17, 1953
- "Chloroflurocarbons CFCs History". Consumer Aerosol Products Council. Retrieved 2015-07-20.
- Clean Air Act Amendments of 1977 (91 Stat. 685, p. 726)
- Weiss, Edith Brown (2009). "The Vienna Convention for the Protection of the Ozone Layer and the Montreal Protocol on Substances That Deplete the Ozone Layer" (PDF). United Nations Audiovisual Library of International Law. United Nations. Retrieved 20 July 2015.
- Nash, Eric R. (23 September 2013). "History of the Ozone Hole". NASA Ozone Hole Watch. NASA. Retrieved 2015-07-20.
- "The Accelerated Phaseout of Class I Ozone-Depleting Substances". United States Environmental Protection Agency. 19 August 2010. Retrieved 2015-07-20.
- "Solstice® Propellant Technical Information" (PDF). Honeywell.
- "Aerosol Propellant / Pressurisation Filling Machine - R + R Aerosol Systems Ltd". R + R Midlands Ltd. Retrieved 2019-02-19.
- US5941462A, Sandor, "Variable spray nozzle for product sprayer", published 1999
- image: aerosol and bov pressurized containers, illustration
- "Dust Off Death". snopes.com. Retrieved 2015-05-24.
- "Deodorant burns on the increase". ABC News. 10 July 2007.
- "Paint & Aerosol Safety". uvm.edu. The University of Vermont. Retrieved 20 July 2015.
- "Solstice Propellant for Aerosols". Honeywell Aerosols. Retrieved 11 March 2019.
|Wikimedia Commons has media related to Spray cans.| |
Today, most engraving is done by computer-controlled machinery or laser marking systems, especially in industrial applications.
Before the advent of modern engraving equipment, a handheld burin or stylus was used to engrave an object, and for some smaller processes and more intricate, decorative purposes, hand engraving is still used. Today it is restricted to a few narrow fields, but is still seen in jewelry, firearms, small decorative pieces and some musical instruments.
Most common, however, are engraving machines controlled by CNC and CAD precision programming. Widely available engraving machines are fairly simple to use and can engrave a number of surfaces such as metal, glass, or plastic, on both straight and curved surfaces. Diamonds are typically used as the stylus, especially for machines required to engrave harder materials and metals.
Engraving equipment consists of three parts: a stylus or marking tool, a controller, and a surface. The stylus acts like a pencil, tracing designs on a surface. The controller, or computer, directs the tool's path, pressure, and speed (or, in laser systems, the beam's path and power). The surface is the material that the stylus acts on: the engraver's canvas.
A highly skilled engraver can achieve great detail in his or her work, and engraving is therefore often used for its counterfeit-deterring properties. Bank notes, checks, and other secure documents rely on engraved details that cannot be replicated by standard printers.
Engraving can also be used for decorative purposes – to inscribe a pattern or image onto the surface of an object, or it may be used to create a template die or stamp for printing the pattern or image onto another surface or material. Engraving can be a fairly quick process depending on the intricacy of the pattern or design. Simpler designs can be completed by a computer controlled engraving machine in a matter of minutes.
Manual engraving takes longer and typically yields less precise results. Laser engraving is the fastest method, and since laser markers have no tool heads to wear out, after the initial expense of buying the machine they are a fairly economical marking choice. Laser engraving is a clean and permanent process that doesn't damage the product being engraved, and it produces no by-products.
The most common type of laser engraving equipment is the X-Y table, where the workpiece is stationary and the laser moves in X and Y directions, creating vectors. |
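As a rough sketch of how a vector design becomes machine moves on an X-Y table, the snippet below converts a 2-D polyline into G-code-style commands. The function name, feed rate, and engrave depth are illustrative assumptions, not any particular machine's dialect.

```python
# Illustrative sketch (not tied to any particular engraver's firmware):
# convert a 2-D polyline into simple G-code-style moves for an X-Y table.
def polyline_to_gcode(points, feed=300, engrave_depth=-0.1):
    """points: list of (x, y) tuples describing the vector path, in mm."""
    lines = ["G21  ; units: mm", "G90  ; absolute positioning"]
    x0, y0 = points[0]
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f}  ; rapid move to start")
    lines.append(f"G1 Z{engrave_depth:.3f} F{feed}  ; lower stylus into the surface")
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed}  ; cut along the vector")
    lines.append("G0 Z1.000  ; lift stylus clear of the work")
    return "\n".join(lines)

# A 10 mm square traced as a closed polyline.
square = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
print(polyline_to_gcode(square))
```

In a real system the controller interpolates each G1 move into stepper pulses; the point of the sketch is only that a vector design reduces to an ordered list of X-Y line segments.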
As the number of climbers visiting the park has increased through the years, the impacts of climbing have become much more obvious. Some of those impacts include: soil compaction, erosion, and vegetation loss in parking areas, at the base of climbs, and on approach and descent trails; destruction of cliffside vegetation and lichen; disturbance of cliff-dwelling animals; litter; water pollution from improper human waste disposal; and the visual blight of chalk marks, pin scars, bolts, rappel slings, and fixed ropes. Many of these impacts can be eliminated or greatly reduced by following the minimum impact practices outlined in the conservation guidelines offered on this page. The impacts of your actions may seem insignificant, but when multiplied by the thousands of people who climb here every year they can have a significant, long-lasting effect.
Your help is needed to ensure that Yosemite remains a beautiful and healthy place for the future.
What you can do:
- Read and follow the guidelines and regulations.
- If you see climbers who are not following these guidelines, talk to them. Explain how they can minimize their impact, and why it is important that they do so.
- Clean up after others. Pick up trash when you see it, or return with friends on a rest day and do a thorough clean-up. Take part in organized clean-ups and other projects.
- Climb safely! Rescues endanger rescuers' lives, are expensive, and cause a lot of impact.
- Keep informed about closed areas, and respect these closures.
More than 100 climbing accidents occur in Yosemite each year; of these, 15-25 parties require a rescue. Climbing in Yosemite has inherent risks and climbers assume complete responsibility for their own safety. The National Park Service does not maintain routes; loose rock and other hazards can exist on any route. Rescue is not a certainty. If you get into difficulties, be prepared to get yourself out of them. Know what to do in any emergency, including injuries, evacuations, unplanned bivouacs, or rapid changes in weather. Safety depends on having the right gear and the right attitude. Practice self-rescue techniques before you need them! Courtesy is an element of safety. Falling rock or gear is a serious hazard. Be careful when climbing above others. Do not create a dangerous situation by passing another party without their consent.
The Yosemite Medical Clinic, located between Yosemite Village and The Majestic Yosemite Hotel (formerly The Ahwahnee), is equipped to handle climbing injuries. If you cannot get to the clinic on your own, call 911 for assistance.
If you are injured or stranded while on a climb and cannot self-rescue, yell for help to obtain assistance. If you require a helicopter evacuation, do only and exactly what you are told by rescue personnel.
At the current time, wilderness permits are not required for nights spent on a wall. It is illegal to camp at the base of any wall in Yosemite Valley. If you must bivouac on the summit, you are required to follow all regulations:
- Do not litter, toss, or cache anything. If you hauled it up, you can carry it down.
- If you must have a fire, use an existing fire ring.
- Do not build windbreaks, platforms, or other "improvements."
Half Dome: Camping at the base of Half Dome is legal, but a wilderness permit is required. Camping on the summit of Half Dome is prohibited.
- Fight litter! Don't toss anything off a wall, even if you intend to pick it up later. Don't leave food or water at the top or on ledges for future parties. Set a good example by picking up any litter you see, including tape wads and cigarette butts.
- Don't leave fixed ropes as permanent fixtures on approaches and descents. These are considered abandoned property and will be removed.
- Minimize erosion on your approach and descent. If an obvious main trail has been created, use it. Go slow on the way down to avoid pushing soil down the hill. Avoid walking on vegetation whenever possible.
- If you need to build a fire for survival during an unplanned bivouac on the summit, use an existing fire ring. Building a new fire ring or windbreak is prohibited. Make sure your fire is completely out before you leave.
- Clean extra, rotting slings off anchors when you descend. Bring earth-toned slings to leave on anchors.
- Check the Camp 4 kiosk or the Mountain Shop for the current Peregrine Falcon closures.
- On first ascents: Please think about the impacts that will be caused by your new climb. Is the approach susceptible to erosion? Is there a lot of vegetation on the rock? "Gardening" (i.e., killing plants) is illegal in Yosemite. Can the climb be done with a minimum of bolts? Motorized drills are prohibited.
Climbing Instruction and Guide Service
Contact Yosemite Mountaineering School and Guide Service at 209/372-1000 for information on rates and schedules. |
It is the bigger and older trees that provide resources in the abundance required by numerous animals. A tree may take one or two decades to begin flowering and setting seed, which it produces in increasing abundance as it matures. Numerous species of invertebrates, many birds, and a variety of mammals feed on these flowers and seeds. As the trees mature, their trunks and leaves also exude a variety of sweet substances used by many species. Invertebrates harbour within their rough, shedding bark, where they are eagerly sought out as food. Yellow-bellied and Squirrel Gliders chew channels through the bark to tap trees for sap. As the trunks and branches thicken, the trees provide more stable nesting and roosting sites, while enabling Koalas to hug them on hot days to keep cool.
Once a eucalypt tree is 120-180 years old it may start to develop hollows in its branches and trunk. In NSW at least 46 mammals, 81 birds, 31 reptiles and 16 frogs are reliant on tree hollows for shelter and nests. As the trees get bigger so do their hollows, and it may not be until a tree is over 220 years old that it develops hollows big enough for the largest species. Most eucalypts may only live for 300-500 years, though some are reputed to live for over 1,000 years (see The Importance of Old Trees).
photo: Dailan Pugh OAM
Crown of a Sydney Blue Gum (Koreelah SF) hundreds of years old showing the numerous broken branches and large hollows necessary for large-hollow dependent fauna
Natural forests may support 13–27 hollow-bearing trees per hectare, with numbers varying between species, and increasing on more productive, moister and flatter sites. On agricultural lands the numbers of hollow-bearing trees have been drastically reduced. Similarly they have been significantly reduced throughout the remnant forests by logging, prescribed burning and by culling in Timber Stand Improvement operations.
In State forests in north-east NSW logging prescriptions now require the retention of an average of 5 hollow-bearing trees per hectare within logging areas, though numbers have already been reduced below this level in many forests. Where retained, hollow-bearing trees continue to decline with each logging due to token implementation of prescriptions, poor tree selection, inadequate protection, damage during logging and in post-logging burns, and lax enforcement. (see Protecting Habitat Trees)
Natural forests are generally multi-aged, so that as existing hollow-bearing trees die and collapse there are new trees with developing hollows to replace them (see The Importance of Old Trees). To account for this, logging prescriptions require the retention of an additional 5 sound and healthy mature trees per hectare as recruitment trees, able to develop into the hollow-bearing trees of the future. Trees meeting this definition are also high-quality sawlogs, so the Forestry Corporation goes to extremes to avoid its obligations to protect them. This up-and-coming cohort of future hollow-bearing trees is rapidly declining due to natural mortality and logging, along with token implementation of prescriptions, poor tree selection, inadequate protection, damage during logging and in post-logging burns, and lax enforcement (see Protecting Habitat Trees).
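A toy cohort model can illustrate why recruitment matters: if retained hollow-bearing trees are lost faster than recruits mature, the per-hectare stock of hollows declines for decades. All rates below are invented for illustration and are not drawn from any NSW forest data.

```python
# Toy per-hectare cohort model with invented rates -- purely illustrative,
# not calibrated to any real forest survey or logging prescription.
def simulate(years, hollow=5.0, recruits=5.0,
             hollow_loss=0.02, recruit_loss=0.015, maturation=0.005):
    """Track hollow-bearing trees per hectare over time.

    Each year a small fraction of recruits matures into hollow-bearing
    trees, while both cohorts suffer annual losses (mortality, damage).
    """
    history = []
    for _ in range(years):
        maturing = recruits * maturation            # recruits developing hollows
        hollow += maturing - hollow * hollow_loss   # gains minus losses
        recruits -= maturing + recruits * recruit_loss
        history.append(hollow)
    return history

h = simulate(100)
print(f"hollow-bearing trees/ha after 100 years: {h[-1]:.2f}")
```

With these made-up rates, maturation cannot keep pace with losses, so the hollow-bearing cohort shrinks steadily, which is the hiatus the text warns about.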
If we are to minimise the hiatus in the availability of hollows for a plethora of native species, we must act now to protect, as far as possible, all large old trees, along with sufficient recruitment habitat trees to replace existing hollow-bearing trees as they die and to restore hollow-bearing trees throughout native forests.
Teachers might want to think twice about posting no gum-chewing signs in the classroom. It turns out that the sticky substance might help students concentrate.
Researchers had two groups of 20 people each listen to a 30-minute recording that included a sequence of numbers. After listening, the participants were asked to remember the sequence. But only one group chewed gum—and they had higher accuracy rates and faster reaction times than the non-gum chewers. Those chewing gum also maintained focus longer during the exercise. The study is in the British Journal of Psychology—and contradicts a 2012 study that found gum chewing decreased short-term memory performance. [Kate Morgan, Andrew J. Johnson and Christopher Miles, Chewing gum moderates the vigilance decrement]
The researchers say that gum increases the flow of oxygen to regions of the brain responsible for attention. More oxygen can keep people alert and improve their reflexes. Research also shows that you won’t get the same effect by just pretending to chew gum.
So the next time your mind is wandering in class, maybe try some gum. If it doesn’t help you concentrate you’ll at least be asked to leave.
[The above text is a transcript of this podcast.] |
CONVERSION BETWEEN METRIC AND U.S. CUSTOMARY UNITS
FROM METRIC TO U.S. CUSTOMARY
UNITS OF THE INTERNATIONAL SYSTEM
The International System (abbreviated SI, for Système International, the French name for the system) was adopted in 1960 by the 11th General Conference on Weights and Measures. An expanded and modified version of the metric system, the International System addresses the needs of modern science for additional and more accurate units of measurement. The key features of the International System are decimalization, a system of prefixes, and a standard defined in terms of an invariable physical measure.
The International System has base units from which all others in the system are derived. The standards for the base units, except for the kilogram, are defined by unchanging and reproducible physical occurrences. For example, the meter is defined as the distance traveled by light in a vacuum in 1/299,792,458 of a second. The standard for the kilogram is a platinum-iridium cylinder kept at the International Bureau of Weights and Measures in Sèvres, France.
A multiple of a unit in the International System is formed by adding a prefix to the name of that unit. The prefixes change the magnitude of the unit by orders of ten, from 10²⁴ to 10⁻²⁴.
Most of the units in the International System are derived units, that is, units defined in terms of base units and supplementary units. Derived units can be divided into two groups: those that have a special name and symbol, and those that do not.
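Because the prefixes are pure powers of ten, they lend themselves to a simple lookup table. The sketch below covers the standard prefixes (using plain "u" in place of the micro sign for portability) and converts a prefixed quantity to base units:

```python
# SI prefix factors from yotta (10^24) down to yocto (10^-24).
SI_PREFIXES = {
    "Y": 1e24, "Z": 1e21, "E": 1e18, "P": 1e15, "T": 1e12,
    "G": 1e9, "M": 1e6, "k": 1e3, "h": 1e2, "da": 1e1,
    "": 1.0,
    "d": 1e-1, "c": 1e-2, "m": 1e-3, "u": 1e-6, "n": 1e-9,
    "p": 1e-12, "f": 1e-15, "a": 1e-18, "z": 1e-21, "y": 1e-24,
}

def to_base_units(value, prefix):
    """Convert a prefixed quantity to the base unit, e.g. 3 km -> 3000 m."""
    return value * SI_PREFIXES[prefix]

print(to_base_units(3, "k"))    # 3 km  -> 3000.0 m
print(to_base_units(250, "m"))  # 250 mm -> 0.25 m
```

The same table works for any base unit (grams, seconds, watts), which is exactly the decimalization the system was designed around.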
From educators to leaders in industry, there is broad agreement that U.S. schools have a crucial challenge in improving teaching and learning in science, technology, engineering and math (STEM) among students from kindergarten through high school. A background in STEM is not only essential to many current and future careers; it is also a means for citizens to understand and participate in an increasingly complex world--from understanding the challenges of environmental sustainability to addressing the need for alternative sources of energy.
The NRC report, "Successful K-12 STEM Education," is a response to a request from a member of Congress, Rep. Frank Wolf, to identify the characteristics of highly successful K-12 schools and programs in STEM. The report was prepared by a committee of educators led by Adam Gamoran of the Department of Sociology and Wisconsin Center for Education Research at the University of Wisconsin-Madison. The committee's work included examining existing research and research in progress on STEM-focused schools, as well as a broader base of research related to effective STEM education practices and effective schooling in general. The committee also conducted a public workshop to explore criteria for identifying highly successful K-12 schools and programs in the area of STEM education through examination of a select set of examples.
The report offers two sets of recommendations, geared for schools and districts, and for state and national policy-makers. They are summarized as follows.
Districts seeking to improve STEM outcomes should:
- Consider the adoption of STEM-focused schools. The report identifies three models for such schools: selective STEM schools for academically talented students, who need to apply for admission; inclusive STEM high schools, often referred to as "magnet schools"; and schools and programs with STEM-focused career and technical education.
- Devote adequate instructional time and resources to science in grades K-5.
- Ensure that their STEM curricula are focused on the most important topics in each discipline, are rigorous, and are articulated as a sequence of topics and performances.
- Enhance the capacity of K-12 teachers.
- Provide instructional leaders with professional development that helps them create the school conditions that appear to support student achievement.
State and national policy-makers should:
- Elevate science to the same level of importance as reading and mathematics.
- Develop effective systems of assessment that are aligned with the next generation of science standards and that emphasize science practices rather than mere factual recall.
- Invest in a coherent, focused, and sustained set of support for STEM teachers.
- Support key areas for future research.
"The National Research Council, through leading education researchers, has done a thorough job of identifying evidence-based directions for successful K-12 STEM education," said Joan Ferrini-Mundy, NSF assistant director for Education and Human Resources. "This report will guide a number of follow-up and implementation activities to bring the results to practitioners, state and local STEM education leaders, and others.
From The Art and Popular Culture Encyclopedia
In India, the Kushan Empire is replaced by the Gupta Empire. China is in the Three Kingdoms period. The Xiongnu form the Tiefu state under Liu Qubei. Korea is ruled by the Three Kingdoms of Korea. Japan enters the Kofun period.
After the death of Commodus in the previous century the Roman Empire was plunged into a civil war. When the dust settled, Septimius Severus emerged as emperor, establishing the Severan dynasty. Unlike previous emperors, he openly used the army to back his authority, and paid them well to do so. The regime he created is known as the Military Monarchy as a result. The system fell apart in the 230s, giving way to a fifty-year period known as the Military Anarchy or the Crisis of the Third Century, where no fewer than twenty emperors held the reins of power, most for only a few months. The majority of these men were assassinated, or killed in battle, and the empire almost collapsed under the weight of the political upheaval, as well as the growing Persian threat in the east. Under its new Sassanid rulers, Persia had grown into a rival superpower, and the Romans would have to make drastic reforms in order to better prepare their state for a confrontation. These reforms were finally realized late in the century under the reign of Diocletian, one of them being to divide the empire into an eastern and western half, and have a separate ruler for each.
- Early 3rd century - Burial in catacombs becomes common.
- 208: the Chinese naval Battle of Red Cliffs occurs.
- Early 3rd century - A marble portrait of the emperor Caracalla is made. It is now kept at The Metropolitan Museum of Art, New York.
- 212: Constitutio Antoniniana grants citizenship to all free Roman men.
- 212 – 216: Baths of Caracalla.
- 220: The Han Dynasty comes to an end with the establishment of the Three Kingdoms in ancient China.
- 230 – 232: Sassanid dynasty of Persia launches a war to reconquer lost lands in the Roman east.
- 235 – 284: Crisis of the Third Century shakes Roman Empire.
- 250 – 538: Kofun era, the first part of the Kofun period in Japan.
- 258: Valerian's Massacre of Christians.
- 260: Roman Emperor Valerian I is taken captive by Shapur I of Persia.
- 265: The Jin Dynasty reunites China under one empire after the conquest of Eastern Wu.
- Sarnath becomes a center of Buddhist arts in India.
- Diffusion of maize as a food crop from Mexico into North America begins.
- The Kingdom of Funan reaches its zenith under the rule of Fan Shih-man.
- The Goths move from Gothiscandza to Ukraine, giving birth to the Chernyakhov culture.
- Menorahs and Ark of the Covenant, wall painting in a Jewish catacomb, Villa Torlonia (Rome), is made.
- Siddhartha in the Palace, detail of a relief from Nagarjunakonda, Andhra Pradesh, India, is made. Later Andhra period. It is now kept at National Museum, New Delhi, (approximate date).
- Probably 3rd century - Jonah Swallowed and Jonah Cast Up, two statuettes of a group from the eastern Mediterranean, probably Asia Minor, are made. They are now kept at The Cleveland Museum of Art.
- Late 3rd century-early 4th century - Good Shepherd, Orants and Story of Jonah, painted ceiling of the Catacombs of Marcellinus and Peter, Rome, is made.
- Clement of Alexandria
- Diocletian, Roman emperor
- Diophantus of Alexandria, wrote Arithmetica
- Hippolytus, considered first Antipope
- Liu Hui, Chinese mathematician
- Mani (prophet), founder of Manichaeism
- Pappus of Alexandria, Greek mathematician
- Plotinus, founder of Neoplatonism
- Tertullian, sometimes called father of Latin church
- Wang Bi, Taoist
- M. Sattonius Iucundus, restorer of the Thermae in Heerlen
- Zhuge Liang, known as the greatest strategist during the period of the Three Kingdoms
- Liu Bei, founding emperor of the Kingdom of Shu
- Cao Cao, founding emperor of the Kingdom of Wei
Inventions, discoveries, introductions
- A primitive form of eyeglasses was reportedly developed for a nearsighted princess in Syria.
- The South Pointing Chariot invented by Ma Jun, a wheeled mechanical device that acts as a directional compass |
A history of the printing press
Johannes Gutenberg invented the printing press around 1440. He knew that woodblock printing took a long time because each letter had to be carved by hand. By 1450 Gutenberg had built his first printing press, a revolutionary invention. Most of us tend to take printed materials for granted, but imagine life today if the printing press had never been invented: we would not have books or magazines.
Gutenberg is remembered as the inventor of the mechanical movable-type printing press. The American Printing History Association (APHA) is a membership organization that encourages the study of the history of printing and related arts and crafts. Discoveries are still being made: a rare page from England's first printer was recently found in a library.
The printing press proved so influential that it helped prompt revolutions. The Gutenberg press was a colossal moment in the history of information and learning: with access to printing presses, scientists could circulate their work far more widely than before.
Long before Gutenberg, a Chinese inventor named Bi Sheng created movable type in the eleventh century. According to David Ramsay, one of the first historians of the American Revolution, "in establishing American independence, the pen and press had merit equal to that of the sword." In the mid-1400s Gutenberg invented a printing press that revolutionized the production of books.
The history of the printing press is remarkable, running from the Gutenberg press through later designs such as the Stanhope and Columbian presses. Gutenberg's Latin Bible, completed around 1455, was the first major book printed with movable type in Europe. The wider history of printing also takes in devotional prints and playing cards, the spread of printing, the illustrated book, the power of the press, woodcut, and engraving.
We no longer use the old printing presses, but it is thanks to Gutenberg's invention that printed reading material became commonplace. At the height of the Hussite crisis in the early 1400s, the authorities ordered 200 manuscripts of heretical writings burned, an episode that underlined how scarce and vulnerable hand-copied texts were. The earliest presses were crude wooden hand presses that allowed the printer to transfer ink to paper.
Gutenberg's invention of the printing press is widely thought of as the origin of mass communication. Printing itself is simply the process of reproducing copies of an original image or piece of writing with ink.
The invention of printing is generally conceded to be one of the defining inventions for the advancement of civilization. The history of printing goes back to the duplication of images by means of stamps in very early times, including the use of round seals for rolling an impression. For approximately 4,500 years before Gutenberg invented the printing press, books were produced by hand. An illustration of the Stanhope press (c. 1800) appears in Robert Hoe's A Short History of the Printing Press and of Improvements in Printing Machinery from the Time of Gutenberg up to the Present Day.
This is a great resource to use during your math instruction. The packet is designed to be used as everyday practice for the OA strand of standards. Each day students practice a word problem, a true-or-false number sentence, a missing addend, an addition or subtraction equation, and one teacher-pick problem. All you need to do is print the student page, copy it for each student, and print the headers!
Post the headers on a board or bulletin board and create the math problems yourself. Students will complete the problems on their student page. This is a great way to practice standards, open up a math lesson, or integrate into your weekly math plans!
Fibrosis refers to scar tissue that has replaced healthy tissue. This is what happens in the lungs of people with pulmonary fibrosis. Inflammation (swelling) in the lungs usually happens before or at the same time as the formation of scar tissues.
There are several substances known to cause lung fibrosis, but people often develop lung fibrosis even when there is no apparent cause. When the cause is unknown, it's called idiopathic.
Idiopathic pulmonary fibrosis is a serious condition whose cause is not well understood. Another condition very similar to idiopathic pulmonary fibrosis can happen in some people with certain diseases, especially autoimmune diseases like systemic lupus erythematosus or scleroderma. Whether this other condition is the same thing as idiopathic pulmonary fibrosis or slightly different is unknown.
When pulmonary fibrosis is idiopathic, it most often occurs in people 50 years of age and older, but people of any age can develop it. Pulmonary fibrosis can be detected at an early stage or late stage but usually gets worse with time. Sometimes it progresses slowly but it can also progress quickly over just a few years or even months.
There are many potential causes of pulmonary fibrosis, such as inhaled occupational dusts (for example, asbestos or silica), radiation therapy to the chest, certain medications, and connective tissue diseases.
The cause of pulmonary fibrosis, especially when it is idiopathic, is poorly understood. It probably involves deregulation of the immune system in the lungs, but some experts still think it might be caused by an unknown environmental exposure, or even an unusual infection.
A few families are particularly affected by idiopathic pulmonary fibrosis, which may be categorized into two forms - an environmental form and a rarer genetic form. Pulmonary fibrosis is more likely caused by environmental factors in genetically susceptible people. These people have immune systems that overreact in the presence of particular irritants or organisms. This would be typical of autoimmune disease.
The mechanism of the disease is as follows: The lungs become inflamed, usually for no clear reason. White blood cells and liquid fill the alveoli (the lung's tiny air pockets where oxygen is transferred to the blood). If the liquid remains for long enough, blood-clotting agents solidify, leaving scars that can interfere with the function of the alveoli.
The blood vessels of the lungs are separated from the air pockets by thin walls called the interstitium. The interstitium allows oxygen to reach the blood, and carbon dioxide from the blood to pass into the lungs to be breathed out. Fibrosis damages this membrane, thickening it and thus reducing the lungs' ability to add oxygen to, and remove carbon dioxide from, the blood.
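As a rough illustration, Fick's law of diffusion says the rate of gas transfer across a membrane scales with its area and the partial-pressure difference, and inversely with its thickness, so a tripled interstitial thickness cuts the transfer rate to about a third. The numbers below are illustrative, not physiological measurements.

```python
# Simplified Fick's-law sketch of gas transfer across the alveolar membrane.
# All inputs are illustrative placeholders, not clinical values.
def diffusion_rate(area, pressure_diff, thickness, d_const=1.0):
    """Rate proportional to D * A * dP / T (Fick's law of diffusion)."""
    return d_const * area * pressure_diff / thickness

healthy = diffusion_rate(area=70.0, pressure_diff=60.0, thickness=1.0)
fibrotic = diffusion_rate(area=70.0, pressure_diff=60.0, thickness=3.0)
print(f"fibrotic/healthy diffusion ratio: {fibrotic / healthy:.2f}")
```

The same relation shows why fibrosis hits oxygen uptake hardest during exercise: demand rises while the thickened membrane caps the achievable transfer rate.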
For the majority of people, the symptoms of pulmonary fibrosis come on slowly over the course of months to years, but for some people the symptoms can develop more rapidly.
Most people with pulmonary fibrosis first see their doctor about increasing shortness of breath during exercise. Some also have a cough. These are often the only symptoms of early pulmonary fibrosis, but you might also notice fatigue, unexplained weight loss, or aching muscles and joints.
Later on, symptoms can include shortness of breath even at rest and clubbing (widening and rounding) of the fingertips.
Pulmonary fibrosis can lead to several severe complications. Because the lungs don't take in oxygen as well, low blood oxygen levels (hypoxemia) can develop. Lack of oxygen can affect the entire body.
Another complication of pulmonary fibrosis is pulmonary hypertension (high blood pressure in the arteries of the lungs). Scar tissue in the lungs can make it more difficult for blood to flow through them. The increased pressure makes the heart work harder and leads to a weakened and enlarged heart, reducing its pumping efficiency and producing heart failure. This is suspected when people develop fluid accumulations in the abdomen, leg swelling, or prominent pulsations in neck veins.
Anxiety is a condition of persistent and uncontrollable nervousness, stress, and worry that is triggered by anticipation of future events, memories of past events, or ruminations over day-to-day events, both trivial and major, with disproportionate fears of catastrophic consequences.
Stimulated by real or imagined dangers, anxiety affects people of all ages and social backgrounds. When it occurs in unrealistic situations or with unusual intensity, it can disrupt everyday life. Some researchers believe anxiety is synonymous with fear, occurring in varying degrees and in situations in which people feel threatened by some danger. Others describe anxiety as an unpleasant emotion caused by unidentifiable dangers or dangers that, in reality, pose no threat. Unlike fear, which is caused by realistic, known dangers, anxiety can be more difficult to identify and alleviate.
A small amount of anxiety is normal in the developing child, especially among adolescents and teens. Anxiety is often a realistic response to new roles and responsibilities, as well as to sexual and identity development. When symptoms become extreme, disabling, and/or when children or adolescents experience several symptoms over a period of a month or more, these symptoms may be a sign of an anxiety disorder, and professional intervention may be necessary. Two common forms of childhood anxiety are general anxiety disorder (GAD) and separation anxiety disorder (SAD), although many physicians and psychologists also include panic disorder and obsessive-compulsive disorder, which tend to occur more frequently in adults. Anxiety that is the result of experiencing a violent event, disaster, or physical abuse is identified as post-traumatic stress disorder (PTSD). Most adult anxiety disorders begin in adolescence or young adulthood and are more common among women than men.
According to the U.S. surgeon general, 13 percent, or over 6 million children, suffer from anxiety, making it the most common emotional problem in children. Among adolescents, more girls than boys are affected. About half of the children and adolescents with anxiety disorders also have a second anxiety disorder or other mental or behavioral disorder, such as depression.
Causes and symptoms
A child's genetics, biochemistry, environment, history, and psychological profile all seem to contribute to the development of anxiety disorders. Most children with these disorders seem to have a biological vulnerability to stress, making them more susceptible to environmental stimuli than the rest of the population.
Emotional and behavioral symptoms of anxiety disorders include tension; self-consciousness; new or recurring fears (such as fear of the dark, fear of being alone, or fear of strangers); self-doubt and questioning; crying and whining; worries; constant need for reassurance (clinging to parent and unwilling to let the parent out of sight); distractibility; decreased appetite or other changes in eating habits; inability to control emotions; feeling as if one is about to have a heart attack, die, or go insane; nightmares; irritability, stubbornness, and anger; regression to behaviors that are typical of an earlier developmental stage; and unwillingness to participate in family and school activities. Physical symptoms include rapid heartbeat; sweating; trembling; muscle aches (from tension); dry mouth; headache; stomach distress; diarrhea; constipation; frequent urination; new or recurrent bedwetting; stuttering; hot flashes or chills; throat constriction (lump in the throat); sleep disturbances; and fatigue. Many of these anxiety symptoms are very similar to those of depression, and as many as 50 percent of children with anxiety also suffer from depression. Generally, physiological hyperarousal (excitedness, shortness of breath, the fight or flight response) characterizes anxiety disorders, whereas underarousal (lack of pleasure and feelings of guilt) characterizes depression. Other signs of anxiety problems are poor school performance, loss of interest in previously enjoyed activities, obsession about appearance or weight, social phobias (e.g., fear of walking into a room full of people), and the persistence of imaginary fears after ages six to eight. Children with anxiety disorders are often perfectionists and are concerned about "getting everything right," but rarely feel that their work is satisfactory.
Shyness does not necessarily indicate a disorder, unless it interferes with normal activities and occurs with other symptoms. A small proportion of children do experience social anxiety, incapacitating shyness that persists for months or more, which should be treated. Similarly, performance anxiety experienced before athletic, academic, or theatrical events does not indicate a disorder, unless it significantly interferes with the activity.
Separation anxiety disorder (SAD) is the most common anxiety disorder among children, affecting 2 to 3 percent of school-aged children. SAD involves extreme and disproportionate distress over day-to-day separation from parents or home and unrealistic fears of harm to self or loved ones. Approximately 75 to 85 percent of children who refuse to go to school have separation anxiety. Normal separation fears are outgrown by children by the ages of five or six, but SAD usually starts between the ages of seven and 11.
When to call the doctor
A qualified mental health professional should be consulted if a child's anxiety begins to affect his or her ability to perform the three main responsibilities of childhood: to learn, to make friends, and to have fun. Often fears and anxieties come and go with time and age. However, in some children, anxiety becomes severe, excessive, unreasonable, and long-lasting (usually considered as long-lasting if the child experiences the elevated level of anxiety for a month or more), interferes with the child's ability to function normally, and causes the child to be distraught and easily upset, thus necessitating professional intervention.
Diagnosing children with an anxiety disorder can be very difficult, since anxiety often results in disruptive behaviors that overlap with other disorders such as attention-deficit hyperactivity. Children showing signs of an anxiety disorder should first get a physical exam to rule out any possible illness or physical problem. Diagnosis of normal versus abnormal anxiety depends largely upon the degree of distress and its effect on a child's functioning. The degree of abnormality must be gauged within the context of the child's age and developmental level. The specific anxiety disorder is diagnosed by the pattern and intensity of symptoms using various psychological diagnostic tools.
Depending on the severity of the problem, treatments for anxiety include school counseling, family therapy, and cognitive-behavioral or dynamic psychotherapy, sometimes combined with antianxiety drugs. Therapies generally aim for support by providing a positive, entirely accepting, pressure-free environment in which to explore problems; by providing insight through discovering and working with the child or adolescent's underlying thoughts and beliefs; and by exposure through gradually reintroducing the anxiety-producing thoughts, people, situations, or events in a manner so as to confront them calmly. Relaxation techniques, including meditation, may be employed in order to control the symptoms of physiological arousal and provide a tool the child can use to control his or her response.
Creative visualization, sometimes called rehearsal imagery by actors and athletes, may also be used. In this technique, the child writes down (or draws pictures of) each detail of the anxiety-producing event or situation and imagines his or her movements in performing the activity. The child also learns to perform these techniques in new, unanticipated situations.
In severe cases of diagnosed anxiety disorders, anti-anxiety and/or antidepressant drugs may be prescribed in order to enable therapy and normal daily activities to continue. Previously, narcotics and other sedatives, drugs that are highly addictive and interfere with cognitive capacity, were prescribed. With pharmacological advances and the development of synthetic drugs, which act in specific ways on brain chemicals, a more refined set of antianxiety drugs became available. Studies have found that generalized anxiety responds well to these drugs (benzodiazepines are the most common), which serve to quell the physiological symptoms of anxiety. Other forms of anxiety such as panic attacks, in which the symptoms occur in isolated episodes and are predominantly physical (and the object of fear is vague, fantastic, or unknown), respond best to the antidepressant drugs. Childhood separation anxiety is thought to be included in this category. Psychoactive drugs should only be considered as a last treatment alternative, and extra caution should be used when they are prescribed for children.
Studies consistently report that anxiety disorders can be debilitating and impinge seriously on a person's quality of life. Despite their common occurrence, little is understood about the natural course of anxiety disorders. Adults experiencing anxiety disorders often report that they have felt anxious all of their lives, with one half of adults with general anxiety disorder reporting that the onset of the condition occurred during childhood or adolescence. Anxiety disorders can be chronic, and the severity of symptoms can fluctuate significantly, with symptoms being more severe when stressors are present. Without treatment, extended periods of remission are not likely.
Parents can help their child respond to stress by taking the following steps:
- providing a safe, secure, familiar, and consistent home life
- being selective in the types of television programs that children watch (including news shows), which can produce fears and anxieties
- spending calm and relaxed time with their child
- encouraging questions and expressions of fears, worries, or concerns
- listening to the child with encouragement and affection and without being critical
- rewarding (and not punishing) the child for effort rather than success
- providing the child with opportunities to make choices; with more control over situations, the child has a better response to stress
- involving the child in activities in which he or she can succeed and limiting events and situations that are stressful for the child
- developing an awareness of the situations and activities that are stressful for the child and recognizing signs of stress in the child
- keeping the child informed of necessary and anticipated changes (e.g., moving, change of school) that may cause the child to be stressed
- seeking professional help or advice when the symptoms of stress do not decrease or disappear
The child should also be encouraged to use various techniques to reduce stress, including the following strategies:
- talking about problems to parents or others whom the child trusts
- relaxing by listening to music, taking a warm bath, meditating, practicing breathing exercises, or participating in a favorite hobby or activity
- respecting themselves and others
- avoiding the use of drugs and alcohol
- feeling free to ask for help if he or she is having difficulties with stress management
Psychological —Pertaining to the mind, its mental processes, and its emotional makeup.
Psychotherapy —Psychological counseling that seeks to determine the underlying causes of a patient's depression. The form of this counseling may be cognitive/behavioral, interpersonal, or psychodynamic.
Shyness —The feeling of insecurity when among people, talking with people, or asking somebody a favor.
Stress —A physical and psychological response that results from being exposed to a demand or pressure.
Parenting an anxious child is difficult and can create stress within the entire family. Parents need to help the child learn and apply techniques to manage his or her anxiety. The use of support groups and professional assistance is recommended.
Parents of children with anxiety disorders may exhibit anxiety symptoms themselves and should also seek professional assistance.
Chansky, Tamar E. Freeing Your Child from Anxiety: Powerful, Practical Solutions to Overcome Your Child's Fears, Worries, and Phobias. New York: Broadway Books, 2004.
Dacey, John S., and Lisa B. Fiore. Your Anxious Child: How Parents and Teachers Can Relieve Anxiety in Children. New York: John Wiley & Sons, 2001.
Fox, Paul. The Worried Child: Recognizing Anxiety in Children and Helping Them Heal. Alameda, CA: Hunter House Publishers, 2004.
Rapee, Ron, Sue Spence, and Ann Wignall. Helping Your Anxious Child. Oakland, CA: New Harbinger Publications, 2000.
Spencer, Elizabeth, Robert L. Dupont, and Caroline M. Dupont. The Anxiety Cure for Kids: A Guide for Parents. New York: John Wiley & Sons Inc., 2003.
Wagner, Aureen Pinto. Worried No More: Help and Hope for Anxious Children. Rochester, NY: Lighthouse Press Inc., 2002.
Anxiety Disorders Association of America. 8730 Georgia Avenue, Suite 600, Silver Spring, MD 20910. Web site: http://www.adaa.org.
National Institute of Mental Health (NIMH), Office of Communications. 6001 Executive Boulevard, Room 8184, MSC 9663, Bethesda, MD 20892-9663. Web site: http://www.nimh.nih.gov/.
The Child Anxiety Network. http://www.childanxiety.net/ (accessed October 11, 2004).
Thermogenesis is a metabolic process during which your body burns calories to produce heat. Several factors induce thermogenesis in your body including exercise, diet and environmental temperature. Thermogenesis can promote weight loss because it increases your body's calorie burn. Although inducing thermogenesis can help you burn more calories, a low-calorie diet and regular physical activity are the best ways for you to lose body weight.
During exercise, your muscle cells burn calories in order to provide energy for muscle contraction. Although most of the energy goes to propel the contraction, a considerable amount of the energy is “lost” as heat. This thermogenic process is the reason your body temperature rises during exercise and why you begin to sweat. The harder you exercise, the more energy is wasted as heat. Although the major energy-burning effect of exercise is still the actual muscle contraction, you do burn a considerable number of calories as heat, and the more calories you burn, the more weight you can lose.
Your body temperature is strictly regulated by the hypothalamus in your brain. This internal “thermostat” gets signals from receptors around your body that detect body temperature. When your body temperature begins to drop, for example in response to cold temperatures, your hypothalamus sends a signal to your muscles to contract. These muscle contractions, or shivering, help produce heat and warm your body. Thus, going to a cold climate can boost your metabolism through thermogenesis.
Thermogenic substances are naturally present in several food items. Caffeine in coffee, tea and chocolate, catechins in green, white and oolong tea, and capsaicins in red chili peppers can promote weight loss by temporarily increasing thermogenesis in your body. A study published in the International Journal of Obesity in 2005 reports that eating these thermogenic ingredients can boost your metabolism by 4 percent to 5 percent and fat burning by 10 percent to 16 percent.
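To see what boosts of that size amount to in practice, here is a rough back-of-the-envelope sketch. The 2,000 kcal/day baseline is a hypothetical example value for illustration, not a figure from the study.

```python
# Rough illustration of how a 4-5% metabolic boost translates into
# calories, using a hypothetical baseline of 2,000 kcal/day.
baseline_kcal = 2000                 # hypothetical daily calorie burn
boost_low, boost_high = 0.04, 0.05   # 4-5% boost reported for thermogenic foods

extra_low = baseline_kcal * boost_low
extra_high = baseline_kcal * boost_high
print(f"Extra burn: {extra_low:.0f}-{extra_high:.0f} kcal/day")
# With these assumptions, that is roughly 80-100 kcal/day -- a modest
# contribution compared with the deficit created by diet and exercise.
```

The arithmetic makes the article's later caveat concrete: a boost of this size helps, but it is far smaller than the effect of a sustained calorie deficit.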
Regular consumption of thermogenic foods is shown to boost your metabolism and promote weight loss. Adding these foods to your diet won't be a miracle potion that will melt away your fat, but they can help counteract the decrease in your metabolic rate that happens in response to a low-calorie diet and weight loss. Spending time in cold weather in the hope of losing weight is not recommended, and more research is needed to evaluate how this can promote weight loss. Regular exercise is still the best way to increase your metabolism and to burn extra pounds.
A queue, in computer networking, is a collection of data packets waiting together to be transmitted by a network device according to a pre-defined ordering scheme.
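The most common ordering discipline is first-in, first-out (FIFO): packets are transmitted in the order they arrived. A minimal Python sketch (the packet names are invented for illustration):

```python
from collections import deque

# A minimal sketch of FIFO packet queueing, the "first in, first out"
# discipline most network devices use by default.
tx_queue = deque()

# Packets arrive and are enqueued in order...
for packet in ["pkt-1", "pkt-2", "pkt-3"]:
    tx_queue.append(packet)

# ...and are transmitted in the same order they arrived.
sent = [tx_queue.popleft() for _ in range(len(tx_queue))]
print(sent)  # ['pkt-1', 'pkt-2', 'pkt-3']
```

Real devices layer other disciplines (priority queues, fair queueing) on top of this basic structure, but FIFO is the baseline behavior.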
A bootstrap is the process of starting up a computer. It also refers to the program that initializes the operating system (OS) during start-up.
The term bootstrap or bootstrapping originated in the early 1950s. It referred to a bootstrap load button that was used to initiate a hardwired bootstrap program, a small program that loaded and executed a larger program such as the OS. The term was said to be derived from the expression “pulling yourself up by your own bootstraps”: starting small and loading programs one at a time, while each program is “laced,” or connected, to the next program to be executed in sequence.
Bootstrap is the process of loading a set of instructions when a computer is first turned on or booted. During the start-up process, diagnostic tests are performed, such as the power-on self-test (POST), that set or check configurations for devices and implement routine testing for the connection of peripherals, hardware and external memory devices. The bootloader or bootstrap program is then loaded to initialize the OS.
Typical programs that load the OS are:
Prior to bootstrap, a computer is said to start with a blank main memory and an intact magnetic core memory or kernel. The bootstrap allows the sequence of programs to load in order to initiate the OS. The OS is the main program that manages all programs that run on a computer and performs tasks such as controlling peripheral devices like a disc drive, managing directories and files, transmitting output signals to a monitor and identifying input signals from a keyboard.
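The staged hand-off described above (POST, then bootloader, then OS) can be sketched as a chain of small steps, each responsible for starting the next, larger program. All function names below are illustrative, not real firmware interfaces:

```python
# Toy sketch of the staged boot hand-off. Each stage does its own small
# job and then starts the next, larger program. All names here are
# invented for the example, not real firmware APIs.
def post():
    """Power-on self-test: check basic hardware before anything loads."""
    return {"memory_ok": True, "keyboard_ok": True}

def bootloader(post_results):
    """Small bootstrap program: runs only if POST passed, loads the OS."""
    if not all(post_results.values()):
        raise RuntimeError("POST failed; cannot continue boot")
    return "kernel image loaded"

def start_os(kernel_state):
    """The OS takes over managing programs and devices."""
    return f"OS running ({kernel_state})"

print(start_os(bootloader(post())))
# → OS running (kernel image loaded)
```

The key idea the sketch captures is that each stage is just large enough to verify its preconditions and launch the next one, which is what makes the early, hardwired stage feasible.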
Bootstrap can also refer to preparing early programming environments incrementally to create more complex and user-friendly programming environments. For example, at one time the programming environment might have consisted of an assembler program and a simple text editor. Over time, gradual improvements have led to today's sophisticated object-oriented programming languages and graphical integrated development environments (IDEs).
About half of the surface warming that's helping shrink Greenland's glaciers is due to temperatures in the tropical Pacific Ocean, not greenhouse gases, a new study reports.
Sea-surface temperatures in the Pacific are already known to influence global weather patterns at lower latitudes. For example, the El Niño cycle shifts rainfall around the world, delivering precipitation to western North America and causing drought in Australia and Central America.
The new findings could explain why Greenland and the Canadian Arctic are getting hotter more quickly than other regions of the planet. The feverish temperature rise has puzzled scientists: The most up-to-date climate models, such as those in the fifth assessment report of the Intergovernmental Panel on Climate Change, fail to reproduce the rapid warming seen in the Arctic.
"We know that global warming due to human impacts can't explain why it got warm so fast," said lead study author Qinghua Ding, a climate scientist at the University of Washington.
Researchers have proposed several explanations for the speedy heating, such as a warmer Arctic Ocean from sea ice loss.
But Ding and his co-authors instead see a link between tropical sea-surface temperatures and the North Atlantic Oscillation, a climate pattern that dominates Arctic weather. Since the 1990s, warm sea-surface temperatures in the western Pacific and cool waters in the eastern Pacific have pushed the North Atlantic Oscillation (NAO) into a pattern that allows high pressure above Greenland and the Canadian Arctic. (High atmospheric pressure leads to warmer temperatures.)
"We find that 20 to 50 percent of the warming is due to anthropogenic [man-made] warming, and another 50 percent is natural," Ding said. The study was published today (May 7) in the journal Nature.
The NAO is a major climate player, as it affects the extent of Arctic sea ice; the path of the jet stream; and storm routes across North America, the Atlantic and Europe. Finding a connection between the NAO and the tropics could improve forecasts for the NAO, which has defied accurate prediction.
"The common sense was that the NAO is chaotic, not connected to tropical ocean conditions," said Shang-Ping Xie, a climate scientist at the Scripps Institution of Oceanography, who was not involved in the study. "An obvious implication is that this connection may be exploited to improve climate prediction over the extratropical North Atlantic where the current prediction skills are low."
The connection between the Pacific Ocean and Greenland comes from atmospheric pulses called Rossby waves. These are undulations in the high-altitude winds that race around the globe, such as the jet stream. The distribution of warm and cold air rising above the Pacific Ocean sets off a Rossby wave that eventually favors warmth over Greenland.
"It's like hitting the atmosphere with a hammer in a very specific region, which generates a wave train that causes high pressure over Greenland," Ding said.
New connections to explore
Tropical ocean temperatures have only been closely watched since 1979, with the advent of satellites, so the researchers don't know if the Pacific temperature cycle is short lived or if it has settled in for decades.
"So far our data is really quite short, so we're not sure what the real cause is," Ding told Live Science.
However, the Pacific Ocean warmth is not the same as the El Niño cycle, Ding said. The researchers plan to explore whether the ocean temperatures could be linked to other known climate cycles, such as the Pacific Decadal Oscillation, or if this is a newly discovered variation.
"This study shows how complex regional climate change is," said Juergen Bader, a climate scientist at the Max Planck Institute for Meteorology in Germany, who was not involved in the study. "Even remote processes can have an important impact on the regional climate."
If the Pacific temperature pattern shifts, warming in the Arctic could slow in coming decades, Ding said. Some evidence already hints this is the case, such as jet stream pattern that socked the East Coast with an extremely cold winter this year. However, human-driven global warming is likely to outpace any natural cooling in coming decades, researchers said.
"It is only a question of time before external forcing [man-made warming] dominates regional Arctic warming," Bader said. "So the role of natural climate variability on certain Arctic warming patterns might be reduced in the long run." |
How Do Computers Work? To accomplish a task using a computer, you need a combination of hardware, software, and input.
Hardware consists of devices, like the computer itself, the monitor, keyboard, printer, mouse and speakers.
• Most of the essential things that make computers work are inside the case, away from your eyes. The motherboard is the central point of the computer, where all the various components attach and communicate with each other.
Key to allowing a computer to work is the central processing unit (CPU), the central hub for all the processes the computer goes through. As a command is sent, such as "open a program" or "turn the monitor on," the CPU interprets this order and then acts accordingly.
• Once the computer is turned on, or booted up, the CPU goes on to activate certain sections so that it can then give you access to programs and processes. Computers work based on the CPU granting access to users, so if the booting up process malfunctions, it can mean that the computer cannot be used, even if everything else inside is working properly.
• Memory is also extremely important to allow a computer to work. The two main kinds of memory are Random Access Memory (RAM) and Read-Only Memory (ROM). ROM is stored data, and cannot be written to; RAM is memory that can be read from and written to, allowing new data to be saved. In many cases, additional RAM can be added.
Other less central — but no less vital — parts that let computers work include the power supply, transformer, and battery. These parts make sure each component gets the electricity it needs in the proper amount, and that key information is saved even when the power is off. The computer drives, including hard drives, flash drives, and any drives with removable media, such as CD-ROM drives, allow the user to upload new data and applications to the computer and save files. The cooling system helps keep all of the components from overheating.
• Most computers also have other components without which a computer would be more difficult to use. Graphics cards allow the computer to display graphics on the monitor, and come in many different levels. Sound cards allow the computer to play sounds. Connecting to the Internet or other computers requires a modem.
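The CPU's role of interpreting commands one at a time, described above, can be illustrated with a toy fetch-decode-execute loop. The three-instruction "machine" below is invented for the example and far simpler than any real processor, which works on binary machine code:

```python
# A toy fetch-decode-execute loop, illustrating how a CPU steps through
# commands one at a time. The instruction set is invented for this
# example; real CPUs execute binary machine code.
program = [
    ("LOAD", 5),     # put 5 in the accumulator
    ("ADD", 3),      # add 3 to it
    ("STORE", "x"),  # save the result to "memory"
]

memory = {}
accumulator = 0
for opcode, operand in program:   # fetch the next instruction
    if opcode == "LOAD":          # decode it, then execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        memory[operand] = accumulator

print(memory)  # {'x': 8}
```

Each pass through the loop mirrors what the CPU does for every instruction: fetch it, work out what it means, and carry it out before moving on.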
The Cree (nehiyawak in the Cree language) are the most populous and widely distributed Aboriginal people in Canada.
Cree First Nations occupy territory in the Subarctic region from Alberta to Québec, as well as portions of the Plains region in Alberta and Saskatchewan. As of March 2015, the registered population of Cree First Nations was more than 317,000. This number is an approximation; some Cree First Nations have members who are not Cree, or who have blended identities. In addition, this population figure does not take into account anyone who may have lost or been denied status through enfranchisement. In 2011, the National Household Survey recorded more than 95,000 speakers of Cree.
Language, Geography and Population
Cree live in areas from Alberta to Québec in the Subarctic and Plains regions, a geographic distribution larger than that of any other Aboriginal group in Canada. Moving from west to east, the main divisions of Cree, based on environment and dialect, are the Plains (Alberta and Saskatchewan), Woods (Saskatchewan and Manitoba), Swampy (Saskatchewan, Manitoba and Ontario), Moose (Ontario) and James Bay/Eastern (Québec) Cree. The Eastern Cree are closely related, in both culture and language, to the Innu and Atikamekw. Many Cree First Nations in western provinces have blended populations of Ojibwa, Saulteaux, Assiniboine, Denesuline and others. In addition, the Oji-Cree of Manitoba and Ontario are a distinct people of mixed Cree and Ojibwa culture and heritage. The Cree language belongs to the Algonquian language family, and the people historically had relations with other Algonquian-speaking nations, most directly with the Innu (Montagnais-Naskapi), Algonquin and Ojibwa.
The name Cree originated with a group of Indigenous people near James Bay whose name was recorded by the French as Kiristinon and later contracted to Cri, spelled Cree in English. Most Cree use this name only when speaking or writing in English and have other, more localized names. Nehiyawak is the Cree name for the Cree people, though it is often also used to describe Plains Cree. Plains Cree (paskwâwiyiniwak or nehiyawak), Woods Cree (sakâwiyiniwak), Swampy Cree (maskêkowiyiniwak), and James Bay/Eastern Cree (Eeyouch) are the major linguistic and geographic divisions; Moose Cree is considered a sub-group/dialect of Swampy Cree. The suffix –iyiniwak, meaning people, is used to distinguish people of particular sub-groups. For example, the kâ-têpwêwisîpîwiyiniwak are the Calling River People, while the amiskowacîwiyiniwak are the Beaver Hills People.
Dialects of Cree are generally more mutually intelligible — understandable to each different speaker — the closer the speakers’ communities are. The Eastern Cree dialect is more closely related to Innu-aimun, the Innu language, and is therefore less intelligible to western dialect speakers. Michif, the language of the Métis, is also considered a dialect of Cree, and Oji-Cree, a dialect of Ojibwa, is heavily influenced by Cree. In 1988, linguists Richard A. Rhodes and Evelyn M. Todd counted 80 dialectical variations of Cree (including Innu-aimun). In 2011, the National Household Survey reported more than 95,000 speakers of Cree, with a further 6,000 speakers of Atikamekw, 12,000 speakers of Innu-aimun, 10,000 speakers of Oji-Cree and 1,000 Michif speakers.
As of March 2015 the total registered population of more than 130 Cree First Nations was approximately 317,000, of which approximately 170,000 (54 per cent) lived on reserve. Saskatchewan nations had the largest approximate registered population with 115,000, followed by Manitoba with 81,000, Alberta with 78,000, and Ontario and Québec with 25,000 and 18,000 respectively. It is important to note that these numbers include several blended communities with more than just Cree people. As well, they do not reflect Cree who themselves, or through their ancestors, have lost or been denied status through enfranchisement or other historical injustices under the Indian Act.
For thousands of years the ancestors of the Cree were thinly spread over much of the woodland area that they still occupy. After the arrival of Europeans, participation in the fur trade pushed Swampy Cree into the Plains. During this time many Cree remained in the boreal forest and the tundra area to the north, where a stable culture persisted. They lived by hunting moose, caribou, smaller game, geese, ducks and fish, which they preserved by drying over fire.
They travelled by canoe in summer and by snowshoes and toboggan in winter, living in conical or dome-shaped lodges, clothed in animal skins and making tools from wood, bone, hide and stone. Later, during the fur trade period, they traded meat, furs and other goods in exchange for metal tools, twine and European goods. Plains Cree exchanged the canoe for horses, subsisted primarily through the buffalo hunt, and developed cultural practices, like the Sun Dance, separately from their Subarctic relations.
Cree lived in small bands or hunting groups for most of the year, and gathered into larger groups in the summer for socializing, exchanges and ceremonies. Religious life was based on relations with animal and other spirits which revealed themselves in dreams. People tried to show respect for each other by an ideal ethic of non-interference, in which each individual was responsible for his or her actions and the consequences of those actions. Food was always the first priority, and would be shared in times of hardship or in times of plenty when people gathered to celebrate by feasting.
Although the ideal was communal and egalitarian, some individuals were regarded as more powerful, both in the practical activities of hunting and in the spiritual activities that influenced other persons (see Shaman). Leaders in group hunts, raids and trading were granted authority in directing such tasks, but otherwise the ideal was to lead by means of exemplary action and discreet suggestion. The Cree worldview incorporates Trickster (wîsahkêcâhk) mythology and describes the interconnectivity between people and nature. (See also Religion of Aboriginal People.)
Contact with Europeans
Jesuit missionaries first mentioned contact with Cree groups in the area west of James Bay around 1640. Fur trading posts established after 1670 began a period of economically motivated migration, as bands attempted to make the most of the growing fur trade. For many years European traders depended on Aboriginal people for fresh meat. Gradually an increasing number of Cree remained near the posts, hunting and doing odd jobs and becoming involved in the church, schools and nursing stations. Missionizing began when some fur traders held services; trained Christian missionaries soon followed.
During the late 1700s and the 1800s, Cree who had migrated to the Plains changed with rapid, dramatic success from trappers and hunters of the forest to horse-mounted warriors and bison hunters. Epidemics, the destruction of the bison herds, and government policies aimed at forcing First Nations to surrender land through treaties, however, brought the Plains Cree and other "horse-culture" nations to ruin by the 1880s. The Canadian government, under the leadership of Sir John A Macdonald, actively withheld rations and other resources in order to force starving Plains people into signing treaties and relocating to reserves. There, Cree existed by farming, ranching and casual labour, and were subjected to further cultural destruction through decades of trauma endured in the residential school system.
Treaties were made with all Cree except the James Bay/Eastern Cree. Though the government made general promises to protect Cree land rights and their traditional way of life, the treaties gave the federal and provincial governments the power to intervene in Cree traditional culture. Government services, health programs and education, including residential schooling, were usually administered through the missionaries and traders until the middle of the 20th century.
Government-backed corporate exploitation of natural resources in the 20th and 21st centuries has brought radical changes in many Cree communities. In the 1970s in Québec, the James Bay Cree successfully negotiated the James Bay and Northern Québec Agreement. The agreement was a response to the James Bay hydroelectric project, which had been undertaken without consultation of the communities it would affect. The project pushed the James Bay Cree to action, and the resulting agreement provided the first step toward self-government. Since then, a series of further agreements between the Cree of Québec, the provincial government and the federal government have followed. The Cree have also been central to UN negotiations, including the United Nations Declaration on the Rights of Indigenous Peoples (2007).
Many registered members of Cree nations no longer live in their reserve communities. For many nations, however, especially in the James Bay and Plains regions, the portion of registered members living on reserve remains very high. For example, the rate of population living on reserve among James Bay Cree averaged 83 per cent in 2015, reaching 96 per cent in the case of Whapmagoostui.
Self-government and economic development are major contemporary goals of the Cree. Cree First Nations across Canada have attempted to negotiate with development corporations and governments. For example, the Lubicon First Nation of Alberta have sued the provincial and federal governments for their share of natural gas revenues and further recognition of treaty rights, while in Manitoba, several Cree nations have reached agreements with the federal and provincial governments, as well as resource companies.
Several Cree leaders have had a national role in furthering the aims of Aboriginal people of Canada, including Assembly of First Nations chiefs Noel Starblanket, Ovide Mercredi, Matthew Coon Come and Perry Bellegarde, and Attawapiskat chief Theresa Spence, who gained national attention for her involvement with the Idle No More movement in 2012 and 2013.
Iterative Control: LOOP Statements

The LOOP statement executes a series of statements multiple times. There are three forms of LOOP statement: LOOP, WHILE-LOOP, and FOR-LOOP.

LOOP

The simplest form of the LOOP statement is the basic loop, which encloses a series of statements between the keywords LOOP and END LOOP, as shown:

LOOP
   sequence_of_statements
END LOOP;

With each iteration of the loop, the series of statements is executed, and then control resumes at the top of the loop. If further processing is undesirable or impossible, you can use an EXIT statement to complete the loop. You may place one or more EXIT statements anywhere inside a loop, but nowhere outside a loop. There are two forms of EXIT statement: EXIT and EXIT-WHEN.
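A minimal sketch of a basic loop terminated with EXIT WHEN (the variable name l_counter and the use of DBMS_OUTPUT here are illustrative, not part of any particular codebase):

```sql
DECLARE
   l_counter NUMBER := 0;
BEGIN
   LOOP
      l_counter := l_counter + 1;
      DBMS_OUTPUT.PUT_LINE('iteration ' || l_counter);
      -- Without an EXIT, this basic loop would run forever.
      EXIT WHEN l_counter >= 3;
   END LOOP;
END;
/
```

The loop body always executes at least once, and the EXIT WHEN condition is tested at the point where it appears; this is what distinguishes the basic loop from the WHILE-LOOP, which tests its condition before each iteration.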
Woolly monkeys are found throughout Colombia, Ecuador, Peru, Brazil and parts of Venezuela, where they live an arboreal lifestyle. Woolly monkeys have long and very strong prehensile tails, which allow them to balance and grip onto branches without having to give up the use of their hands.
There are four different species of woolly monkey found in the South American jungles today. These are the brown woolly monkey (also known as the common woolly monkey), the grey woolly monkey, the Colombian woolly monkey and the silvery woolly monkey. All four of the different woolly monkey species are found in the same regions of South America.
The woolly monkey gets its name from its soft and thick, curled fur which ranges from brown to black to grey, depending on the species of woolly monkey. Woolly monkeys have relatively stocky bodies, with powerful shoulders and hips.
Like many other primate species, woolly monkeys live together in fairly large groups known as troops. The woolly monkey troops contain both male woolly monkeys and female woolly monkeys. The woolly monkey troop is also known to split up into smaller groups when it is time to forage for food.
The woolly monkey is an omnivorous animal, meaning that it feeds on both plants and other animals. Fruit is the primary source of food for woolly monkeys, but they will also eat nuts, seeds, leaves, flowers, nectar, insects and even small rodents and reptiles.
Due to their relatively large size, woolly monkeys have few natural predators within their jungle environment. Large birds of prey, such as eagles, are the main predators of young woolly monkeys, while wildcats such as ocelots and jaguars are the main predators of the adults. Humans are also among the main predators of the woolly monkey, which is hunted for its meat and fur.
The alpha male woolly monkey will mate with the females in his troop. After a gestation period of between 7 and 8 months, the baby woolly monkey is born. Woolly monkeys tend to have only one baby at a time, although twins have been known to occur. The baby woolly monkey clings to its mother's underside before climbing up onto her back when it is around a week old. The baby woolly monkey is independent and no longer needs its mother when it is around 6 months old.
Due to deforestation and therefore habitat loss, the woolly monkey population numbers are drastically decreasing, with the woolly monkey now considered to be an animal species that is vulnerable to extinction. |
The transition to a world with an oxygenated deep ocean occurred between 540 and 420 million years ago, new research suggests.
Researchers attribute the change to an increase in atmospheric O2 to levels comparable to the 21 percent oxygen in the atmosphere today.
This inferred rise comes hundreds of millions of years after the origination of animals, which occurred between 700 and 800 million years ago.
“The oxygenation of the deep ocean and our interpretation of this as the result of a rise in atmospheric O2 was a pretty late event in the context of Earth history,” says Daniel Stolper, an assistant professor of Earth and planetary science at the University of California, Berkeley.
“This is significant because it provides new evidence that the origination of early animals, which required O2 for their metabolisms, may have gone on in a world with an atmosphere that had relatively low oxygen levels compared to today.”
Tracing oxygen’s history
Oxygen has played a key role in the history of Earth, not only because of its importance for organisms that breathe it, but because of its tendency to react, often violently, with other compounds to make iron rust, plants burn, and natural gas explode.
Tracking the concentration of oxygen in the ocean and atmosphere over Earth’s 4.5-billion-year history isn’t easy. For the first 2 billion years, most scientists believe very little oxygen was present in the atmosphere or the oceans.
About 2.5-2.3 billion years ago, however, atmospheric oxygen levels first increased. The geologic effects of this are evident: rocks on land exposed to the atmosphere suddenly began turning red as the iron in them reacted with oxygen to form iron oxides similar to how iron metal rusts.
Earth scientists have calculated that around this time, atmospheric oxygen levels exceeded about a hundred thousandth of today’s level (0.001 percent) for the first time, but still remained too low to oxygenate the deep ocean, which stayed largely anoxic.
By 400 million years ago, fossil charcoal deposits first appear, an indication that atmospheric O2 levels were high enough to support wildfires, which require about 50 to 70 percent of modern oxygen levels, and to oxygenate the deep ocean. How atmospheric oxygen levels varied between 2,500 and 400 million years ago is less certain and remains a subject of debate.
“Filling in the history of atmospheric oxygen levels from about 2.5 billion to 400 million years ago has been of great interest given O2's central role in numerous geochemical and biological processes. For example, one explanation for why animals show up when they do is because that is about when oxygen levels first approached the high atmospheric concentrations seen today,” Stolper says.
“This explanation requires that the two are causally linked such that the change to near-modern atmospheric O2 levels was an environmental driver for the evolution of our oxygen-requiring predecessors,” he says.
In contrast, some researchers think the two events are largely unrelated. Critical to helping to resolve this debate is pinpointing when atmospheric oxygen levels rose to near modern levels. But past estimates of when this oxygenation occurred range from 800 to 400 million years ago, straddling the period during which animals originated.
‘Submarine’ volcanic eruptions
The researchers hoped to pinpoint a key milestone in Earth’s history: when oxygen levels became high enough—about 10 to 50 percent of today’s level—to oxygenate the deep ocean. Their approach is based on looking at the oxidation state of iron in igneous rocks formed undersea (referred to as “submarine”) volcanic eruptions, which produce “pillows” and massive flows of basalt as the molten rock extrudes from ocean ridges.
Critically, after eruption, seawater circulates through the rocks. Today, these circulating fluids contain oxygen and oxidize the iron in basalts. But in a world with a deep ocean devoid of O2, they expected little change in the oxidation state of iron in the basalts after eruption.
“Our idea was to study the history of the oxidation state of iron in these basalts and see if we could pinpoint when the iron began to show signs of oxidation and thus when the deep ocean first started to contain appreciable amounts of dissolved O2,” Stolper says.
To do this, they compiled more than 1,000 published measurements of the oxidation state of iron from ancient submarine basalts.
The researchers found that the basaltic iron only becomes significantly oxidized relative to magmatic values between about 540 and 420 million years ago, hundreds of millions of years after the origination of animals. They attribute this change to the rise in atmospheric O2 levels to near modern levels. This finding is consistent with some but not all histories of atmospheric and oceanic O2 concentrations.
“This work indicates that an increase in atmospheric O2 to levels sufficient to oxygenate the deep ocean and create a world similar to that seen today was not necessary for the emergence of animals,” Stolper says.
“Additionally, the submarine basalt record provides a new, quantitative window into the geochemical state of the deep ocean hundreds of millions to billions of years ago,” he says.
The researchers report their findings in the journal Nature.
Source: UC Berkeley |
Diagnosis Code G43.A
Information for Patients
Nausea and Vomiting
Also called: Emesis
Nausea is an uneasy or unsettled feeling in the stomach together with an urge to vomit. Nausea and vomiting, or throwing up, are not diseases. They can be symptoms of many different conditions. These include morning sickness during pregnancy, infections, migraine headaches, motion sickness, food poisoning, cancer chemotherapy or other medicines.
For vomiting in children and adults, avoid solid foods until vomiting has stopped for at least six hours. Then work back to a normal diet. Drink small amounts of clear liquids to avoid dehydration.
Nausea and vomiting are common. Usually, they are not serious. You should see a doctor immediately if you suspect poisoning or if you have
- Vomited for longer than 24 hours
- Blood in the vomit
- Severe abdominal pain
- Headache and stiff neck
- Signs of dehydration, such as dry mouth, infrequent urination or dark urine
Cyclic vomiting syndrome

Cyclic vomiting syndrome is a disorder that causes recurrent episodes of nausea, vomiting, and tiredness (lethargy). This condition is diagnosed most often in young children, but it can affect people of any age.

The episodes of nausea, vomiting, and lethargy last anywhere from an hour to 10 days. An affected person may vomit several times per hour, potentially leading to a dangerous loss of fluids (dehydration). Additional symptoms can include unusually pale skin (pallor), abdominal pain, diarrhea, headache, fever, and an increased sensitivity to light (photophobia) or to sound (phonophobia). In most affected people, the signs and symptoms of each attack are quite similar. These attacks can be debilitating, making it difficult for an affected person to go to work or school.

Episodes of nausea, vomiting, and lethargy can occur regularly or apparently at random, or can be triggered by a variety of factors. The most common triggers are emotional excitement and infections. Other triggers can include periods without eating (fasting), temperature extremes, lack of sleep, overexertion, allergies, ingesting certain foods or alcohol, and menstruation.

If the condition is not treated, episodes usually occur four to 12 times per year. Between attacks, vomiting is absent, and nausea is either absent or much reduced. However, many affected people experience other symptoms during and between episodes, including pain, lethargy, digestive disorders such as gastroesophageal reflux and irritable bowel syndrome, and fainting spells (syncope). People with cyclic vomiting syndrome are also more likely than people without the disorder to experience depression, anxiety, and panic disorder. It is unclear whether these health conditions are directly related to nausea and vomiting.

Cyclic vomiting syndrome is often considered to be a variant of migraines, which are severe headaches often associated with pain, nausea, vomiting, and extreme sensitivity to light and sound.

Cyclic vomiting syndrome is likely the same as or closely related to a condition called abdominal migraine, which is characterized by attacks of stomach pain and cramping. Attacks of nausea, vomiting, or abdominal pain in childhood may be replaced by migraine headaches as an affected person gets older. Many people with cyclic vomiting syndrome or abdominal migraine have a family history of migraines.

Most people with cyclic vomiting syndrome have normal intelligence, although some affected people have developmental delay or intellectual disability. Autism spectrum disorders, which affect communication and social interaction, have also been associated with cyclic vomiting syndrome. Additionally, muscle weakness (myopathy) and seizures are possible. People with any of these additional features are said to have cyclic vomiting syndrome plus.
The World Bank has released a new report highlighting the fact that air pollution costs world governments billions upon billions every year and ranks among the leading causes of death worldwide.
The estimates — drawn from a number of sources, including the World Health Organization’s most recently completed data sets compiled in 2013 — can for the first time begin to examine the overall welfare cost of air pollution.
Specifically, researchers studied the amount of money that world governments must spend on health emergencies, long-term illnesses and chronic conditions caused by air pollution. They also took into account missed work and unemployment subsidies.
The report finds that, in terms of the economy, the burden is extremely high.
To be sure, some countries come out of this analysis relatively well off. For example, Iceland only loses $3 million of its gross domestic product to air pollution. Given that the country has a relatively small population and a slight industrial profile, that’s probably not that surprising though.
Other countries, like Liberia, performed relatively well despite their low levels of economic development. Several African nations also have low overall air pollution impact costs. Despite mid-to-high populations, infrastructure is comparatively low density in places like Malawi and Zimbabwe, so perhaps this isn’t that surprising either.
It’s when we get to rapidly developing and “developed” nations that the costs really start to mount up. For example, the United States is estimated to lose $45 billion every year due to air pollution, while the UK loses $7.6 billion annually. Germany comes in at $18 billion, though it will be interesting to see how the country’s renewable energy strategy might alter that figure over the coming years.
China, one of the most rapidly developing nations in the world, is estimated to be losing a staggering 10 percent of its overall GDP, while India is not far behind at roughly eight percent.
Financial losses will, however, seem trivial when we look at the potential human cost of air pollution.
The World Bank estimates that global air pollution kills roughly five and a half million people every year; to put that another way, it is responsible for roughly one in every ten deaths worldwide.
Air pollution is now the fourth leading cause of premature death in the world and, as the Guardian points out, it actually causes “six times the number of deaths caused by malaria,” a fact that highlights the threat of air pollution most starkly. |
Sat navs are a fascinating offshoot of two developments: the US military's desire to let its units determine exactly where they are, and the ability to display maps on a computer screen.
But though the devices are extremely popular, the algorithms that they use in order to provide the shortest distance route to your destination are less well known.
There are two major algorithms that come into play with sat-navs. The first is perhaps the simplest: the ability of the unit to use the GPS (global positioning system) satellites to work out where in the world the unit is situated.
The second is rather more complicated: the ability of the sat-nav to determine the shortest distance from point A – where you are – to point B – where you'd like to be. There are other algorithms in play, mostly dealing with the visual display of the route to take, but these two algorithms are the most important.
The GPS satellite network started out as a US Air Force system to help determine the position of any receiver to an accuracy of 15 metres. Following the downing of the civilian KAL 007 airliner after it drifted into Soviet restricted airspace, Ronald Reagan promised to make the system available for civilian use once it became operational.
This happened in late 1993, and the unencrypted civilian signal was deliberately downgraded so that GPS units would only be accurate to about 100m. This limitation (known as Selective Availability) was removed in 2000.
The positioning algorithm is fairly simple. Currently there are 30 satellites spread out in medium Earth orbit, each transmitting the same information. The messages consist of three main pieces of data: the exact time the message was transmitted, precise orbital data for the satellite (known as the ephemeris) and the overall system health.
The GPS unit listens for these messages and interprets them. From the time of the message, the GPS unit can work out how long the message took to reach the unit, and, from that and the known speed of light, how far away the satellite is. From the ephemeris data, the unit can work out the direction to the satellite.
Calculating the position
Using just the messages from one satellite, the GPS unit is going to be somewhere on the surface of a virtual sphere centred on that satellite. That's fairly interesting to know, but not very helpful.
So the GPS unit listens for messages from other satellites. Using the messages from two satellites, the GPS unit can work out its position to be somewhere on the circle that forms the intersection of the two spheres centred on each satellite.
If you think about it geometrically, either the spheres don't intersect at all, or they intersect at just one point (the spheres just manage to 'touch'), or, in the more general case, they intersect as a circle. Think of soap bubbles joined together. Interesting to know, but again pretty useless.
Using three satellites, the GPS can calculate its position to be at one of two points on that circle from the previous case. Again, thinking geometrically, the intersection between a sphere and a circle is going to be either non-existent, a single point or two points.
So GPS units use the messages from at least four different satellites to resolve their location to a single point. GPS satellites are positioned in orbit so that from any point of Earth about 10 satellites will always be 'visible' in the sky, an ample number from which to calculate the position of a GPS receiver. The position of the receiver, using just the GPS satellites, is calculated to within about 15m.
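As a sketch of the idea (not a production algorithm: real receivers also solve for their own clock error as a fourth unknown, which is the other reason at least four satellites are required), the range equations can be solved by iterative least squares:

```python
import numpy as np

def locate(sat_positions, ranges, iterations=10):
    """Estimate a receiver position from satellite positions and measured
    ranges (in metres) by Gauss-Newton iteration on ||x - s_i|| = r_i.
    Receiver clock bias is ignored here for simplicity."""
    sats = np.asarray(sat_positions, dtype=float)
    r = np.asarray(ranges, dtype=float)
    x = np.zeros(3)                        # initial guess: Earth's centre
    for _ in range(iterations):
        diffs = x - sats                   # vectors from satellites to guess
        dists = np.linalg.norm(diffs, axis=1)
        jac = diffs / dists[:, None]       # Jacobian of the range model
        step, *_ = np.linalg.lstsq(jac, r - dists, rcond=None)
        x += step
    return x
```

With four or more well-spread satellites and consistent ranges, this converges to the unique intersection point described above within a handful of iterations.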
The comparatively large error is due to many factors, including the atmosphere (light travels slightly slower in air than in the vacuum of space), errors in the clocks involved, GPS signals bouncing off buildings and so on. Yet, as we all know, a GPS unit seems to be far more accurate than that.
The reason is that terrestrial sat-navs do not rely exclusively on GPS satellite signals: they also make use of other signals such as those from mobile phone towers and the like to refine their location to within a few metres.
If the GPS receiver is in a car and forms part of a sat-nav system, the unit will also make use of further information from the car itself, such as speed, distance travelled, acceleration and so on. This helps in urban environments where the GPS signal can be blocked by bridges, or by being in a tunnel, or messed up by being reflected off buildings and the like.
Of course, a further refinement is that, usually, the car is on a road, and so the sat-nav can 'fix' the location of the car to a road on its internal map. |
A new study suggests that not only do smaller islands sustain fewer species than large ones, but the food chains are also smaller.
This implies that plant and animal communities on small islands may work differently from those on large ones.

Top predators the first to go

Working across a set of 20 islands off the Finnish coast, a group of Finnish scientists found that a disproportionate number of small islands were lacking the highest levels of the food chain. The results are freshly online in the journal Ecography.
"Ecologists have known for decades that less area means fewer species", explains Tomas Roslin, who spearheaded the current analyses. "What we show is that the decrease in species richness with decreasing area gets steeper when you climb up the food chain. That means that when you move towards smaller island size, you run out of top predators before you run out of intermediate predators, and that you lose the last plant-eaters before you lose the last plant." The study comes with broad implications for a world fragmented by human activities.
"While we worked on a set of real islands, you can probably think of habitat fragments as 'islands' in a broader sense", says Tomas. "What our results then mean is that if we keep splitting natural habitats into smaller and smaller pieces, we may not only lose a lot of species from the resultant fragments, but also change the structure and functioning of local food webs."

Knowing your species the key to insights

To explore the effects of island size, the research team focused on islands spanning a hundred-fold range in area. On each of these islands, the team took samples of local food chains consisting of four levels: plants, herbivores feeding on the plants, predators feeding on the herbivores, and top predators feeding on the predators themselves. Among predators, the researchers targeted a specific group, i.e. parasitic wasps. "To test ideas about food chain length, you really cannot deal with raw counts of species - instead, you need to know which species form actual feeding chains" says Gergely Várkonyi, an international expert on wasps involved in the project. "Among the wasps encountered on these islands, we were able to pick out the species truly dependent on the lepidopteran herbivores. As we see it, knowing not just what the species are but what they do in their lives is the key to sensible ecology", emphasizes Gergely.

Hundreds of species examined

"What is unique about our study is that we were able to look at patterns at the level of large species pools across the islands", explains Marko Nieminen, who spent three long summers boating around the islands, sampling insects by light and bait traps. "Where other people have looked at effects of island size on restricted numbers of species or restricted levels in food chains, we did the full thing across four levels", he specifies.
"Overall, we dealt with 200 species of plants, 415 species of lepidopteran herbivores, 42 species of parasitic wasps attacking herbivores and 7 species of wasps attacking parasitic wasps."

Deliberately keeping things simple

"In choosing the islands, we deliberately went for a simple system" says Marko. "Our islands were essentially smallish pieces of rock with some forest and heathland on them. Historically, they all rose from the sea just some millennia ago, after being submerged and scraped clean of life by the last ice age. This similarity in structure and history allowed us to look at effects of island size, without having to worry about other differences among islands."

Maintaining interactions may be trickier than maintaining species

All three authors worry about the message laid plain by the study: "What this really suggests is that to save ecological interactions, we may need to conserve much larger areas than for just maintaining e.g. plant diversity. If we keep splitting habitats into ever-smaller pieces, then we will be losing upper links from food chains, and important control functions along with them."
Natural gas is a hydrocarbon fuel consisting mostly of methane, along with other gases such as propane and butane. Most natural gas is produced through oil and gas extraction, but it can also be created by natural processes such as the decomposition of organic material, which yields biogas. Natural gas is a very clean-burning fuel, the cleanest of all hydrocarbons, and its octane rating of over 120 makes it a great fuel for transportation.
Over 90% of the natural gas used in the United States is produced domestically, so the use of natural gas in the transportation sector supports local economies and promotes US energy independence. The most common form of natural gas fuel is compressed natural gas (CNG), which is stored at around 3,600 psi to bring its energy density closer to that of gasoline. Natural gas can also be liquefied, by cooling it to -265 degrees Fahrenheit, to increase its density further and so extend a vehicle's range; liquefied natural gas (LNG) is used in heavy-duty applications where a lot of fuel is consumed in a short time frame.
In addition to being domestically produced and cleaner burning than gasoline or diesel, natural gas is also abundant and cheap. A gasoline gallon equivalent of natural gas is cheaper on average than a gallon of gasoline and, as estimated by the Energy Information Administration, is expected to remain consistently cheaper in the near future. In short, natural gas is cheaper, cleaner and more domestically sourced than gasoline or diesel.
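Treating the cost comparison above as simple arithmetic, a quick sketch (the prices, fuel economy and function name here are hypothetical illustrations, not quoted market figures):

```python
def cost_per_mile(price_per_gge, miles_per_gge):
    """Fuel cost per mile, with fuel priced per gasoline gallon
    equivalent (GGE) so gasoline and CNG can be compared directly."""
    return price_per_gge / miles_per_gge

# Hypothetical prices for a vehicle getting 30 miles per GGE either way.
gasoline = cost_per_mile(3.50, 30)   # dollars per mile on gasoline
cng = cost_per_mile(2.20, 30)        # dollars per mile on CNG

print(f"gasoline: ${gasoline:.3f}/mi, CNG: ${cng:.3f}/mi, "
      f"saving: ${gasoline - cng:.3f}/mi")
```

Because both fuels are priced per GGE, any price gap translates directly into a per-mile saving at the same fuel economy.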
We worked with the Colorado Energy Office to develop a great website, Refuel Colorado, that covers the details of each alternative fuel, including:
Interested in Natural Gas? – Check out Colorado’s Natural Gas Vehicle Coalition. |
Before the mid-1960s, computers were extremely expensive and used only for special-purpose tasks. A simple batch processing arrangement ran only a single "job" at a time, one after another. But during the 1960s faster and more affordable computers became available. With this extra processing power, computers would sometimes sit idle, without jobs to run.
Programming languages in the batch programming era tended to be designed, like the machines on which they ran, for specific purposes (such as scientific formula calculations or business data processing or eventually for text editing). Since even the newer, less expensive machines were still major investments, there was a strong tendency to consider efficiency to be the most important feature of a language. In general, these specialized languages were difficult to use and had widely disparate syntax.
As prices decreased, the possibility of sharing computer access began to move from research labs to commercial use. Newer computer systems supported time-sharing, a system which allows multiple users or processes to use the CPU and memory. In such a system the operating system alternates between running processes, giving each one running time on the CPU before switching to another. The machines had become fast enough that most users could feel they had the machine all to themselves. In theory, timesharing reduced the cost of computing tremendously, as a single machine could be shared among (up to) hundreds of users.
Early years: the mini-computer era
The original BASIC language was designed in 1963 by John Kemeny and Thomas Kurtz and implemented by a team of Dartmouth students under their direction. BASIC was designed to allow students to write programs for the Dartmouth Time-Sharing System. It was intended to address the complexity issues of older languages with a new language design specifically for the new class of users that time-sharing systems allowed—that is, a less technical user who did not have the mathematical background of the more traditional users and was not interested in acquiring it. Being able to use a computer to support teaching and research was quite novel at the time. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original BASIC dialect became known as Dartmouth BASIC.
The eight design principles of BASIC were:
1. Be easy for beginners to use.
2. Be a general-purpose programming language.
3. Allow advanced features to be added for experts (while keeping the language simple for beginners).
4. Be interactive.
5. Provide clear and friendly error messages.
6. Respond quickly for small programs.
7. Not to require an understanding of computer hardware.
8. Shield the user from the operating system.
The language was based partly on FORTRAN II and partly on ALGOL 60, with additions to make it suitable for timesharing. (The features of other time-sharing systems such as JOSS and CORC, and to a lesser extent LISP, were also considered.) It had been preceded by other teaching-language experiments at Dartmouth such as the DARSIMCO (1956) and DOPE (1962) implementations of SAP, and DART (1963), which was a simplified FORTRAN II. Initially, BASIC concentrated on supporting straightforward mathematical work, with matrix arithmetic support from its initial implementation as a batch language and full string functionality being added by 1965. BASIC was first implemented on the GE-265 mainframe, which supported multiple terminals. Contrary to popular belief, it was a compiled language at the time of its introduction. It was also quite efficient, beating FORTRAN II and ALGOL 60 implementations on the 265 at several fairly computationally intensive (at the time) programming problems, such as numerical integration by Simpson's Rule.
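One of the benchmarks mentioned, numerical integration by Simpson's rule, is easy to sketch in a modern language (this illustrates the algorithm itself, not the original Dartmouth code):

```python
def simpson(f, a, b, n=100):
    """Approximate the integral of f over [a, b] with the composite
    Simpson's rule; n (the number of subintervals) must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Interior points alternate weights 4, 2, 4, 2, ...
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

area = simpson(lambda x: x ** 2, 0.0, 1.0)   # close to 1/3
```

Simpson's rule is exact for polynomials up to degree three, which made it a convenient, well-understood workload for comparing language implementations.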
The designers of the language decided to make the compiler available free of charge so that the language would become widespread. They also made it available to high schools in the Dartmouth area and put a considerable amount of effort into promoting the language. As a result, knowledge of BASIC became relatively widespread (for a computer language) and BASIC was implemented by a number of manufacturers, becoming fairly popular on newer minicomputers like the DEC PDP series and the Data General Nova. The BASIC language was also central to the HP Time-Shared BASIC system in the late 1960s and early 1970s. In these instances the language tended to be implemented as an interpreter, instead of (or in addition to) a compiler.
Several years after its release, highly respected computer professionals, notably Edsger W. Dijkstra, expressed the opinion that the use of GOTO statements, which existed in many languages including BASIC, promoted poor programming practices. Some have also derided BASIC as too slow (most interpreted versions are slower than equivalent compiled versions) or too simple (many versions, especially for
Louisa Newlin taught high school English for more than 40 years. She wrote "Nice Guys Finish Dead: Teaching Henry IV, Part I in High School" for the Shakespeare Set Free series. She leads workshops on sonnets for teachers.
Gigi Bradford, former director of the NEA Literature Program and Folger Poetry Series, currently teaches the Folger's "Shakespeare's Sisters" seminar.
This lesson allows for students to work together to write an original sonnet.
This lesson will take one forty-five-minute class period.
What's On for Today and Why
Composing a sonnet as a class or a group can be an effective way of reinforcing understanding of the sonnet’s pattern and of paving the way for writing individual sonnets. Starting with a rhyme scheme and working “backwards,” adjusting the lines to make sense often yields surprisingly coherent results. This is a good exercise in collaborative learning—and is also noisy and fun.
What You Need
Chalkboard or sheets of poster paper/post-it sheets (large) on an easel.
Chalk or Markers.
Tolerance for the noise that often accompanies creativity.
What To Do
1. Ask a student to write the rhyme scheme of the Shakespearean sonnet on the board, vertically: abab cdcd efef gg. Number the lines.
2. Explain the process by which the students will create a sonnet. First, come up with two pairs of rhyming one-syllable words for the first quatrain (day/pray and dark/spark, for example) and place them at the ends of the first 4 lines.
3. Work with the students to compose iambic pentameter lines to precede each of the end rhymes. One person is the scribe who writes the lines on the board. The lines may be nonsense at first, but the group can work to tweak them into making sense (In the process, the end rhymes may be altered).
4. The same process is applied to the second quatrain, the third, and the couplet.
5. Once there are 14 lines on the board, ask students to collectively edit the result.
6. Read the group sonnet chorally.
7. Have students start writing individual sonnets of their own, drawing on their journal writing of the previous classes/lessons for subject matter or theme. These can be the basis for the “suggested homework” at the end of Lesson 10.
How Did It Go?
Did the class cooperate in the exercise?
Did students demonstrate an understanding of iambic pentameter and of the rhyme scheme of the Shakespearean sonnet?
Do they understand the internal structure of a sonnet, and that sonnets can be written about a wide range of topics?
Were they able to compose a poem that hangs together and which uses natural sounding, unforced language?
If you used this lesson, we would like to hear how it went and about any adaptations you made to suit the needs of YOUR students.
What Are Sarcoids?
Equine sarcoids are the most common skin tumour in the horse, accounting for 40% of all equine cancers. They are locally invasive tumours which are variable in appearance, location and rate of growth.
Sarcoids are caused by Bovine Papilloma Virus, which may be spread by flies. Not all horses exposed to the virus develop sarcoids; some horses appear to be more susceptible than others.
This also explains why horses that have sarcoids remain susceptible and are more likely to grow additional sarcoids. Because of the viral cause, people often ask whether sarcoids are contagious, but no proof has yet been found that horse-to-horse contact causes horses to develop sarcoids. Sarcoids mainly occur around the head and in the groin and axilla area.
They seldom affect a horse’s usefulness, unless they are in a position likely to be abraded by tack. They do not usually resolve on their own, and most horses develop multiple sarcoids.
Types of Sarcoids
- Nodular sarcoids are firm spherical nodules found under normal-looking skin. They can be variable in size and can become ulcerated.
- Verrucous sarcoids are slow-growing, flat scaly tumours that look like warts. They can also look like ringworm or scars.
- Fibroblastic sarcoids are fleshy lumps which often ulcerate because they grow rapidly. They often occur in clusters and have an irregular shape.
- Occult sarcoids are flat hairless patches that occur mostly around the eyes, mouth and neck.
- Malignant sarcoids are highly aggressive, and these spread via lymphatic vessels, which results in lines of sarcoids spreading from the original sarcoid.
Sarcoids can, on some occasions, be confused with other tumours, and a biopsy can give more information about what kind of tumour your horse has. However, taking a small sample of a sarcoid can cause the lump to start growing rapidly. Because sarcoids are the most likely diagnosis for these lumps, your vet will remove the sarcoid completely and possibly send the tissue off to a lab for histopathology, which can determine whether the lump was a sarcoid.
Treatment or removal of sarcoids is not always necessary, but when treatment is required, it can prove difficult and possibly expensive. Sarcoids can regrow after treatment, and no treatment as yet is 100% successful.
Success rates vary between types of treatment. It is important to note that every treatment failure reduces the success rates of future attempts.
- Ligation is where the sarcoid blood supply is cut off, causing it to shrink and drop off over time. Recurrence rates are more than 50%.
- Creams; there are various types, some more irritant to the skin than others and some have to be applied by your vet. They have a success rate of 40-60%.
- Injections; a chemotherapy drug is injected into nodular and fibroblastic sarcoids, causing the lesions to regress, but it can cause local swelling, and the injections sometimes need repeating.
-Radiation therapy; Iridium wires are inserted into a sarcoid to destroy it. It is the most effective treatment method, but it is costly and not widely available.
- Laser removal; a surgical laser cuts into and vaporizes soft tissue with minimal bleeding. The wound that the horse is left with heals very well on its own. This treatment has one of the highest success rates: 80-90% of horses do not regrow the treated sarcoid, and 70% of horses do not develop new sarcoids.
On the rare occasion that sarcoids regress on their own, these horses seem to develop immunity and do not develop further sarcoids. Please talk to your vet for more information on treatment options.
Figure 2 Nodular sarcoid, just before laser removal
Figure 3 Same lesion, just after laser removal. Spray applied
Figure 4 Initial stage of healing, a crust formed
Figure 5 Same lesion, almost healed completely
Figure 6 Nodular sarcoid and verrucous sarcoid in front of the sheath
Laser sarcoid removal clinics:
Take advantage of significant savings on offer at one of our laser sarcoid removal clinic days.
A deposit is required to secure your booking. |
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Click here for a complete list of Reading Like a Historian lessons, and click here for a complete list of materials available in Spanish.
Bivalve lifestyles and ecology
Modern bivalves live in a variety of depositional settings in fresh, brackish, and marine water, and fossil bivalves inhabited the same habitats. Marine, brackish, and freshwater origins for fossil bivalves are usually determined by their associations with other fossils more diagnostic of marine, brackish, or freshwater conditions. For example, fossil bivalves found in the same rock layer as fossil corals are likely marine bivalves, because corals live only in marine conditions. Sedimentology of the rock strata encasing bivalve fossils is also used to interpret the type of environment (e.g., river, lake, estuary, marine shelf, etc.) in which the original sediment was deposited. Almost all of the bivalve fossils found in the Paleozoic rocks of Kentucky were marine (or brackish) bivalves.
Modern bivalves can be free-swimming, live on or attached to another organism or a substrate (epifaunal), or live in the substrate (infaunal). Some infaunal and epifaunal bivalves attach to the substrate or other objects by strong, thread-like features called byssus. A small gap, called a byssal gape, may occur along the commissure of both valves in bivalves with byssus (Cox and others, 1969; Carter and others, 2012). Neither the foot nor the byssus is preserved as a fossil, but the gaps in shells may be preserved as indicators of their position when the bivalve was alive.
In some cases, fossil bivalves are found in life position, especially those that had infaunal lifestyles. Infaunal bivalves are already partly buried, which puts them partway to fossilization. In most cases, however, fossil bivalve shells have been transported to where they were deposited, and their original lifestyle or orientation in life must be interpreted. Details of well-preserved fossil shells can be used to determine whether fossil bivalves had byssus, a foot, siphons, and other soft parts, which are partly related to certain lifestyles. Studies of modern bivalves also show that many shell shapes are adapted to free-swimming, epifaunal, semi-infaunal, or infaunal lifestyles (Stanley, 1970, 1972). Hence, modern shell shapes (profiles, outlines) can sometimes be used to infer the lifestyles of fossil bivalves with similar shapes (e.g., Pojeta, 1971; Hoare and others, 1979).
Old landfill sites promote biodiversity, study shows
Brownfield sites are often promoted as suitable locations for development on the perception that protecting the 'green belt' limits damage to biodiversity.
Yet a new study shows that such sites are in fact of great importance to wildlife, providing scarce and species-rich habitats with little or no disturbance.
The research, led by Callum Macgregor and colleagues at the University of Hull, UKCEH and Butterfly Conservation, investigated the species richness of birds, plants and insects in relation to historical landfill sites, combining data from three national-scale UK biological monitoring schemes, including the UK Butterfly Monitoring Scheme.
Brownfield land harbours a rich array of bird, insect and plant species, with its biodiversity often under-appreciated (Richard Croft / commons.wikimedia.org).
The results demonstrate that species richness is higher for all three groups in landscapes containing ex-landfill sites, with these figures increasing as the area of ex-landfill increases.
The team found that for birds and insects species richness declines over time once landfill sites have closed, while for plants it increases.
The research provides evidence that the repurposing or development of brownfield sites may have unintended negative consequences for biodiversity, and that such development should only occur on smaller sites, or in areas where a high density of brownfield sites exists.
Macgregor C J, Bunting M J, Deutz P, Bourn N A, Roy D B & Mayes W M. 2022. Brownfield sites promote biodiversity at a landscape scale. Science of The Total Environment 804: 150162. DOI: 10.1016/j.scitotenv.2021.150162 |
These are the integers that divide 91 evenly; they can be expressed either as individual factors or as factor pairs. In this case, we present them both ways. This is the mathematical decomposition of a particular number. While factors are usually given as positive integers, take note of the comments below about negative numbers.
A prime factorization is the result of factoring a number into a set of components in which every member is a prime number. This is generally written by showing 91 as a product of its prime factors. For 91, the result is: 91 = 7 x 13
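Trial division reproduces this factorization; a minimal Python sketch (the function name is ours):

```python
def prime_factors(n):
    """Return the prime factorization of n as a list, e.g. 91 -> [7, 13]."""
    factors = []
    d = 2
    while d * d <= n:
        # Divide out each prime factor as many times as it appears.
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(91))  # [7, 13]
```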
Yes! 91 is a composite number: it can be written as the product of two positive integers other than 1 and itself.
No! 91 is not a square number. The square root of this number (9.54) is not an integer.
This number has 4 factors: 1, 7, 13, 91
More specifically, shown as pairs...
(1*91) (7*13) (13*7) (91*1)
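These pairs can be enumerated by checking divisors only up to the square root; a short Python sketch (illustrative, not the site's own calculator):

```python
import math

def factor_pairs(n):
    """Return the factor pairs of n, e.g. 91 -> [(1, 91), (7, 13)]."""
    pairs = []
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            pairs.append((d, n // d))  # d and its cofactor
    return pairs

print(factor_pairs(91))  # [(1, 91), (7, 13)]
```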
The greatest common factor of two numbers can be determined by comparing the prime factorizations (factorisations in some texts) of the two numbers and multiplying together the prime factors they share. If there is no common prime factor, the gcf is 1. This is also referred to as the highest common factor: the largest factor the two numbers share. The least common factor (the smallest number in common) of any pair of integers is 1.
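The prime-factor comparison described above can be sketched in Python and checked against the standard library's `math.gcd` (the helper names are ours):

```python
import math
from collections import Counter

def gcf_by_primes(a, b):
    """Compute the greatest common factor by comparing prime factorizations."""
    def prime_counts(n):
        counts = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                counts[d] += 1
                n //= d
            d += 1
        if n > 1:
            counts[n] += 1
        return counts

    # Counter intersection keeps each shared prime at its lower exponent.
    shared = prime_counts(a) & prime_counts(b)
    result = 1
    for p, k in shared.items():
        result *= p ** k
    return result

print(gcf_by_primes(91, 35))  # 7  (91 = 7 x 13, 35 = 5 x 7)
print(gcf_by_primes(91, 10))  # 1  (no common prime factor)
```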
We have a least common multiple calculator here. Its solution is the lowest common multiple of two numbers.
A factor tree is a graphic representation of the possible factors of a number and their sub-factors, designed to simplify factorization. It is created by finding the factors of a number, then finding the factors of those factors. The process continues recursively until only prime factors remain, which together form the prime factorization of the original number. In constructing the tree, be sure to remember the second item in each factor pair.
To find the factors of -91, find all the positive factors (see above) and then duplicate them by adding a minus sign before each one (effectively multiplying them by -1). This is how negative factors are handled.
Divisibility refers to a given integer being divisible by a given divisor. Divisibility rules are a shorthand system for determining what is or isn't divisible, including rules about odd-number and even-number factors. They allow a student to estimate the status of a given number without computation.
Preschool Maths Worksheet
Published Friday, October 9th, 2020.
Are you the parent of a toddler? If you are, you may be looking to prepare your child for preschool from home. If you are, you will soon find that there are a number of different approaches that you can take. For instance, you can prepare your child for social interaction by setting up play dates with other children, you can have arts and crafts sessions, and so much more. Preschool places a relatively large focus on education; therefore, you may want to do the same. This is easy with preschool worksheets. When it comes to using preschool worksheets, you will find that you have a number of different options. For instance, you can purchase preschool workbooks for your child. Preschool workbooks are nice, as they are a relatively large collection of individual preschool worksheets. You also have the option of using printable preschool worksheets. These printable preschool worksheets can be ones that you find available online or ones that you make on your computer yourself.
There are many opportunities to teach your child how to count. You probably already have books with numbers and pictures, and you can count things with your child all the time. There are counting games and blocks with numbers on them, wall charts and a wide variety of tools to help you teach your child the basic principles of math. Mathematics worksheets can help you take that initial learning further to introduce the basic principles of math to your child, at a stage in their lives where they are eager to learn and able to absorb new information quickly and easily. By the age of three, your child is ready to move onto mathematics worksheets. This does not mean that you should stop playing counting and number games with your child; it just adds another tool to your toolbox. Worksheets help to bring some structure into a child’s education using a systematic teaching method, particularly important with math, which follows a natural progression.
The addition, subtraction and number-counting worksheets are meant to improve and develop the IQ skills of kids, while the English comprehension and grammar worksheets help students construct error-free sentences. The 1st grade worksheets can also be used by parents to bridge between kindergarten lessons and the 2nd grade program. It often happens that children forget, or feel unable to recollect, the lessons learnt in the previous grade. In such situations, 1st grade worksheets become indispensable documents for parents as well as students.
1st grade worksheets are used for helping kids learning in the first grade in primary schools. These worksheets are offered by many charitable & commercial organizations through their internet portals.
You may be tired of learning glycolysis because you never found it explained in a simple way. Here I make it easy, with a picture for each step, and then all of the steps summed up in a single picture.
How can we define GLYCOLYSIS?
“Glycolysis is a metabolic pathway in which one molecule of glucose is broken down into 2 molecules of pyruvate with the release of energy in the form of ATP and NADH.”
Before going into glycolysis you should know some terms in advance, so that the details to come are easy to follow.
After the formation of pyruvate, glycolysis moves on to the next stage (the Krebs cycle) in the presence of oxygen, or it stops after the formation of lactate from pyruvate.
Up to this step the pathway occurs in all cells, but the further steps depend on the kind of glycolysis.
What are these kinds of glycolysis?
These are aerobic glycolysis and anaerobic glycolysis.
i. Aerobic glycolysis:
Aerobic glycolysis is glycolysis that moves on to the further step (the Krebs cycle) when oxygen and mitochondria are available, and it produces more ATP.
ii. Anaerobic glycolysis:
As the name indicates, this is glycolysis that does not proceed further because oxygen and mitochondria are absent. Pyruvate is changed to lactate and the process repeats, as will be discussed later.
Cells like RBCs, the renal medulla, the eye lens, and the testes normally do anaerobic glycolysis. In other cells, even when mitochondria are available, anaerobic glycolysis takes over because of severe hypoxia (lack of oxygen).
Sometimes during severe exercise the oxygen demand is very high; even though the respiratory system is more active, the demand cannot be met, and the cells start anaerobic glycolysis.
As glycolysis requires glucose, we will now discuss the transport of glucose to the cell.
Transport of glucose:
Glucose, fructose, and galactose are absorbed from the GIT (gastrointestinal tract) into the portal circulation, which transports them to the liver. The liver cells (hepatocytes) store most of the glucose during a meal to protect the body from hyperglycemia, releasing only a small amount into the general circulation. The hepatocytes also convert fructose and galactose to glucose.
But here we are talking about glucose transport to the cell. For this, we have to know about the glucose concentration between extracellular fluid and blood, and the concentration between extracellular fluid and cell.
The concentration of glucose between extracellular fluid and blood:
The concentration of glucose between extracellular fluid and blood is the same because glucose can move freely between blood and extracellular fluid.
The concentration of glucose between extracellular fluid and cell:
The glucose concentration between the extracellular fluid and the cell is not the same, because glucose cannot move freely into the cell; it needs a special transporter.
What are these special transporters?
On the basis of the transporter, the transport system is of two types.
- Facilitated diffusion
- Glucose-Na+ co-transport
- Facilitated diffusion
This is the type of diffusion that occurs through a special transporter protein called glucose transporter (GLUT).
Can you explain this process?
In the nucleus there is a gene for the glucose transporter (GLUT), which is transcribed into mRNA; the mRNA moves to the cytoplasm, where the glucose transporter (GLUT) is synthesized. After synthesis it moves to the plasma membrane and embeds in it.
The structure of this protein is like open hands waiting to catch a ball; glucose binds there. After binding, the transporter closes, as hands close after catching a ball, and moves the glucose inside.
This is how glucose is transported to the cell.
Are there any other types of GLUT?
Yes, there are 6 types of GLUT which are the following.
- GLUT-1 and GLUT-3:
GLUT-1 and GLUT-3 are abundant in most tissues, especially the CNS and RBCs, and move glucose freely into the cell; that is why neurons have a large number of GLUT-1 and GLUT-3. Neurons need a quick and continuous supply of energy, which these carrier proteins make possible.
- GLUT-2:
The name indicates that GLUT-2 is a two-way transporter. For example, when the glucose concentration is high in the blood it moves glucose into cells like hepatocytes, adipocytes, and muscle cells, but when the concentration of glucose is high in these cells it moves glucose back to the blood.
- GLUT-4:
GLUT-4 is insulin-dependent: when insulin is secreted it works; otherwise it does not.
- GLUT-5:
GLUT-5 is not a glucose transporter; it is primarily a fructose transporter. (Fructose is sweeter than glucose, and the testes prefer to run their metabolism on fructose.) Previously we said that fructose and galactose are converted to glucose, which is then transported to the blood, but some fructose molecules are not converted to glucose and are carried to the testes and sperm via GLUT-5.
- GLUT-7:
GLUT-7 is present on the smooth endoplasmic reticulum (SER). Gluconeogenesis is carried out in the smooth endoplasmic reticulum, which is why these transporters are present there.
Steps of glycolysis:
Now here we are going to the real steps of glycolysis.
As you know glucose is transported to the cell and now it is available. The first step in glycolysis is the conversion of glucose into glucose-6-phosphate with the use of one ATP.
Do you know why it is called “glucose 6 phosphate”?
Because the phosphate is added to carbon number 6; that is why it is now called glucose-6-phosphate. The process of adding phosphate to a molecule is called phosphorylation.
From where does that phosphate come?
ATP has three phosphate groups, so it provides one phosphate to the glucose and is itself converted to ADP.
It means we invested energy here?
Exactly; this is the energy-investment phase, where we invested one ATP. To start a business we must invest cash first, and then we collect the profit; the same is true here.
Is there any enzyme that works here?
Of course, this is an important question. The enzyme working here can be hexokinase or glucokinase.
Do both work the same way?
Yes, both catalyze the same reaction.
Then why do they have different names?
Let’s explain this.
Glucokinase works at a higher concentration of glucose. As you know, the glucose concentration is high in liver cells (hepatocytes), so glucokinase works there.
Why does Glucokinase work on a high concentration of glucose?
Due to high Km and high Vmax.
What are these terms?
Don’t worry, let me explain these two terms.
Km is the concentration of substrate at which half of the enzyme is saturated (the reaction runs at half of Vmax).
Glucokinase has a high Km, due to which it works only at high concentrations of glucose, converting glucose to glucose-6-phosphate.
Vmax is the maximum velocity the reaction can attain, when the enzyme is fully saturated.
Glucokinase has a high Vmax, due to which it can process glucose rapidly when the concentration is high.
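The Km/Vmax contrast can be made concrete with the Michaelis-Menten equation, v = Vmax·[S]/(Km + [S]). The constants below are illustrative placeholders, not measured kinetic values:

```python
def mm_velocity(s, vmax, km):
    """Michaelis-Menten reaction velocity at substrate concentration s."""
    return vmax * s / (km + s)

# Illustrative (not measured) constants: hexokinase has a low Km,
# glucokinase a high Km and a high Vmax.
hexokinase  = dict(vmax=1.0,  km=0.1)    # arbitrary units
glucokinase = dict(vmax=10.0, km=10.0)

for glucose in (0.1, 5.0, 20.0):  # low, normal, high glucose (mM)
    hk = mm_velocity(glucose, **hexokinase)
    gk = mm_velocity(glucose, **glucokinase)
    print(f"[glucose]={glucose:5.1f}  hexokinase v={hk:.2f}  glucokinase v={gk:.2f}")
```

At low glucose the low-Km hexokinase is already near half its maximum rate, while the high-Km glucokinase only speeds up as glucose rises — exactly the division of labor described above.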
Where is this enzyme present in the cell, and how is it activated?
Glucokinase is held in the nucleus of hepatocytes, bound to a special protein called "glucokinase regulatory protein." When the glucose level rises, glucose moves into the nucleus and frees glucokinase, activating it.
Opposite to glucokinase, hexokinase works at low glucose concentrations, which is why it is present in most tissues of the body.
Hexokinase has a high affinity for glucose, so it quickly becomes saturated; it has a low Km.
Because hexokinase is quickly half-saturated, its reaction velocity cannot rise very high, so it also has a low Vmax.
Where is this enzyme present in the cell?
This enzyme is present everywhere in the cytoplasm, so it does not need such an activation mechanism; it works even at low glucose concentrations.
Can you tell us how this enzyme's activity is controlled? I mean, how does the cell know that these enzymes need to be stopped?
That’s a good question.
After glucose-6-phosphate is formed, it signals the enzyme to stop its activity (product inhibition).
This first step in glycolysis is a one-way reaction, which means the enzyme cannot run this step in reverse.
That’s good and easy.
The second step in glycolysis is the conversion of glucose-6-phosphate to fructose-6-phosphate. The carbon numbers are the same in both molecules; glucose and fructose differ only in the arrangement of their carbon and hydrogen atoms.
There must be an enzyme that is doing this step?
Yes, every step in glycolysis requires an enzyme for conduction. The enzyme which is acting here is phosphoglucose isomerase.
Why is there no investment of ATP?
Because this step adds no new phosphate; no phosphorylation of glucose-6-phosphate or fructose-6-phosphate is needed.
So it is easy: the steps where phosphate is added require ATP.
Exactly, now you got the point.
This step is reversible, because the phosphoglucose isomerase enzyme can also run the reaction backwards.
In the third step, Fructose-6-phosphate is converted to Fructose 1, 6 Bisphosphate.
Did you guess anything here?
Oh yes, again ATP is invested here for adding phosphate to Fructose-6-phosphate.
Again can you tell us which enzyme acts here?
Phosphofructo kinase-1 is the enzyme that runs this step.
And do you know why it is called Phosphofructo kinase-1?
Look at the product of this step, Fructose 1, 6 Bisphosphate, and the previous step product, Fructose-6-phosphate.
What did you observe?
Additional phosphate on carbon-1.
That’s your answer, Phosphofructo kinase-1 adds phosphate at carbon-1 that’s why it is called Phosphofructo kinase-1. The “1” in Phosphofructo kinase-1 shows the number of carbon at which it has added phosphate.
How is this enzyme activated?
This enzyme responds to many influences and drives the reaction in the forward direction.
What are these influences?
Before explaining this, let me describe the structure of the enzyme. Phosphofructokinase-1 has special binding points where different molecules bind to regulate it. The diagram shows one point for ATP, another for citrate (produced in the Krebs cycle), and a third for cAMP (cyclic AMP).
For example, if the cell has low ATP and citrate while the concentration of cAMP is high, it means the cell has used a lot of ATP and produced a lot of cAMP; the cAMP binds to its point, and the low ATP and citrate levels at their points signal the enzyme to work.
In contrast, if the levels of ATP and citrate are high and the concentration of cAMP is low, the cell has enough energy, so the enzyme is signaled to stop.
So cAMP is a positive regulator of phosphofructokinase-1, because a high level stimulates the enzyme, while ATP and citrate are negative regulators, because their high concentrations signal the enzyme to stop.
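This regulatory logic can be sketched as a toy rule; the threshold and the simple boolean combination are assumptions for illustration, not real biochemistry:

```python
def pfk1_active(atp, citrate, camp, threshold=1.0):
    """Toy model of phosphofructokinase-1 regulation: the enzyme is
    signaled 'on' when ATP and citrate are low (energy is needed)
    and cAMP is high (much ATP has been spent)."""
    energy_low = atp < threshold and citrate < threshold
    demand_high = camp >= threshold
    return energy_low and demand_high

print(pfk1_active(atp=0.2, citrate=0.3, camp=2.0))  # True: cell needs energy
print(pfk1_active(atp=3.0, citrate=2.5, camp=0.1))  # False: energy is plentiful
```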
Fructose 2,6-bisphosphate is the super-regulator and activator of the phosphofructokinase-1 enzyme.
Let me ask you?
Do you remember step 3 where Fructose-6 phosphate was converted to Fructose 1, 6 bisphosphate?
Yes, I remembered it.
During this conversion a very small number of molecules (e.g., 2 out of 10) are converted to fructose 2,6-bisphosphate, which strongly activates phosphofructokinase-1 to convert fructose-6-phosphate into fructose 1,6-bisphosphate very rapidly.
But what converts this to fructose 2,6-bisphosphate?
An enzyme — and the enzyme doing this conversion is very interesting, because it can both add and remove phosphate from this molecule. It has a hermaphrodite character.
Can you tell us in simple words what enzyme actually is?
In simple words, this enzyme has two parts: one has kinase activity, phosphofructokinase-2 (which adds phosphate to a substance, i.e., phosphorylation), while the other has phosphatase activity, phosphofructophosphatase (which removes phosphate from the molecule).
What is meant by kinase activity and phosphatase activity?
Well, remember: enzymes with "kinase" in their name add phosphate to a molecule, while "phosphatase" enzymes remove phosphate from a molecule. From this it follows that the kinase activity pushes glycolysis in the forward direction, while the phosphatase activity stops and reverses it.
But how does this hermaphrodite enzyme know which of its parts should work, the kinase or the phosphatase?
For this let me take two conditions of glucose levels in the blood.
Which are these two conditions?
You could guess, but I will tell you anyway.
- A low blood glucose level
- A high blood glucose level
1. Low blood glucose level
When blood glucose level is low so there is less glucose in the blood.
Do you think this is the time for glycolysis?
No. If the blood glucose level is low, there is little glucose available for glycolysis, so it is not the time for glycolysis.
Coming to the hormone, do you know which hormone is high during low blood glucose levels?
Glucagon ("glucose gone," as a memory aid). When the glucose level is low, glucagon is released to convert stored glycogen to glucose.
Okay, listen carefully due to the low level of glucose glucagon binds with their receptor on the plasma membrane and this stimulated receptor is then coupled with G-stimulatory protein in the cell.
The G-stimulatory then stimulates adenyl cyclase which converts ATP to cAMP (cyclic AMP) and the number cAMP in the cell increases. The cAMP in turn activates the protein kinase A enzyme which phosphorylates the hermaphrodite enzyme.
Will both parts of this enzyme be activated?
No. Phosphorylation acts like a switch on the hermaphrodite enzyme: the kinase part is suppressed, while the phosphatase domain, called phosphofructo phosphatase, is activated.
Now, what will the phosphatase enzyme do?
As already discussed, the phosphatase (phosphofructo phosphatase) removes a phosphate group, so it will remove the phosphate from fructose 2,6-bisphosphate and convert it back to the step-2 product, i.e., fructose-6-phosphate.
What will happen after this?
As fructose 2,6-bisphosphate is converted back to fructose-6-phosphate, its concentration decreases. With less fructose 2,6-bisphosphate there is no activation of phosphofructokinase-1, so the conversion of fructose-6-phosphate to fructose 1,6-bisphosphate stops and glycolysis is arrested.
2. High blood glucose level
When the blood glucose level rises, the previous process, i.e., glucagon secretion, stops, and glucose is abundantly available.
You must know which hormone is secreted during high glucose levels to release energy?
Yes, I know: the hormone secreted during high blood glucose levels is insulin.
Will it again bind to a receptor on the plasma membrane?
Absolutely right. As you see in the picture, insulin binds to its receptor on the plasma membrane and, through a series of reactions, activates a special type of protein phosphatase, which removes the phosphate from the hermaphrodite enzyme.
After this, the kinase domain of the hermaphrodite enzyme, which is known as phosphofructokinase-2, is activated.
Is it going to phosphorylate fructose-6-phosphate to fructose 2,6-bisphosphate?
Exactly. It will phosphorylate fructose-6-phosphate to fructose 2,6-bisphosphate, which in turn activates phosphofructokinase-1, and glycolysis proceeds forward.
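The hormonal switch just described can be sketched as a tiny toy model. This is only an illustrative sketch of the logic above (glucagon leads to phosphorylation and the phosphatase domain; insulin leads to dephosphorylation and the kinase domain); the function name and return values are invented for this sketch, and real enzyme kinetics are far more complex.

```python
# Toy model of the bifunctional "hermaphrodite" enzyme (phosphofructokinase-2 /
# phosphofructo phosphatase). A sketch of the regulatory logic, not kinetics.

def bifunctional_enzyme_state(blood_glucose_low):
    """Return (hormone, active_domain, glycolysis_proceeds) for a glucose state."""
    if blood_glucose_low:
        # Glucagon -> cAMP rises -> protein kinase A phosphorylates the enzyme.
        hormone = "glucagon"
        enzyme_phosphorylated = True
    else:
        # Insulin -> a protein phosphatase dephosphorylates the enzyme.
        hormone = "insulin"
        enzyme_phosphorylated = False

    if enzyme_phosphorylated:
        # Phosphatase domain active: fructose 2,6-bisphosphate falls,
        # PFK-1 loses its activator, glycolysis is arrested.
        return (hormone, "phosphofructo phosphatase", False)
    # Kinase domain active: fructose 2,6-bisphosphate rises,
    # PFK-1 is activated, glycolysis proceeds.
    return (hormone, "phosphofructokinase-2", True)
```

For example, `bifunctional_enzyme_state(True)` returns `("glucagon", "phosphofructo phosphatase", False)`.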
In step 4, an enzyme known as aldolase cuts the six-carbon fructose 1,6-bisphosphate into two molecules of three-carbon compounds: dihydroxyacetone phosphate (DHAP) and glyceraldehyde-3-phosphate. DHAP has its phosphate on carbon-1, while glyceraldehyde-3-phosphate has its phosphate on carbon-3, as their names indicate.
Now the reaction proceeds on two sides, left and right as in the picture, to produce two pyruvates.
But how? The two sides hold two different molecules, i.e., dihydroxyacetone phosphate (DHAP) and glyceraldehyde-3-phosphate?
DHAP is first converted to glyceraldehyde-3-phosphate by the triose phosphate isomerase enzyme, so that both sides carry the same product into the further steps.
Up to these steps, glycolysis has invested energy in the form of ATP.
From now on, the steps on both sides are the same, so I will explain only one side. From here on, these are the energy-gaining steps.
In this step, glyceraldehyde-3-phosphate is converted to 1,3-bisphosphoglycerate by the glyceraldehyde-3-phosphate dehydrogenase enzyme, with the conversion of NAD+ to NADH + H+.
Where does the phosphate come from? Did we invest ATP again?
No. Let's go back to the enzyme working here, i.e., glyceraldehyde-3-phosphate dehydrogenase. This enzyme has multiple points where different molecules bind: at one point inorganic phosphate binds, while the second and third points bind NAD+ and the substrate (glyceraldehyde-3-phosphate).
But both molecules, i.e., glyceraldehyde-3-phosphate and 1,3-bisphosphoglycerate, are three-carbon compounds, so why this conversion?
Yes, they are; the purpose is to produce and capture energy.
During this step, a lot of energy is released, which is trapped by NADH (one NADH produces 3 ATP in the electron transport chain, in the presence of oxygen and mitochondria), but there is still enough energy left to attach the inorganic phosphate bound to glyceraldehyde-3-phosphate dehydrogenase onto glyceraldehyde-3-phosphate, making it 1,3-bisphosphoglycerate.
1,3-bisphosphoglycerate has two phosphates, on carbon-1 and carbon-3. In this step, the enzyme phosphoglycerate kinase removes the phosphate from carbon-1 and produces 3-phosphoglycerate.
But we have studied that a kinase adds a phosphate to a molecule, and here it removed one?
Excellent question. This enzyme performs its kinase activity on ADP, converting it to ATP, not on 1,3-bisphosphoglycerate.
And where has the phosphate gone?
Excellent, you are really attending the class. The phosphate is used to convert ADP to ATP in this step, so it is a profit step in which we get 2 ATP from the two sides of glycolysis, as shown in the picture. Up to now, however, the net gain of ATP is zero, because we invested two ATP in the initial steps.
Do you know?
This phosphorylation of ADP to ATP is called substrate-level phosphorylation.
Can you explain more, please?
In this type of phosphorylation, we did not use oxygen, mitochondria, or the electron transport chain. Here the substrate, 1,3-bisphosphoglycerate, donated its phosphate directly to ADP, so we call it substrate-level phosphorylation.
Is this necessary?
Very much so. Some cells don't have mitochondria, so they depend on substrate-level phosphorylation; other cells have mitochondria but suffer a severe deficiency of oxygen (hypoxia), so they too depend on substrate-level phosphorylation.
Is there any term opposite to “substrate-level phosphorylation”?
Oxidative phosphorylation is its opposite. It occurs in the presence of oxygen, in mitochondria.
3-phosphoglycerate still has a phosphate on carbon-3, so there is still a chance to make a new ATP, and the cell wants to remove this phosphate to do so. For this, the enzyme phosphoglycerate mutase converts 3-phosphoglycerate to 2-phosphoglycerate, to make the phosphate removal easier in the next step.
So, in the previous step, the phosphate was moved from carbon-3 to carbon-2. Here the enzyme enolase twists the molecule to make the phosphate ready to leave (it is not removed yet), as shown in the picture. This phosphate is now called an enol phosphate, and the product is called phosphoenolpyruvate.
In this last step, the enol phosphate is removed from phosphoenolpyruvate, which is converted into pyruvate by the pyruvate kinase enzyme.
Will you ask again where the phosphate is gone?
Absolutely not. I know this phosphate is used to generate ATP from ADP. A total of 2 ATP is generated in this step (one from each side), and this phosphorylation of ADP to ATP is again substrate-level phosphorylation, because no oxygen is involved.
Let me put all these steps in one picture, so that you can understand them better after studying them one by one.
Do you know which steps are irreversible in glycolysis?
Let me tell you. Step 1 (glucose to glucose-6-phosphate) is irreversible, meaning that the enzyme in this step (glucokinase or hexokinase) can't reverse the reaction.
Step 3 (fructose-6-phosphate to fructose 1,6-bisphosphate) and step 9 (phosphoenolpyruvate to pyruvate) are also irreversible.
Why is this so important?
It is very important because in gluconeogenesis these steps are a hurdle that must be bypassed in order to reverse the pathway and produce a new glucose molecule.
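As a summary, the whole pathway and its irreversible steps can be laid out as a simple table. The listing below follows the conventional ten-step numbering of glycolysis, so the step positions differ slightly from the step numbers used in this text; it is an illustrative data structure, not part of any library.

```python
# Glycolysis as (reaction, enzyme, irreversible) tuples, in pathway order.
GLYCOLYSIS_STEPS = [
    ("glucose -> glucose-6-phosphate", "hexokinase/glucokinase", True),
    ("glucose-6-phosphate -> fructose-6-phosphate", "phosphoglucose isomerase", False),
    ("fructose-6-phosphate -> fructose 1,6-bisphosphate", "phosphofructokinase-1", True),
    ("fructose 1,6-bisphosphate -> DHAP + glyceraldehyde-3-phosphate", "aldolase", False),
    ("DHAP -> glyceraldehyde-3-phosphate", "triose phosphate isomerase", False),
    ("glyceraldehyde-3-phosphate -> 1,3-bisphosphoglycerate",
     "glyceraldehyde-3-phosphate dehydrogenase", False),
    ("1,3-bisphosphoglycerate -> 3-phosphoglycerate", "phosphoglycerate kinase", False),
    ("3-phosphoglycerate -> 2-phosphoglycerate", "phosphoglycerate mutase", False),
    ("2-phosphoglycerate -> phosphoenolpyruvate", "enolase", False),
    ("phosphoenolpyruvate -> pyruvate", "pyruvate kinase", True),
]

# The gluconeogenesis "hurdles" are exactly the irreversible steps.
irreversible_steps = [reaction for reaction, _, irr in GLYCOLYSIS_STEPS if irr]
```

Filtering the table recovers the three irreversible reactions named in the text.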
Do you know how many ATPs we invested and how many we get?
It would be good if you explained it.
Okay, let me explain it to you with a diagram.
In the picture, step 1 invested one ATP in order to add phosphate to glucose to make it glucose-6-phosphate.
Step 3 also invested one ATP, to convert fructose-6-phosphate to fructose 1,6-bisphosphate.
These are the energy-investment phases.
Now let's go to the energy-gain phases and stages.
In step 5, the energy released from the conversion of glyceraldehyde-3-phosphate to 1,3-bisphosphoglycerate is used to generate NADH (2 NADH from the two sides), which will produce 3 ATP each (6 ATP in total) in the later phases, if mitochondria are available.
Step 6 (1,3-bisphosphoglycerate to 3-phosphoglycerate) and step 9 (phosphoenolpyruvate to pyruvate) each generate one ATP per side (4 ATP in total).
So we invested 2 ATP and gain a net of 8 ATP, of which 4 ATP are obtained directly, while 6 ATP are obtained indirectly if mitochondria are available in the cell.
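This bookkeeping can be checked with a few lines of arithmetic. The sketch below follows the counts used in the text, including its older convention of 3 ATP per NADH; the numbers, not the code, are the point.

```python
# ATP bookkeeping for one glucose molecule, following the counts in the text.
ATP_INVESTED = 2        # step 1 (hexokinase) + step 3 (PFK-1)
ATP_DIRECT = 2 + 2      # phosphoglycerate kinase + pyruvate kinase, both sides
NADH_MADE = 2           # G3P dehydrogenase, both sides
ATP_PER_NADH = 3        # text's convention; requires oxygen and mitochondria

net_aerobic = ATP_DIRECT + NADH_MADE * ATP_PER_NADH - ATP_INVESTED
net_anaerobic = ATP_DIRECT - ATP_INVESTED  # NADH recycled by lactate dehydrogenase

print(net_aerobic, net_anaerobic)  # 8 2
```

The same arithmetic gives the net of 2 ATP for anaerobic glycolysis discussed below, since without mitochondria the NADH contributes no extra ATP.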
What happens to glycolysis if no mitochondria are available, as in RBCs?
This kind of glycolysis is called anaerobic glycolysis.
In other cells that have mitochondria, the NADH produced in glycolysis is shuttled into the mitochondria, where the electron transport chain uses it to produce ATP, and it returns to glycolysis as NAD+ to repeat the process. But an RBC has a limited supply of NAD+, which is consumed in glycolysis, and it has no electron transport chain or citric acid cycle, so its NADH cannot be recycled this way.
Does that mean the NADH is now stuck?
Not really; nature created everything perfectly. To regenerate NAD+, pyruvate is converted to lactate by the enzyme lactate dehydrogenase, and in the same reaction NADH is converted back to NAD+, so the RBC can repeat glycolysis.
But how does the enzyme know that it has to convert pyruvate to lactate?
When the concentration of NADH rises in the cell, lactate dehydrogenase "knows" it is time to convert pyruvate to lactate, because lactate dehydrogenase is very sensitive to the NADH concentration.
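The NAD+ recycling step can be sketched as a simple mass balance. The function below is purely illustrative (its name and signature are invented for this sketch): each pyruvate converted to lactate consumes one NADH and regenerates one NAD+.

```python
def ldh_recycle(pyruvate, nadh):
    """Toy mass balance for lactate dehydrogenase in an RBC.

    Each pyruvate -> lactate conversion consumes one NADH and
    regenerates one NAD+ for the next round of glycolysis.
    """
    converted = min(pyruvate, nadh)   # the reaction needs both substrates
    lactate = converted
    nad_plus_regenerated = converted
    return lactate, nad_plus_regenerated, pyruvate - converted, nadh - converted
```

With the 2 pyruvate and 2 NADH produced per glucose, `ldh_recycle(2, 2)` returns `(2, 2, 0, 0)`: all the NADH is recycled, which is why anaerobic glycolysis shows no net NADH production.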
So how many NADH and ATP are produced in the anaerobic glycolysis of an RBC?
The NADH produced in glycolysis is consumed again, so there is no net production of NADH, while a net of 2 ATP is produced in RBC glycolysis.
Is this amount of energy enough for the RBC?
Yes, this meager amount of energy is enough for the RBC to run its metabolic processes.
Can you tell us where the lactate goes then?
Well, let me give you a situation. Suppose you are doing strenuous exercise; your cells shift to anaerobic glycolysis and produce a lot of lactate. The lactate is then carried by the blood to the heart and liver, because these two organs love lactate.
The heart and liver convert it back to pyruvate and feed it into the citric acid cycle and the electron transport chain.
When temperatures drop and days get shorter, trees start to prepare for the cold of the winter. How do different kinds of trees adapt to the cold? Take a closer look at trees and get children to investigate the seasonal changes!
An African-American grandmother interweaves stories of her family’s ancestry and culture as she shows her granddaughter how to weave a traditional Gullah basket.
Teachers, kindergartners and 5th graders share their experiences using PLT activities like Every Tree for Itself, Tree Cookies, Renewable or Not, and Web of Life while on a field trip to Gully Branch tree farm in Georgia.
Trace a tree’s life events using “tree cookies,” the cross sections of trees.
Help students visualize and better understand the function of the inner parts of a tree trunk by creating this easy-to-make visual aid.
Try these teaching ideas to provide students with different learning styles and abilities with multiple avenues to acquire and process content.
Project Learning Tree activities are excellent tools to teach life skills. At a summer leadership camp in Georgia, students learned about leadership, teamwork, and volunteerism. |
What is Historic England?
Updated March 2, 2022
Historic England (officially known as the Historic Buildings and Monuments Commission for England) is the nondepartmental public body that looks after England’s historic environment.
Up until 2015, Historic England was commonly known as English Heritage. In that year, English Heritage split into two separate bodies: the English Heritage Trust, which kept the original English Heritage name, and the newly named Historic England. The former manages the National Heritage Collection, while the latter provides planning and conservation services.
Historic England grades homes with Grade I, Grade II and Grade II* designations. Credit: Bruno Martins/Unsplash
Historic England’s purpose is to make sure the threats to heritage are understood so that policies, effort and investment are targeted effectively. They do this through providing advice to homeowners, local planning authorities and government departments on how applications for planning permission or listed building consent can affect the historic environment.
The public body identifies and protects heritage by managing the National Heritage List for England, an official database of state-protected heritage assets that includes all of England's listed buildings. (A building is "listed" when it is of special architectural or historic interest considered to be of national importance and therefore worth protecting.) It also publishes a Heritage at Risk register and provides grants for heritage at risk.
Historic England’s system of grading historic buildings is as follows: Grade II listed buildings are of special interest and the most common, with 91.7% of all listed buildings being in this class; Grade II* buildings are particularly important buildings of more than special interest, with only 5.8% of listed buildings being Grade II*; and Grade I buildings are of exceptional interest, with just 2.5% of listed buildings being Grade I. |
Allergic reactions occur when our body recognizes a substance, called an allergen, as harmful to health. This contact causes an exaggerated response that can appear in various areas of the body.
The allergen is often a type of medicine that is taken to fight a disease or alleviate an ailment. Between 5% and 10% of the most common adverse drug reactions are allergic, which means that the patient's defense system overreacts to the drug.
Allergic drug reactions occur because the child's or adult's body reacts against the medicine that has been taken. An allergy never occurs the first time the medicine is taken; a certain number of doses must be taken before the reaction takes place.
Allergic drug reactions have several symptoms, the most frequent being urticaria (hives), which appears shortly after the reaction occurs, and rash. Fever may also occur.
The most serious symptoms that can occur are attacks of anaphylaxis or worrisome skin reactions. In addition, the child or adult may have vomiting, diarrhea, rhinoconjunctivitis or breathing difficulties.
To study a drug allergy, doctors record the symptoms that appeared after taking the drug, the composition of the medicine, what it was given for, how long it took for symptoms to appear, how long they lasted, and whether they went away on their own or required treatment.
If the reaction was severe and the responsible drug is known, the diagnosis is made without testing, from the medical history alone. But if the reaction was not very serious, or several medicines were involved, an allergological study should be carried out, i.e., the usual allergy tests.
Allergic drug reactions do not have a specific treatment. The only possible treatment, once diagnosed, is to prevent the child from taking the drug in question and other drugs from the same family. One can also resort to desensitization, which consists of administering the medicine to the allergic patient in gradually increasing doses.
Patricia Garcia. Editor of our site
What Does ASTM B117 Mean?
ASTM B117 is a standard test carried out to determine the corrosive effect of salt on metallic objects. It is done by spraying a salt solution onto a specimen housed in a closed chamber. This is an accelerated form of atmospheric corrosion testing: a corrosive atmosphere is introduced, allowing the test to be completed in less time than these corrosive processes would naturally take, because conditions in the test are normally harsher than those present in the natural environment.
ASTM B117 is also known as salt spray testing or fog testing.
Corrosionpedia Explains ASTM B117
In an ASTM B117 test, a metallic specimen is housed in a controlled chamber and is sprayed at different angles and locations with a salt solution. The salt (NaCl) concentration is usually much higher than in the atmosphere; the standard itself specifies a 5 percent solution by mass, while modified salt-spray practice can range from 3.5 to 20 percent. The chambers come in different sizes, and designs range from small cabinets to walk-in rooms capable of handling large specimens.
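As a rough illustration of preparing such a solution, the helper below computes the mass of salt and de-ionized water needed for a given mass-percent concentration. The function is hypothetical (not part of the standard, which should be followed for the actual procedure); the example uses the 5 percent by mass solution that ASTM B117 specifies.

```python
def salt_solution_masses(total_mass_g, percent_nacl):
    """Grams of NaCl and de-ionized water for a w/w percent salt solution."""
    nacl_g = total_mass_g * percent_nacl / 100.0
    water_g = total_mass_g - nacl_g
    return nacl_g, water_g

# 10 kg of the 5 % w/w solution specified by ASTM B117:
nacl, water = salt_solution_masses(10_000, 5)
print(nacl, water)  # 500.0 9500.0
```

The same helper covers the wider 3.5-20 percent range used in modified salt-spray practice.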
A humidifying tower with hot, de-ionized water is used to create hot, humid air. This is achieved by bubbling compressed air through the tower. The salt solution mixes with the hot, humid air at the nozzle and atomizes to form a corrosive fog. The relative humidity created by this fog is 100 percent. When lower humidity conditions are desired, air is blown into the chamber’s exposure zone.
This test might not produce exactly the same results as similar conditions in the natural environment, because of the high concentration of the corrosive element and the elevated temperature, neither of which generally occurs in the atmosphere. ASTM B117 is, however, a valuable test that impacts all industries in terms of quality control. It helps in determining suitable materials for a specific environment and gives useful information that can be used when developing coatings and new materials. The main sectors that benefit from these tests are the aircraft, automotive, transport, infrastructure, and paint industries.
While most of the thousands of nematode species on Earth are not harmful, some cause diseases in humans and other animals or attack and feed on living plants. Luckily, there are ways to deter these pesky pests from disrupting your garden soil.
The few parasitic species of these translucent, unsegmented worms measure about 1/50 inch long and cause root knots or galls, injured root tips, excessive root branching, leaf galls, lesions or dying tissue, and twisted, distorted leaves. Plants most commonly attacked at the roots include cherry tomatoes, potatoes, peppers, lettuce, corn, and carrots. Plants that sustain leaf and stem system injury include chrysanthemums, onions, rye, and alfalfa.
What Are Nematodes?
Often referred to as roundworms, nematodes are not closely related to true worms. They are multicellular animals with smooth, unsegmented bodies. The nematode species that feed on plants are so tiny that you need a microscope to see them. The adults often look long and slender, although some species appear pear-shaped. These plant parasites are not the same roundworms as the filarial nematodes that infect the human body, spread diseases, and wreak havoc on the immune system.
Some nematodes feed on the outer surfaces of a plant while others burrow into the tissue. Soil-dwelling nematodes are the most common culprits, but some species can damage plant roots, stems, foliage, and flowers.
No matter where they feed, these tiny worms can seriously damage crops by puncturing cell walls with their sharply pointed mouthparts. The real damage occurs when a nematode injects saliva into a cell from its mouth and then sucks out the cell contents. The plant responds to the parasitic worms with swelling, distorted growth, and dead areas. Nematodes can also carry viruses and bacterial diseases and inject them into plants. The feeding wounds they make also provide an easy entrance point for bacteria and fungi.
Beneficial nematodes that enrich the soil may feed on the decaying material, insects, or other nematodes.
What Nematodes Look Like
Unlike most other disease-causing organisms, plant-parasitic nematodes seldom produce any characteristic symptoms. Most of the symptoms that do appear are vague and often resemble those caused by other factors — such as viruses, nutrient deficiencies, or air pollution. Nematodes feeding aboveground may cause twisted and distorted leaves, stems, and flowers.
If nematodes are feeding on the roots, a plant may look yellowed, wilted, or stunted and infected food crops will usually yield poorly. If you suspect worm injury to roots, carefully lift one of the infected plants and wash off the roots for easier inspection. If nematodes are causing damage, you may see small galls or lesions, injured root tips, root rot, or excessive root branching.
How They Spread
Whether they feed above or below ground, most nematodes spend at least part of their life cycle in the soil. While they can’t move very far under their own power, they can swim freely in water and they move more quickly in moist soil — so it's a good idea to keep your soil well-drained. They also spread by anything that can carry particles of infested soil, including tools, boots, animals, and infected plants.
What About Beneficial Nematodes?
Beneficial nematodes can range from 1/25 inch to several inches long and have slender, translucent, unsegmented bodies. Their roles in the garden vary. Some are soil dwellers that break down organic matter, especially in compost piles. You can easily spot these 1/4-inch-long decomposers.
These types actually combat a variety of pest species, including weevils, clearwing borers, cutworms, sod webworms, chinch bugs, and white grubs. Nematodes attack and kill these insects by either injecting deadly bacteria or entering the host, parasitizing, and then feeding on it.
When purchasing and applying them to your garden, it is very important to select the right species, because different kinds of nematodes are effective against different pests. In addition, nematodes require moist, humid conditions and fairly warm soil to do their job well. Water all application sites before and after spreading nematodes, and follow application instructions carefully.
The issue of racism has been discussed and observed across institutions throughout the years, including schools, the media, entertainment, and, more recently, social media platforms. Racism is defined by philosophers as the belief that human beings are biologically divided into different races. Even though many people would like to believe it is no longer an issue, this is simply not the case. The fact that there are still evident racial inequalities in the 21st century shows the magnitude of the problem. School systems have faced this problem firsthand: students in minority schools continue to struggle while students in white schools continue to excel (Noguera, 2017). Racial inequality is a problem that needs to be addressed, particularly in the United States, where we see it very often. African American and Hispanic students face a higher risk of not attaining the same level of education as white kids. Although many administrations and legislators have attempted to address this problem, their efforts have not proven effective, because underlying issues such as economic and social disparities make it challenging. Racial inequality, rather than decreasing and subsiding, has increased and gotten worse than before.
According to data collected by the Civil Rights Data Collection between 2015 and 2016, more than 96,000 minority schools were facing investigations due to racial inequality. White students in these elementary schools face less harsh punishments compared to Black and Hispanic students (Noguera, 2017). Controlling racial inequality, which is a national issue, would help to improve the academic performance of children in schools.
The research involved detailed reviews of different scholarly articles that give insight into how racial inequality in minority schools affects the education of students. The articles were used to draw a connection between racial inequality, academic performance, and economic inequality.
The research conducted detailed the impact racial inequality has on the education students receive in minority schools. The articles drew connections between racial inequality, economic and social class, and the academic performance of students (Quintana & Mahgoub, 2016). Segregation of minority schools and lack of funding from the government increase the chances of poor performance among African American and Hispanic students. The environments in which different students live also contribute to the academic gap seen between African American students, Hispanic students, and white students. Minority kids in schools have fewer educational resources because of their socioeconomic status and a long history of inequality.
History of Inequality
The United States has a long and dark history when it comes to racial disparities. Institutional racism in educational institutions remains a current issue in different countries. The issues, however, began early in the 1960s, when the civil rights movement tried to confront institutional racism (Houkamau & Sibley, 2015). Schools, churches, courts, and parks were segregated according to race. White institutions enjoyed privileges such as well-guarded compounds, while institutions serving people of color were abandoned and less developed. The issue is still a problem in this decade, as white-dominated institutions such as schools receive more funding compared to minority schools. The difference between the institutions does not give a fair ground for healthy competition among the students (Houkamau & Sibley, 2015). Research conducted in recent years indicates instances of institutional abandonment due to racial inequality.
As mentioned earlier, schools dominated by whites receive funding even during difficult times. The government is confident about these institutions and will therefore fund any project. If there is a perception that Black students are increasing in a school, the government loses confidence in the institution and therefore cuts the budget. If other institutions, such as courts, do not intervene, the schools are left to collapse as students suffer. Black and Hispanic school systems have experienced similar issues, while white-dominated schools with fewer students are left to prosper. Despite the economic development and prosperity, especially in developed countries, issues of funding minority schools are still debated. We might think that multiculturalism and diversity have helped to eliminate racism, but that is not the case. Bowser argues that the issue of racism began back when slavery was part of the culture of many nations. The origin of individual racism was a result of social racism against people of color (Houkamau & Sibley, 2015). There exists a relationship between cultural racism, institutional racism, and individual racism.
Racial Inequality Impact on Education
Research indicates that teachers, especially in elementary schools, are biased depending on the religion, social class, or race of their students. The difference in behavior demonstrated by teachers results in racial disparities, seen in how students behave around one another (Warikoo et al., 2016). Research done by Tenenbaum and Ruck (2017) indicated that teachers tend to be harsher while addressing African American and Hispanic students, compared to how they address white students. White students are considered braver and smarter compared to students of color. Appiah, a philosopher, claims that extrinsic racism is what brings about ethnic issues in our societies. He explains that extrinsic racism occurs when an individual believes that all individuals of a race share an inherited characteristic.
An extrinsic racist believes that some people, based on their race, are more intelligent, more industrious, kind, smart, courageous, and trustworthy than people from other races. Unfortunately, students in elementary schools observe these differences and begin to question the issue of racism at a very tender age. Traumatizing incidents, such as bullying, may affect the mental health of the affected student. Most public elementary schools are dominated by people of color and are often neglected or in poor condition. Poverty levels increased drastically during the Great Recession of 2008, and because of it, the percentage of children coming from families in poverty went up to 22% (Putnam, 2015). Now, approximately 52% of students come from low-income families, the highest percentage recorded by the National Center for Education (Noguera, 2017).
School Segregation and Academic Gap
Research conducted by Reardon (2016) indicates the effects of segregation on the academic performance of students. Segregation in elementary schools causes a huge academic gap between the privileged white students and the less privileged Black and Hispanic students. It is evident that segregation and inequality are major known issues that affect academic performance. As the economic gap increases between different social classes, racial inequality also increases (Reardon, 2016). Lack of funding from the local and national governments has also resulted in more incidences of racial inequality. Civil rights groups have reported the challenges that minorities have in accessing educational facilities (Noguera, 2017). The difference in academic performance between minority schools and other schools is accounted for by a lack of concern from the national government. The areas where African American and Hispanic students live are less developed, which affects their academic performance.
Underfunding is a significant issue that the government should address. According to Kochhar & Fry (2014), there is a close relationship between racial inequality, economic inequality, and academic performance. Most of the immigrants living in the United States are forced to attend public schools due to their low incomes (Noguera, 2017). Overcrowding in the schools makes it hard for the management team to accommodate all students adequately. Despite the increase in students in public schools, funding from the government is still limited. Unfortunately, white-dominated schools with fewer students receive more money, which is used to develop new facilities, resulting in better academic performance. Racism in segregated schools affects not only the students but also the morale of the teachers (Reardon, 2016).
Coleman conducted research in the United States following complaints of school segregation. The findings from the research were alarming. More than 90% of Black students attended educational institutions that were dominated by Blacks, while 90-100% of white students attended white-dominated educational institutions (Quintana & Mahgoub, 2016). The academic gap between these institutions was a significant concern. Performance in the minority schools was lower compared to white-dominated schools. The facilities used in the minority schools were poorly maintained and fewer compared to those in other communities. The research by Coleman also indicated the adverse effects of socioeconomic inequality, which contributed to segregation in the schools.
Segregation in school systems, for example, leads to racial gaps, since the quality of education and racial composition are positively correlated. As discussed earlier, there is a vast difference between schools dominated by whites and those dominated by Black and Hispanic students. Residential segregation, i.e., the environment where children live, also impacts their academic performance (Quintana & Mahgoub, 2016). Research indicates that most Black students in elementary schools come from insecure neighborhoods, which makes it more difficult for them to study. The white students mainly come from the suburbs and can study in their homes. The difference between the two environments affects academic performance. There is a connection between socioeconomic status and achievement gaps (Quintana & Mahgoub, 2016). Segregation in schools also affects teacher turnover. It is challenging to attract and retain highly qualified teachers in minority schools because of the low salaries. Insecurity in segregated residential areas also affects teacher turnover, thus affecting the academic performance of students.
School-Based Racial Discrimination
The individuals most affected by school-based racial discrimination are mainly African American students (Byrd, 2015). Students affected by racial disparities in schools develop low self-esteem and perform poorly (Banerjee et al., 2018). African American students move from one school to another, increasing the chances of unfairness and discrimination based on their race. According to Benner and Graham (2013), there is an association between racial discrimination in elementary school and discrimination from teachers and peers. To understand how these racial disparities continue to prevail in society, techniques such as Implicit Association Tests are used to show people's perceptions and prejudices based on race. Explicit attitudes are beliefs about a race that can be expressed in writing or during oral communication (Warikoo et al., 2016). Implicit attitudes, on the other hand, are involuntary, mainly the result of prejudice. Teachers often display implicit attitudes when addressing students of color, and the adverse effects of those attitudes undermine these students' ability to learn alongside their white peers (Warikoo et al., 2016). Students who are bullied often become insecure about themselves and feel unsafe at school. Children who face such discrimination develop psychological problems that may affect them in the present and the future. Discrimination from teachers and peers results in a lack of concentration in class and, in turn, poor academic performance. Research by Medvedeva (2010) found that immigrant elementary students who have been discriminated against develop poor communication skills and learn more slowly than other students (Banerjee et al., 2018).
Why is it still an issue?
Critical race theory, popularized by Kimberlé Crenshaw and Derrick Bell, emphasized that despite the civil rights movements, the lives of people of color had not improved. The theory holds that racism is embedded in American society. Race continues to be a subject of debate on a global scale, and racism continues to affect people of color adversely (Ladson-Billings & Tate, 2016). Given its history of slavery, race, and racism, the United States has often been singled out for analysis using this theory. Some sociologists and philosophers today argue that instances of racism have decreased in recent years compared with the 18th and 19th centuries. The theory explains the marginalization and segregation found in different institutions, including educational institutions. Educational policies have often affected people of color negatively (Ladson-Billings & Tate, 2016). According to Delgado, the tenets of critical race theory are essential for understanding the educational system of the United States.
The issue of racial inequality in minority schools significantly affects the academic performance of elementary students. Other contributing factors, such as socioeconomic inequality, also affect students who are African American or Hispanic. Addressing school segregation is vital because it affects students' education. Racism at the institutional level has increased in recent years, and students are no longer treated the same way at school. Educational institutions where the majority of students are white are likely to receive more funding and be prioritized over those that serve African American students, and white communities have better facilities than communities of color. Racism has many negative impacts on various aspects of society.
Institutional racism has been essential in facilitating the increase of white dominance and white privilege. Institutional racism is responsible for the reinforcement of both cultural and individual racism. Racial discrimination has often resulted in mental health issues. Children who have experienced discrimination lose their confidence and are often sad, which may lead to depression. Such children have a low concentration in class, which often results in low performance. |
Photochemistry using inexhaustible solar energy is an eco-friendly way to produce fine chemicals outside the typical laboratory or chemical plant environment. However, variations in solar irradiation conditions and the need for an external energy source to power electronic components limit the accessibility of this approach.
Professor Timothy Noël and co-workers in the Flow Chemistry group of the University of Amsterdam’s Van’t Hoff Institute for Molecular Sciences have developed a fully operational standalone solar-powered mini-reactor which offers the potential for the production of fine chemicals in remote locations on Earth, and possibly even on Mars. In a paper published by ChemSusChem, the team present their unique, fully off-grid photochemistry system.
The new system, which is capable of synthesizing drugs and other chemicals in economically relevant volumes, “shines in isolated environments and allows for the decentralization of the production of fine chemicals,” according to Professor Noël. “The mini-plant is based on the concept of photochemistry, using sunlight to directly ‘power’ the chemical synthesis. We employ a photocatalyst, a chemical species that drives the synthesis when illuminated,” Noël continues. “Normally powerful LEDs or other lighting equipment are used for the illumination, but we choose to use sunlight. For starters, this renders the synthesis fully sustainable. But it also enables stand-alone operation in remote locations. Our dream is to see our system used at a base on the Moon or on Mars, where self-sustaining systems are needed to provide energy, food and medicine. Our mini-plant could contribute to this in a fully autonomous, independent way.”
Development of the mini-plant started around five years ago when the Noël research group—at the time based at Eindhoven University of Technology—developed a “solar concentrator.” This is essentially a sheet of transparent plastic with micrometer-sized channels in which the chemical synthesis takes place. By adding dedicated dyes, the researchers developed the plastic into a solar guide and luminescent convertor. It captures sunlight and directs it towards the channels, while converting a substantial part of the light into red photons that drive the chemical conversion.
The research group had already demonstrated the solar flow reactor concept by synthesizing a range of medicinally relevant molecules, albeit on a laboratory scale in a controlled environment. Now, in their recent paper in ChemSusChem, they describe how they developed a viable, optimally effective autonomous photosynthesis system and employed it in field tests. They also provide an outlook on aspects such as application potential and economic performance.
The prototype solar flow reactor now covers an area of about 0.25 square meters. To make it fully autonomous, the researchers equipped it with a solar cell that provides the power for auxiliaries such as pumps and the control system. This solar cell is placed behind the flow reactor in a stacked configuration that ensures maximum efficiency per square centimeter, according to the authors. The more energetic wavelengths are used in the reactor to drive the photocatalyst. The remaining photons with wavelengths of 600-1100 nm are converted to electricity to drive the auxiliaries.
The researchers also compared the performance of the prototype system with production figures for the well-known photochemical synthesis of rose oxide. This product for the perfume industry is industrially produced by photochemical means because that route is cleaner and more efficient than traditional chemical synthesis. The researchers calculated that a surprisingly small surface area would be required for their system to meet current annual demand: just 150 m2 would suffice. The system cost would be similar to that of current commercial photosynthesis systems, and since only solar energy is needed, there are no energy expenditures. This could be a sustainable strategy for the future production of chemicals such as rose oxide or pharmaceuticals.
The authors demonstrate that there are opportunities for solar-driven chemical production in many settings, from hot to cold climates. What's more, the system lends itself to application in unexpected locations: it could even cover the facade of a building. Of course, the output would then be smaller than when the system is placed at an optimal angle to the sun. But it is certainly possible, and how cool would it be to have the walls make chemicals!
Tom M. Masson, Stefan D. A. Zondag, Koen P. L. Kuijpers, Dario Cambié, Michael G. Debije, Timothy Noël. Development of an off-grid solar-powered autonomous chemical mini-plant for producing fine chemicals. ChemSusChem, 2021; DOI: 10.1002/cssc.202102011
When we ask ourselves what our students should learn in our course, or what they should be able to do by the time it ends, our answers reflect our learning objectives for the course. These might go something like, "Students should be able to compare the causes of the Civil War in terms of their relative importance," or "Given a chemical modification to a nucleotide in a base pair, explain how it can lead to a mutation if left uncorrected." With these sorts of skills-based objectives in place, though, backward design prompts us further to consider the conditions under which students are most likely to meet these goals: How will students demonstrate their learning? How will we know that they know?
These questions are where assessment design becomes important. In order for students to demonstrate whether they've accomplished the objective we set, the paper they write or the exam they take has to evaluate that objective specifically. Both the form and content of the assessment can facilitate (or hinder) that evaluation. Asking students to provide dates of battles, names of generals, and pre-war compromises assesses recall of facts, but not analysis or argumentation. Similarly, a multiple choice format that asks students to pick the most important cause of the war out of a set of four fails to provide the opportunity for in-depth comparison. Our first sample objective from above, for example, is probably best assessed through an essay format, one that asks students to compare more and less important causes of the Civil War.
Once we've designed an assessment, our final task—which brings us around to the "front" of Backward Design—is to plan our teaching, which should enable students to complete the assessment successfully, thereby demonstrating their learning. Lecturing, primary and secondary readings, student presentations, and debates are all possible ways of structuring student learning towards the eventual assessment. And because we've designed our assessment based on a worthwhile objective, it now becomes appropriate—and not perfunctory—to "teach to the test." After all, we want students to meet (or exceed) the challenge we set for them, especially when their accomplishment is proof that they have learned exactly what we intended them to learn and learn to do from the very beginning—there's nothing "backward" about that. |
Database Design Courses
Frequently asked questions
A relational database is a type of database that stores data organized in structures called tables, and these tables are related to one another through defined relationships. Each table has columns and rows. Columns represent attributes and define a structure for the data; each row is a record of information stored in the table. The relationships between tables are defined by assigning certain columns as foreign and primary keys. For example, a vehicle table might contain the VIN number as a primary key, since that uniquely identifies each record of information about a car. There may be another table in the database called dealerships. The data can be structured so that each car belongs to some dealership: the relationship between these two tables is established by putting the dealership's key in the vehicles table, so that every vehicle has an associated dealership representing where it belongs.
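The dealership–vehicle relationship described above can be sketched using SQL via Python's built-in sqlite3 module. The table and column names here are illustrative, not taken from any particular schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # make SQLite enforce the relationship

# Each dealership row is identified by a primary key.
conn.execute("""
    CREATE TABLE dealerships (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )
""")

# Each vehicle is identified by its VIN (the primary key) and carries
# a foreign key pointing at the dealership it belongs to.
conn.execute("""
    CREATE TABLE vehicles (
        vin           TEXT PRIMARY KEY,
        model         TEXT NOT NULL,
        dealership_id INTEGER NOT NULL REFERENCES dealerships(id)
    )
""")

conn.execute("INSERT INTO dealerships VALUES (1, 'Downtown Motors')")
conn.execute("INSERT INTO vehicles VALUES ('1HGCM82633A004352', 'Sedan', 1)")

# Join the two tables through the foreign key to see which dealership
# each vehicle belongs to.
row = conn.execute("""
    SELECT v.vin, d.name
    FROM vehicles v JOIN dealerships d ON v.dealership_id = d.id
""").fetchone()
print(row)  # ('1HGCM82633A004352', 'Downtown Motors')
```

With the foreign-key pragma enabled, inserting a vehicle that references a nonexistent dealership would raise an integrity error, which is exactly the guarantee the defined relationship provides.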
What Is HIV and What Is The Process That a Person Has to Go for an HIV Test?
Human immunodeficiency virus (HIV) is a slowly-replicating retrovirus. It causes acquired immunodeficiency syndrome (AIDS), a condition in humans in which progressive failure of the immune system allows life-threatening opportunistic infections and cancers to thrive. AIDS is the late stage of HIV infection, when a person's immune system is severely damaged and has difficulty fighting diseases and certain cancers. HIV tests are used to detect the presence of the virus in serum, saliva or urine, and tell you whether you are infected with HIV. According to a rough estimate by the World Health Organisation (WHO), as of 2000, inadequate blood screening had resulted in 1 million new HIV infections worldwide.
1 in 4 new HIV infections occurs in youth aged between 13-24 years, according to CDC.
There are two types of HIV virus, the HIV-1 and the HIV-2. Unless noted otherwise, in the United States, the term “HIV” primarily refers to the HIV-1. HIV-2 infections are predominantly found in Africa.
Both HIV-1 and HIV-2 work by infecting and lowering the levels of CD4+ T cells, which are crucial in helping the body fight disease. When CD4+ T cell numbers decline below a critical level, the body becomes progressively more susceptible to opportunistic infections. HIV-1 and HIV-2 appear to package their RNA differently. Tests indicate that HIV-1 is better able to mutate; HIV-1 infection progresses to AIDS faster than HIV-2 infection and is responsible for the majority of global infections.
HIV is spread through blood and genital fluids, including pre-seminal fluids and semen or breast milk. One can become infected with HIV by engaging in unprotected sex or other types of sexual behavior with an HIV-positive person, or by sharing needles, syringes or other injection equipment with someone who is infected with HIV.
It generally takes a little while to get accurate results from an HIV test. This is because the blood tests that you take are not testing for the presence of HIV itself in your blood but are instead testing for the antibodies that your body creates in an attempt to fight the virus. Many HIV-positive people are unaware that they are infected with the virus.
The amount of time required for antibodies to show up on HIV tests is highly variable, as they can show up as early as two weeks or as late as six months. During that period, you can test HIV negative even though you are infected with the virus. You can still catch HIV from someone who is in the window period. Since donors are unaware of their infections, donated blood and blood products used in medicine are routinely checked for the HIV virus.
Both you and your partner should get tested for HIV and know your status before having sex for the first time. Pregnant women should be tested during each pregnancy. If the mother is infected with HIV, care should be taken to minimize the chance of passing the virus to the baby. Medicines available today, when taken properly during pregnancy, can significantly lower the risk of passing the HIV virus to the baby.
The most commonly used HIV test is a blood test. Blood will first be tested using the ELISA (enzyme-linked immunosorbent assay) test. If antibodies to HIV are present in the serum, they may bind to these HIV antigens.
ELISA results are reported as a number. If the ELISA test is positive, the results will be confirmed using the Western Blot test, which tests only for HIV antibodies. In the United States, such ELISA results are not reported as "positive" unless confirmed by a Western Blot test. Newer HIV tests can detect HIV antibodies in mouth fluids (not saliva), a scraping from inside the mouth, or urine. In 2012, the FDA approved the first "in-home" HIV test. It uses a mouth swab and shows results in 30-40 minutes. Any positive test result should be confirmed by a lab using the Western Blot.
After a negative six-month test, it is recommended that individuals get one final check six months later to confirm the results. If the results are still negative, it is almost certain that the person is not infected with HIV.
HIV is similar to other viruses, such as those that cause colds and flu, with one important difference: the human body cannot get rid of HIV. That means if you get HIV, you have it for life.
Right now we are at a critical moment in the fight against HIV/AIDS.
AIDS used to be a death sentence, but now more than 8 million people are on life-saving treatment. By 2015, with the scale up of treatment and prevention for HIV, we could see the beginning of the end of AIDS.
Source by Sounak K Ghosh |
ORIGIN STORY: Agricultural Development, c. 8,000 years before present (ybp)
“The Anthropocene actually began thousands of years ago as a result of the discovery of agriculture and subsequent technological innovations in the practice of farming.” William F. Ruddiman (2003, 2005, 2010)
This story explains the records of human-caused increases in greenhouse gases trapped in glacial ice. Beginning the story of the Anthropocene with the rise of agriculture, particularly the cultivation of rice, links environmental change with the most fundamental requirement of human life: our need to eat. This story makes human impact on the environment inevitable; it is difficult to imagine a solution to the problem of human impact in a return to a simpler time. This story may, however, foster self-consciousness about eating practices and associated technologies, including domestication of animals, plants, land, soil, and water. It asks us to consider the consequences of cultivation and domestication and to ask how we might use agricultural technologies in ways that actually benefit the land. |
The concept of a canary in a coal mine — a sensitive species that provides an alert to danger — originated with British miners, who carried actual canaries underground through the mid-1980s to detect the presence of deadly carbon monoxide gas. Today another bird, the Emperor Penguin, is providing a similar warning about the planetary effects of burning fossil fuels.
As a seabird ecologist, I develop mathematical models to understand and predict how seabirds respond to environmental change. My research integrates many areas of science, including the expertise of climatologists, to improve our ability to anticipate future ecological consequences of climate change.
Most recently, I worked with colleagues to combine what we know about the life history of Emperor Penguins with different potential climate scenarios outlined in the 2015 Paris Agreement, to combat climate change and adapt to its effects. We wanted to understand how climate change could affect this iconic species, whose unique life habits were documented in the award-winning film “March of the Penguins.”
Our newly published study found that if climate change continues at its current rate, Emperor Penguins could virtually disappear by the year 2100 due to loss of Antarctic sea ice. However, a more aggressive global climate policy can halt the penguins’ march to extinction.
Carbon Dioxide in Earth’s Atmosphere
As many scientific reports have shown, human activities are increasing carbon dioxide concentrations in Earth’s atmosphere, which is warming the planet. Today atmospheric CO2 levels stand at slightly over 410 parts per million, well above anything the planet has experienced in millions of years.
If this trend continues, scientists project that CO2 in the atmosphere could reach 950 parts per million by 2100. These conditions would produce a very different world from today’s.
Emperor Penguins are living indicators whose population trends can illustrate the consequences of these changes. Although they are found in Antarctica, far from human civilization, they live in such delicate balance with their rapidly changing environment that they have become modern-day canaries.
A Fate Tied to Sea Ice
I have spent almost 20 years studying Emperor Penguins’ unique adaptations to the harsh conditions of their sea ice home. Each year, the surface of the ocean around Antarctica freezes over in the winter and melts back in summer. Penguins use the ice as a home base for breeding, feeding and molting, arriving at their colony from ocean waters in March or April after sea ice has formed for the Southern Hemisphere’s winter season.
In mid-May the female lays a single egg. Throughout the winter, males keep the eggs warm while females make a long trek to open water to feed during the most unforgiving weather on Earth.
When female penguins return to their newly hatched chicks with food, the males have fasted for four months and lost almost half their weight. After the egg hatches, both parents take turns feeding and protecting their chick. In September, the adults leave their young so that they can both forage to meet their chick’s growing appetite. In December, everyone leaves the colony and returns to the ocean.
Throughout this annual cycle, the penguins rely on a sea ice “Goldilocks zone” of conditions to thrive. They need openings in the ice that provide access to the water so they can feed, but also a thick, stable platform of ice to raise their chicks.
Penguin Population Trends
For more than 60 years, scientists have extensively studied one Emperor Penguin colony in Antarctica, called Terre Adélie. This research has enabled us to understand how sea ice conditions affect the birds’ population dynamics. In the 1970s, for example, the population experienced a dramatic decline when several consecutive years of low sea ice cover caused widespread deaths among male penguins.
Over the past 10 years, my colleagues and I have combined what we know about these relationships between sea ice and fluctuations in penguin life histories to create a demographic model that allows us to understand how sea ice conditions affect the abundance of Emperor Penguins, and to project their numbers based on forecasts of future sea ice cover in Antarctica.
Once we confirmed that our model successfully reproduced past observed trends in Emperor Penguin populations around all Antarctica, we expanded our analysis into a species-level threat assessment.
Climate Conditions Determine Emperor Penguins’ Fate
When we used a climate model linked to our population model to project what is likely to happen to sea ice if greenhouse gas emissions continue on their present trend, we found that all 54 known Emperor Penguin colonies would be in decline by 2100, and 80% of them would be quasi-extinct. Accordingly, we estimate that the total number of Emperor Penguins will decline by 86% relative to its current size of roughly 250,000 if nations fail to reduce their carbon dioxide emissions.
However, if the global community acts to reduce greenhouse gas emissions and succeeds in stabilizing average global temperatures at 1.5 degrees Celsius (2.7 degrees Fahrenheit) above pre-industrial levels, we estimate that Emperor Penguin numbers would decline by 31% — still drastic, but viable.
Less-stringent cuts in greenhouse gas emissions, leading to a global temperature rise of 2°C, would result in a 44% decline.
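As a back-of-the-envelope illustration (the current population figure and percentage declines come from the estimates above; this script is not part of the published study), the projected 2100 populations under the three scenarios can be computed directly:

```python
# Projected global Emperor Penguin numbers in 2100 under the three
# emissions scenarios described in the text. The starting population
# (~250,000) and the fractional declines (86%, 31%, 44%) are taken
# from the article; the rest is simple arithmetic.

CURRENT_POPULATION = 250_000

# Scenario -> projected decline by 2100 (fraction of current population lost)
scenarios = {
    "business as usual": 0.86,
    "Paris target (1.5 C)": 0.31,
    "2 C warming": 0.44,
}

def project(population: int, decline: float) -> int:
    """Population remaining after the given fractional decline."""
    return round(population * (1 - decline))

for name, decline in scenarios.items():
    print(f"{name}: ~{project(CURRENT_POPULATION, decline):,} penguins remaining")
```

Under the business-as-usual scenario this leaves only about 35,000 birds, versus roughly 172,500 if the Paris target is met, which is the gap between quasi-extinction and a viable population.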
Our model indicates that these population declines will occur predominately in the first half of this century. Nonetheless, in a scenario in which the world meets the Paris climate targets, we project that the global Emperor Penguin population would nearly stabilize by 2100, and that viable refuges would remain available to support some colonies.
In a changing climate, individual penguins may move to new locations to find more suitable conditions. Our population model included complex dispersal processes to account for these movements. However, we find that these actions are not enough to offset climate-driven global population declines. In short, global climate policy has much more influence over the future of Emperor Penguins than the penguins’ ability to move to better habitat.
Our findings starkly illustrate the far-reaching implications of national climate policy decisions. Curbing carbon dioxide emissions has critical implications for Emperor Penguins and an untold number of other species for which science has yet to document such a plain-spoken warning. |
To study decomposition reactions.
What is a decomposition reaction?
Decomposition is a type of chemical reaction. It is defined as the reaction in which a single compound splits into two or more simple substances under suitable conditions. It is just the opposite of the combination reaction.
In a combination reaction, a substance is formed as a result of chemical combination, while in a decomposition reaction, the substance breaks into new substances.
For example: The digestion of food in our body is accompanied by a number of decomposition reactions. The major constituents of our food, such as carbohydrates, fats, proteins, etc., decompose to form a number of simpler substances. These substances further react, releasing large amounts of energy, which keeps our body working.
The general equation that describes a decomposition reaction is:

AB → A + B
Types of Decomposition Reactions
Decomposition reactions can be classified into three types:
- Thermal decomposition reaction
- Electrolytic decomposition reaction
- Photo decomposition reaction
Thermal decomposition is a chemical reaction in which a single substance breaks into two or more simpler substances when heated. The reaction is usually endothermic because heat is required to break the bonds present in the substance. For example, calcium carbonate decomposes on heating: CaCO3 → CaO + CO2.

Electrolytic decomposition is a chemical reaction in which a compound is broken down into simpler substances by passing an electric current through it, as in the electrolysis of water: 2H2O → 2H2 + O2.

Photo decomposition is a chemical reaction in which a substance is broken down into simpler substances by exposure to light (photons). For example, silver chloride darkens in sunlight as it decomposes: 2AgCl → 2Ag + Cl2.
Why are decomposition reactions mostly endothermic in nature?
Most decomposition reactions require energy either in the form of heat, light or electricity. Absorption of energy causes the breaking of the bonds present in the reacting substance which decomposes to give the product.
- Students understand the characteristics of a decomposition reaction & different types of such reactions.
- Students identify the compounds that may give a decomposition reaction.
- Students acquire skills to perform a decomposition reaction in the lab.
- Students will be able to distinguish a decomposition reaction from a given set of chemical reactions.
Let’s discuss the decomposition reaction of ferrous sulphate crystals by the action of heat. |
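Heating ferrous sulphate crystals gives ferric oxide, sulphur dioxide and sulphur trioxide: 2FeSO4 → Fe2O3 + SO2 + SO3. As a quick sanity check (a sketch for illustration, not part of the original lesson), we can verify programmatically that the atoms balance on both sides:

```python
from collections import Counter

def count_atoms(formula_units):
    """Sum atom counts over (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, atoms in formula_units:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

# 2 FeSO4 -> Fe2O3 + SO2 + SO3
reactants = [(2, {"Fe": 1, "S": 1, "O": 4})]   # ferrous sulphate
products = [
    (1, {"Fe": 2, "O": 3}),  # ferric oxide
    (1, {"S": 1, "O": 2}),   # sulphur dioxide
    (1, {"S": 1, "O": 3}),   # sulphur trioxide
]

assert count_atoms(reactants) == count_atoms(products)
print("Balanced:", dict(count_atoms(reactants)))
```

Both sides contain 2 Fe, 2 S and 8 O atoms, confirming the equation is balanced before we carry the reaction out in the lab.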
Physical Activity in Children
The American Academy of Family Physicians (AAFP) recognizes that regular physical activity is essential for healthy growth and development and recommends that all children and adolescents accumulate at least 60 minutes of moderate to vigorous aerobic physical activity every day.
Regular physical activity is correlated with numerous health benefits, including improved cardiovascular health and reduced risk of obesity. Additionally, regular physical activity has been shown to improve cognitive and academic performance and promote psychological well-being.
Family physicians can be leaders in promoting regular physical activity in their patients and communities by partnering with individuals, families, and schools. The AAFP recognizes that interventions must go beyond individual behavior-change strategies, and address the environmental factors that influence opportunities to engage in healthy activities. The AAFP encourages its members to become aware of the conditions in their communities that promote or hinder healthy activities and to partner with individuals, families, and schools to reduce barriers to engaging in physical activity. (2006) (2018 COD) |
Finnish researchers find explanation for sliding friction
Friction is a key phenomenon in applied physics, whose origin has been studied for centuries. Until now, it has been understood that mechanical wear-resistance and fluid lubrication affect friction, but the fundamental origin of sliding friction has been unknown.
Dr. Lasse Makkonen, Principal Scientist at VTT Technical Research Centre of Finland, has now presented an explanation for the origin of sliding friction between solid objects. According to his theory, the amount of friction depends on the surface energy of the materials in question.
Friction has a substantial effect on many everyday phenomena, such as energy consumption. Makkonen's model is the first to enable quantitative calculation of the friction coefficient of materials.
According to Makkonen's theory, the amount of friction is related to the material's surface energy. Friction originates in nanoscale contacts, as the result of new surface formation. The theory explains the generation of frictional force and frictional heating in dry contact. It can be applied in calculating the friction coefficient of various material combinations. The model also enables the manipulation of friction by selecting certain surface materials or materials used in lubrication layers, on the basis of the surface energy between them.
Makkonen's theory on sliding friction was published in the journal AIP Advances of the American Institute of Physics. The research was funded by the Academy of Finland and the Jenny and Antti Wihuri Foundation.
More information: A thermodynamic model of sliding friction: aipadvances.aip.org/resource/1 … dbi/v2/i1/p012179_s1
Journal information: AIP Advances
Provided by VTT Technical Research Centre of Finland |
Md. researchers study how flu is spread
Photo caption: Professor Don Milton, director of the Maryland Institute for Applied Environmental Health in the School of Public Health at the University of Maryland at College Park, at left, demonstrates the G2 machine, January 15, 2013. Research associate Jovan Pantelic, seated at right, shows how a volunteer with early flu symptoms would be seated with his face in the metal cone to capture exhaled droplets. These are collected by the machinery for analysis in the lab. (Amy Davis/Baltimore Sun/MCT)
The practices are based on the belief that the flu and other viruses pass from person to person through indirect or direct contact. Somebody coughs in another's face, or an infected person touches a doorknob that dozens of others then grab, and the disease spreads.
But what if the flu isn't transmitted by direct or indirect contact? What if flu virus particles linger in the air, and people can catch the disease just by breathing?
That is a possibility researchers at the University of Maryland's School of Public Health are examining in a study that looks at the transmission of the flu virus and how it might infect people.
"Do kids in school get the flu because they are not washing their hands, or is there not enough air circulating in the classroom?" asked Dr. Donald K. Milton, a University of Maryland professor leading the research.
By improving their understanding of how the flu is transmitted, doctors can come up with better ways to treat the disease and curb its spread, Milton said. If flu particles float in the air, hand washing may not be enough of a deterrent. Face masks or other methods may provide more protection, he said.
The research comes in the midst of the nation's worst flu outbreak in a decade, with federal health officials recently declaring it an epidemic. As of Jan. 12, more than 18,000 Marylanders had visited doctors' offices and emergency rooms with influenza-like symptoms. One child has died. The season started earlier than normal, and patients are showing symptoms much more severe than in a typical year.
There is not much doctors can do to treat the flu, which normally dissipates on its own. Anti-virals are given to high-risk patients. That's why public health officials focus on preventing spread of the disease instead.
The question about how the flu virus is spread is highly controversial, said Dr. Trish Perl, senior epidemiologist for Johns Hopkins Medicine, who is conducting a separate study on reducing the flu's spread through better-made face masks.
Preventive flu protocol is based largely on studies from the 1930s and '50s that found the virus is spread by direct and indirect contact, Perl said. The belief has been questioned in recent years, but not enough studies have been conducted to support any kind of change, Perl said.
The issue arose in the '70s, when an Alaska Airlines plane was grounded and a large number of the passengers caught the flu from an infected man sitting in the back. People argued that the lack of circulation and close quarters on the plane created an environment conducive to airborne travel of the virus. Others contended that passengers came in contact with the man when they used the restroom, which was near his seat, Perl said.
"As people have learned more about flu, there are more questions about whether we can make certain assumptions about the (virus)," said Perl, who added that more studies of the virus are needed
At a lab in College Park, the University of Maryland researchers are using a machine called the Gesundheit II to measure how much virus somebody who has the flu puts into the air. After a subject sticks their head inside a funnel, air is pulled in around the head and ends up in a collector, Milton said.
The collector accumulates every droplet, some as small as 50 nanometers, taken from a person's breathing, coughing and sneezing. Scientists then measure the amount of virus shed via droplet sprays, indicating indirect and direct contact, against the amount of tiny airborne particles that other people could inhale.
In the spring, the University of Maryland scientists will work on research in the United Kingdom that will put healthy people in a room with those infected with the flu. Half of those infected will get face shields, allowing researchers to see if that helps prevent spread of the disease.
Milton said understanding the flu's transmission could be especially important if there is ever a pandemic and no vaccine is immediately available to protect people. Stopping the spread would be crucial, he said.
"We probably can't stop flu with public health measures," Milton said. "But if we really understand it, we can slow it down so we can have time to make a vaccine work."
CCSS.Math.Content.3.OA.B.5: Apply properties of operations as strategies to multiply and divide.
Most math instruction for younger elementary students (K-2) is based around number sense. Students are given opportunities to compare and contrast numbers, add them up, subtract them, identify place values and solve basic word problems. In third grade, students are asked to apply this knowledge to explore and recognize patterns and relationships between addition, multiplication and division.
Inquiry and Student-Centered Learning
For this standard, the learning goal is for students to understand and apply the distributive, associative and commutative properties of multiplication. In short, students should be able to explain and demonstrate the relationships between multiplication, division and addition.
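The three properties the standard targets can each be shown with a single worked fact; the numbers below are just one illustrative choice built around 8 × 6 = 48:

```latex
% Commutative property: the order of factors does not change the product
8 \times 6 = 6 \times 8 = 48
% Associative property: the grouping of factors does not change the product
(2 \times 4) \times 6 = 2 \times (4 \times 6) = 48
% Distributive property: split one factor into a sum of easier pieces
8 \times 6 = 8 \times (5 + 1) = (8 \times 5) + (8 \times 1) = 40 + 8 = 48
```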
This standard lends itself to inquiry and student-centered learning, since exploration and pattern recognition can be easily done through discussion between peers. Divide students into groups (heterogeneous or random, no more than four to a group), and give each group a piece of chart paper. Hand an equation to each group (e.g., 8 × 6 = 48) and give them a challenge. Tell each group that they have five minutes to come up with ten different ways to get the answer (in the case of the example, 48) using addition, multiplication and division. Have each group record their equations on the chart paper.
Once time is up (and you can always provide more time if necessary), have students do a gallery walk to view other groups' chart papers. If you have tech available, have students use FastFig or Wolfram Alpha to check their classmates' work. Give each group a feedback paper so that visiting groups can leave corrections there.
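If devices or those sites are unavailable, a few lines of Python offer another quick way to check the gallery-walk answers. This is a minimal sketch; the expressions below are hypothetical student submissions, not ones taken from the lesson:

```python
# Check whether each student expression evaluates to the target value (48).
# The expressions listed here are hypothetical examples for illustration.
target = 48
expressions = ["8 * 6", "(2 * 4) * 6", "8 * (5 + 1)", "96 / 2", "40 + 9"]

for expr in expressions:
    # eval is acceptable for teacher-entered arithmetic; never use it on untrusted input
    value = eval(expr)
    if value == target:
        print(f"{expr} = {target} -- correct")
    else:
        print(f"{expr} = {value} -- check this one")
```

Running the loop flags any expression that misses the target, mirroring the feedback-paper step without requiring internet access.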
After everyone has seen everyone else's work, bring the class back together and discuss the activity. Ask things like "What did you notice?" Ask students to identify their favorite equation, or the most sophisticated one they saw, and have them explain why they chose it.
Alternatively, you could have each group choose a reporter to announce their equations and solicit verbal feedback from the class.
This lesson requires that students already have experience with group work and collaboration, and that they have seen this kind of lesson before. It would make sense to first have them practice the three properties of multiplication as a whole class and individually.
Make sure to give students the challenge of finding ten different equations, even if you are sure they will not be able to. This will keep them working more than if you just say "as many as you can." You never know -- you may be pleasantly surprised!
If you know that you have students who struggle with these concepts, it may also be a good idea to intentionally create groups that are as heterogeneous as possible. Make sure to circulate during the activity, listening in on student conversation, asking questions and ensuring that all students get an opportunity to give input and participate. For both special education and ELL students, you may want to prep them for the lesson with extra vocabulary support or by giving them strategies to participate in the group if they have trouble communicating with their peers. For instance, to minimize misunderstandings, have them write down an equation on paper and show it to the group while explaining.
Once students have shown mastery of these concepts, introduce them to a game such as Math 24 (which can also be played online at www.mathplayground.com/make_24.html), and run Math 24 Challenge tournaments in the classroom.
South American Indian Religions: History of Study
Systematic study of South American indigenous religions began with the arrival of the first Europeans. Almost immediately after landing in the New World, scholars, priests, scribes, and soldiers began describing and assimilating the Indians' peculiar and, to them, outlandish practices for their Old World sponsors and public. The confrontation between these early explorer-chroniclers and their indigenous subjects established the basis of a religious opposition between Christian reformer and "pagan" Indian; and it is no exaggeration to say that these early accounts set the stage for all later scholarly and scientific studies of the continent's diverse religious traditions.
All early accounts of religion were driven by the practical needs of empire. For the Spaniards, the political importance of understanding and analyzing native religious belief first arose through their encounters with the powerful Inca state of highland Peru. Chroniclers such as Juan de Betanzos (1551), Pedro Cieza de León (1553), and Cristóbal de Molina (1572) among others provided vivid accounts of imperial religion and Inca state mythologies. Two concerns tempered their descriptions and choice of subject matter: the spectacle of Inca rituals and the parallels they imagined to exist between their Christian millenarian and apostolic traditions and the natives' own beliefs in a "creator god" whose prophesied return coincided with—and thus facilitated—the initial Spanish conquests in Peru. Similar messianic beliefs among the Tupi-Guaraní of eastern Brazil attracted the attention of the explorers Hans Staden (1557) and Anthony Knivet (1591). Their writings provide fascinating accounts of Tupi religion as part of an argument intended to prove the presence of the Christian apostle Thomas in South America long before its sixteenth-century "discovery." Such early accounts inevitably strike the modern-day reader as ethnocentric. The tone of these writings is understandable, however, since their purpose was to make sense of the new cultures and peoples they met within the historical and conceptual framework provided by the Bible. Within this framework, there was only one "religion" and one true God. All other belief systems, including those encountered in the Americas, were judged as pagan. For some early theologians, the pagan practices of the South Americans placed them well outside the domain of the human. Others, however, believed the Americans were humans who had once known the true God and then somehow fallen from grace or were innocents with an intuitive knowledge of God.
Early accounts of religious practices were driven by this desire to uncover evidence of the Indians' prior evangelization or intuitive knowledge of God. Catholic writers thus often interpreted the indigenous practices they observed by comparing them to such familiar Catholic practices as confession. In what is perhaps the most sympathetic account of a native religion, the Calvinist Jean de Léry made sense of the religious practices of the Brazilian Tupinambá Indians by comparing their ritual cannibalism to the Catholic Communion, in which Christians partook of the body and blood of Christ. De Léry's account suggests the extent to which all early inquiries into South American religions were inevitably colored by the religious and political lines drawn within Europe itself by the Reformation.
For Iberians, however, it was the Reconquista or Liberation of Catholic Spain from Moorish rule that lent the study of religion an urgent, practical tone. If Indian souls were to be recruited to the ends of the "one true religion," it was necessary to isolate and eradicate those aspects of the indigenous religions that stood in the way of conversion. Priests had to be instructed, catechisms written, and punishments devised for specific religious offenses. The ensuing campaigns to extirpate idolatries produced the first true studies of religion in the Andean highlands. Combining knowledge of Christian doctrine and missionary zeal with an increasing practical familiarity with indigenous life, theologians and priests such as José de Acosta (1590), José de Arriaga (1621), Cristóbal de Albornóz (c. 1600), and Francisco de Ávila (1608) set out to define in a rigorous and scholarly way the parameters of indigenous religion.
A few indigenous and mestizo writers sought to vindicate their culture and religion from the attacks of these Catholic campaigners, in the process contributing greatly to the historical study of Andean religion. Among the most interesting of the indigenous chronicles is an eleven-hundred-page letter to the king of Spain written between 1584 and 1614 by Felipe Guamán Poma de Ayala, a native of Ayacucho, Peru, who had worked with the extirpation campaigns. Other native accounts include the chronicle of Juan de Santacruz Pachacuti Yamqui Salcamaygua (c. 1613) and the monumental History of the Incas (1609), written by the half-Inca Garcilaso de la Vega. These native writers defended the goals, but not the cruel methods, of Christian conversion, and they upheld many native beliefs and practices as more just and rational than the abuses of the Spanish colonizers.
Other chronicles record European reactions to religions of the Amazonian lowlands; these include among others the travel accounts of Claude d'Abbeville (1614) and Gaspar de Carvajal, a priest who accompanied the first exploratory voyage up the Amazon River system in 1542. But if what the Europeans understood by "religion"—that is, hierarchies, priests, images, and processions—fit in well with what they found in the Andean state systems, it differed markedly from the less-institutionalized religions of the tropical forest region. Accounts of lowland religions were accordingly couched in an exaggerated language stressing atrocity, paganism, and cannibalism. Such emphases had more to do with prevailing European mythologies than with the actual religious beliefs of tropical forest peoples.
This early literature on Andean religion provided irreplaceable data about ritual, dances, offerings, sacrifices, beliefs, and gods now no longer in force—including, in the case of Guamán Poma's letter, a sequence of drawings depicting indigenous costume and ritual and, in the chronicle of Francisco de Ávila, a complete mythology transcribed in Quechua, the native language. But these colonial writings also provided a powerful precedent for religious study thereafter. From the time of the extirpators on, religion was the salient element or institution by which indigenous peoples were judged in relation to their Christian or European conquerors. Religion, in short, became the principal index for defining the cultural and social differences separating two now adjacent populations. Such religious criteria helped shape as well the unfortunate stereotypes applied to Amazonian peoples and cultures.
Nineteenth-Century Travel and Expeditionary Literature
The interval between the seventeenth-century campaigns against idolatry and the early-nineteenth-century independence period was marked by an almost complete absence of religious studies. In Europe itself the accounts of Garcilaso de la Vega, de Léry, and others provided the raw materials from which eighteenth-century philosophers crafted their highly romanticized image of the American Indian. While Jean-Jacques Rousseau (1712–1778) and others looked to the Tupinambá as a model for the "noble savage," other French philosophers held up Inca religion as an example of what an enlightened monarchy and nonpapal deist religion could look like. Although far removed from South America itself, these writings continued to influence the study of South American religions for many future generations.
With their independence from Spain in the early nineteenth century, the new South American republics became once again available to the travelers, adventurers, natural historians, and scientists who could provide firsthand observations. Whereas earlier colonial observers had approached the study of religion through the political and theological lens of empire and conversion, these nineteenth-century travelers used the new languages of science and evolutionary progress to measure the Indians' status with respect to contemporary European cultural and historical achievements. While none of these travelogues and natural histories was intended as a study of indigenous religion per se, many of them include reports on religious custom. Among the most important of these are the travel accounts of Ephraim George Squier (1877), Charles Wiener (1880), Friedrich Hassaurek (1867), and James Orton (1876) for the Andean highlands and Johann Baptist von Spix and Carl von Martius (1824), Henri Coudreau (1880–1900), Alcides d'Orbigny (1854), and General Couto de Magalhães (1876) for the Amazonian lowlands. Such descriptions were augmented, especially in the Amazon, by detailed and often highly informative accounts of "pagan" practices written by missionary ethnographers such as José Cardus (1886) in Bolivia and W. H. Brett (1852) in British Guiana (now Guyana).
This nineteenth-century literature tended to romanticize the Indians and their religions through exaggerated accounts of practices such as head-hunting, cannibalism, blood sacrifice, and ritual drinking. In these "descriptions" of religion emphasis is placed on the exotic, wild, and uncivilized aspects of the Indians' religious practices—and on the narrator's bravery and fortitude in searching them out. Such romanticizing and exoticizing, however, tended to occur unevenly. Thus whereas religions of the Amazon Basin were subject to the most exotic and picturesque stereotypes of what a tropical primitive should be, the less-remote Andean Indians were described primarily in terms of their degeneration from the glories of a lost Inca religion that was considered to be more enlightened or "pure."
Early- to Mid-Twentieth-Century Studies
The twentieth century ushered in new forms of scientific inquiry and scholarly ideals. Departing from the narrative, subjective styles of the chroniclers, travelers, and natural historians, modern writers sought to describe indigenous religion independently of any personal, cultural, or historical biases about it; subjectivity was to be subordinated to a new ideal of relativism and objectivity. These writers fall into two general yet interrelated disciplinary fields: (1) the anthropologists and historians of religion, who use a comparative and typological framework to examine the universal, phenomenological bases of religious belief, and (2) the area specialists, or Americanists, who are interested in defining the specificity and social cultural evolution of religions in the Americas.
The first group included such early scholars of lowland religions as Paul Ehrenreich (1905), Max Schmidt (1905), and Adolf E. Jensen (who later founded the Frankfurt ethnographic school, home to such important modern scholars of South American religions as Otto Zerries and Karin Hissink). Their comparativist theories proved an impetus for the later field studies of Martin Gusinde (1931–1937) in Tierra del Fuego, William Farabee (1915–1922), and Günter Tessmann (1928–1930) in the Northwest Amazon, Konrad T. Preuss (1920–1930) in both highland and lowland Colombia, and Theodor Koch-Grünberg (1900–1930) in the Orinoco and in Northwest Brazil. These field-workers wrote detailed general accounts of lowland or Amazonian religions and placed special emphasis on the analysis of iconography, mythology, and animism.
Studies of highland religion during this early-twentieth-century period tended to focus almost exclusively on antiquities. The most important of these studies are the linguistic treatises of E. W. Middendorf (1890–1892) and J. J. von Tschudi (1891) and the archaeological surveys of Max Uhle and Alfons Stubel (1892). Both Incaic and contemporary Andean materials, however, were included in the broad surveys done by the scholars Adolf Bastian (1878–1889) and Gustav Brühl (1857–1887), who were interested in comparing the religions and languages of North, South, and Central America to establish a theory of cultural unity.
The Americanists' interdisciplinary studies of indigenous religion drew on the early twentieth-century German studies and on at least three other sources as well. The first was the fieldwork during the 1920s, 1930s, and 1940s by European ethnologists such as Alfred Métraux, Paul Rivet, and Herbert Baldus as well as by American anthropologists from the Smithsonian Institution's Bureau of Ethnology. Beyond describing the general social organization, religion, ritual, and mythologies of the Indians, these men were interested in classifying the cultures and religions they found by tracing their interrelationships and linguistic affiliations. In their writings therefore a detailed account of religion is often subordinated to an overriding interest in linguistic data and material culture. For example, detailed studies of shamanism were produced by the Scandinavian ethnographers Rafael Karsten, Henri Wassen, and Erland Nordenskiöld as part of a broader comparative examination of the material culture of South America. Of these early ethnographers, the German anthropologist Curt Nimuendajú stands out both for the extent of his fieldwork among the Ge, Boróro, Apinagé, Tucano, and Tupi tribes and for the degree to which his interests in describing these groups focused on their religious and ritual life. Other important sources on religious practices during this period are provided in the accounts of missionaries and priests, such as Bernardino de Nino (1912) in Bolivia, Gaspar de Pinelli (1924) in Colombia, and Antonio Colbacchini and Cesar Albisetti (1907–1942) in Brazil.
A second group that influenced early Americanist approaches to religion was composed of ethnohistorians and archaeologists. Often hailed as the first true Americanists to work in the Southern Hemisphere, the archaeologists left a distinctive imprint on South American studies by the nature of their specialty: the study of the pre-Spanish Andean past. Excavations, surveys, and analyses of previously unstudied sites in both coastal and highland Peru by Max Uhle and Adolph Bandelier were followed by the more detailed chronological studies of Alfred Kroeber, Junius Bird, Wendell Bennett, and John Rowe. Although the chronologies and site inventories constructed by these archaeologists did not focus on religion per se, the temple structures, burials, offerings, textiles, ceramics, and other ritual paraphernalia they unearthed provided new data on the importance of religion in pre-Columbian social organization and political evolution. Interpretation of this material was facilitated by the work of ethnohistorians such as Hermann Trimborn and Paul Kirchhoff. Their historical investigations of both highland and lowland religions contributed immeasurably to an overall working definition of South American religious systems and their relation to systems of social stratification, state rule, and ethnicity.
A third and final group that helped shape Americanist studies was composed of South American folklorists, indigenists, and anthropologists. In attempting to resurrect indigenous culture and religion, indigenista writers of the 1930s and 1940s differed from the foreign ethnologists of these formative Americanist years. Their work was motivated largely by an explicit desire to record South American lifeways and religions before such practices—and the people who practiced them—disappeared completely. The emphasis of the indigenista studies on the vitality of living religious systems also served as an important counter to the archaeologists' initial influence on Americanist thinking. The prodigious group of national writers influenced by indigenismo subsequently compiled a vast archive of oral traditions, "customs," and ritual practices. Notable among these folklorists and anthropologists are Antonio Paredes Candia and Enrique Oblitas Poblete of Bolivia, Roberto Lehmann-Nitsche of Argentina, Gregorio Hernández de Alba of Colombia, and Jose-María Arguedas, Jorge Lira, and Oscar Nuñez del Prado of Peru. Unique among them was the Peruvian archaeologist-anthropologist Julio C. Tello. One of the most creative archaeologists working in Peru, Tello was also the only one interested in exploring the relation of the religious data he unearthed to modern-day Quechua beliefs and practices. His ethnographic publications of the 1920s are landmarks in the study of Andean religion, and his archaeological investigations of the 1930s and 1940s extended knowledge of the Andean religious mind into a comparative framework interrelating highland and lowland cosmologies and religions.
The major work to appear out of the formative period of Americanist studies is the seven-volume Handbook of South American Indians edited by Julian H. Steward (1946–1959). Though somewhat outdated, the Handbook's articles, which cover aspects of prehistory, material culture, social organization, and ecology, still provide what is perhaps the most useful and accessible comparative source for beginning study of South American religions. Its interest for a history of religious studies, however, also lies in what it reveals about the biases informing Americanists' treatment of religion. These are (1) a preoccupation with relative historical or evolutionary classifications and the description of religious systems in terms of their similarity to, or degeneration from, a pre-Columbian standard, (2) a lowland-highland dichotomy informed by this evolutionary mode and according to which tropical forest religions are judged to be less "complex" than the pre-Hispanic prototypes formulated for the Andes by archaeologists and ethnohistorians, and (3) the comparative framework used by scholars who were more interested in discovering the cultural affinities and evolutionary links that connected different religious practices than they were in describing and analyzing the function and meaning of religious practices on a local level. The shortcomings of this dispersed and comparative focus are intimated by many of the Handbook's authors, who lament the inadequacy of their data on specific religious systems.
Functionalist and Functionalist-Influenced Studies
The next group of scholars to address religious issues set out specifically to remedy this situation by studying indigenous religion in its social context. The manner in which local religious systems were treated was, however, once again tempered by the theoretical orientations of their observers. Thus the first group of anthropologists to follow the Handbook's lead during the 1950s and early 1960s was influenced by the functionalist school of British anthropology. According to this theory, society is an organic whole whose various parts may be analyzed or explained in terms of their integrative function in maintaining the stability or equilibrium of a local group. Religion was considered to be a more or less passive reflection of the organic unity of a total social system. Examples of this approach are the monographs of William W. Stein (1961) on the Peruvian Andes, Allan R. Holmberg (1950) on the Siriono of lowland Bolivia, and Irving Goldman (1963) on the Cubeo of Brazil. In several cases more detailed monographs were written that focused specifically on the role of religion in indigenous social organization; these include works by Robert Murphy on the Brazilian Mundurucú, Segundo Bernal on the Paez of Colombia, David Maybury-Lewis on the Akwe-Xavante, and Louis C. Faron on the Mapuche, or Araucanians, of coastal Chile.
One variant of this functionalist approach brought out the role of religion as a means of achieving or maintaining balance between social and ecological systems. Prime examples of this approach are Gerardo Reichel-Dolmatoff's brilliant, Freudian-influenced treatments of mythology, shamanism, and cosmology among the Koghi Indians of Colombia's Sierra Nevada highlands and the Desána (Tucano) of the Northwest Amazon. Other studies of shamanism, cosmology, and hallucinogens have been carried out by the anthropologists Douglas Sharon in coastal Peru and Michael Harner in eastern Ecuador.
During the 1960s and 1970s scholars began to question the passively reflective, or "superstructural," role to which much of functionalist anthropology had relegated religion as well as the simplistic and ultimately evolutionist dichotomies between the Andean and tropical forest cultures. The major theoretical impetus for this new approach came from structuralism, which proposed to analyze the affinities connecting mythologies and ritual practices and the societies in which they occurred by referring all to a pervasive symbolic or cognitive structure based on dual oppositions and on diverse forms of hierarchical organization. The pioneering works of this tradition were Claude Lévi-Strauss's studies of social organization and mythology in the Amazon basin and his four-volume Mythologiques (1964–1971), which presented a system for analyzing mythic narratives as isolated variants of an organizational logic whose standardized structure he invoked to explain the commonality of all North and South American modes of religious expression and social organization.
The structuralist approach has been particularly important for the study of religion. For the first time a mode of thinking—evidenced by religion and mythology—was not only taken as the principal index of cultural identity but was also seen to influence and even partly to determine the organization of other spheres of social and economic life. In its renewed focus on religion, structuralism inspired myriad studies of lowland ritual and mythology, including those by Jean-Paul Dumont, Michel Perrin, Terence Turner, Jacques Lizot, Anthony Seeger, Stephen Hugh-Jones, and Christine Hugh-Jones. These structuralist studies of mythology and social organization were complemented—and often preceded—by collections of mythologies and descriptions of cosmologies (or "worldviews") by ethnographers such as Johannes Wilbert, Marc de Civrieux, Darcy Ribiero, Roberto DaMatta, Egon Schaden, Neils Fock, and Gerald Weiss. Though departing from the structuralists' methodologies, these anthropologists shared with the structuralists an interest in studying religion as an expression of social organization, society-nature classifications, and broad cultural identities.
In the Andes, where mythologies and religion were judged to be less pristine and less divorced from the ravages of historical, social, and economic change, Lévi-Strauss's theories generated interest in the study of social continuity through examination of structural forms. These studies of underlying structural continuity were based on extensive fieldwork by ethnographers and ethnohistorians such as Billie Jean Isbell, Juan Ossio, Henrique Urbano, Gary Urton, John Earls, and Alejandro Ortíz Rescaniere. These scholars have argued for the existence of a constant and culturally specific religious (as well as mythological and astronomical) structure by means of which indigenous groups have retained their cultural identity over time. Their studies of postconquest religious continuity drew on ethnohistorical models of Andean social organization, in particular R. Tom Zuidema's complex structural model of Inca social relations and ritual geographies and María Rostworowski de Diez Canseco's studies of pre-Hispanic coastal societies. Both of these ethnohistorians have emphasized the role of mythology, ritual, and religious ideology in the shaping of Andean economic and political history.
Structuralist methodology also motivated a new type of comparative study focusing on the similarities linking Andean and Amazonian religions. For example, Zuidema's structural model for Inca socioreligious organization pointed out the important similarities between this elaborate highland state system and the equally complex modes of ritual and social organization found among the Ge and Boróro Indians of Brazil. D. W. Lathrap's archaeological model for the evolution of South American social organization used similar comparative techniques to establish a common heritage of lowland and highland cosmologies. By combining this comparative insight with the historical dynamics of archaeology and ethnohistory and by assigning to religion a determinative role in the evolution of social systems, such models not only questioned but in many ways actually reversed the prevailing stereotypic dichotomy between "primitive" Amazon and "civilized" Andes.
Historical and Poststructuralist Views
In the final decades of the twentieth century anthropologists and other students of religion began increasingly to question the notions of unity, coherence, and continuity that had characterized much earlier work on indigenous religion. Structuralists had interpreted myth as the partial expression or transformation of mental structures that endured over time and ritual as the symbolic performance of the formal, structural principles that lent meaning to a particular culture's cosmology or worldview. Through such forms of analysis, structuralists emphasized the coherency and mobility of the structural principles expressed in the many different domains of social life. In so doing they also made important claims concerning the pervasive character of "religion" and the impossibility of drawing a definite boundary between religious and secular activities in indigenous societies.
Poststructuralist work has built on and expanded this methodological and theoretical claim that "religion" must be studied in many different and overlapping domains of social life. At the same time scholars working in the 1980s and 1990s used historical methodologies to question structuralism's claims regarding the coherency and stability of mental and symbolic structures. Because the study of indigenous societies often depended on the use of documentary sources written by Spaniards and other nonindigenous authors, history or ethnohistory has been a foundational methodology for many South Americanists. For example, Zuidema and other structuralists built their models of pristine Inca and Andean religious systems through the creative, critical use of Spanish chronicles and archives. The new historical work on religion by Tristan Platt, Thomas A. Abercrombie, Joanne Rappaport, and others has drawn on ethnohistorical methods in the search for an indigenous "voice" in the colonial archive. Unlike the earlier structuralists, however, their goal was not to reconstruct the elements of a precontact society but to understand the complex role played by religion in the political worlds formed through the interaction of indigenous and European societies.
In part because of their heavy debt to structuralist methodologies and perspectives, early historical anthropologies tended to approach religion as an inherently conservative domain of belief whose persistence in colonial times could be read as a form of resistance to colonial rule. Of particular importance in this respect were the studies of messianic movements as forms of religious conservatism coupled with situations of social resistance or even revolution. In the Andes such work was stimulated largely by ethnohistorical studies of colonial messianisms by the Peruvian anthropologists Juan Ossio, Franklin Pease, and Luis Millones. Other studies interpreted indigenous religious beliefs and practices as strategies for consolidating ethnic identities threatened by the encroachment of "modern" national societies. These include studies by Norman E. Whitten Jr. in Amazonian Ecuador, the mythology collections of Orlando Villas Boas and Claudio Villas Boas in the Brazilian Xingu River area, Miguel Chase-Sardi's studies of ethnicity and oral literatures in Paraguay, and the work of William Crocker and Cezar L. Melatti in the Brazilian Amazon.
Through its emphasis on contingency, political complexity, and intrigue, subsequent work has tended to complicate the category of resistance itself, along with the dual-society models that were often implied by the concept of resistance. Stefano Varese's groundbreaking work on the Peruvian Campa or Ashaninka, based on fieldwork conducted during the late 1960s and early 1970s, provides an early example of a political anthropology of religion that emphasized the political economic contexts in which messianic movements and indigenous political resistance took shape. Other examples include the work of anthropologists Robin M. Wright and Jonathan Hill on northern Amazonian religious movements and political organization; Xavier Albó, Platt, Olivia Harris, Abercrombie, and Roger Rasnake on the colonial origins and rationality of the sacred landscapes, social practices, and authority structures through which Aymara religious practices engage issues of politics and power; and Jean Jackson and Alcida Ramos on ethnic relations and indigenous politics in the Colombian and Brazilian Amazon. Although the concept of a religious syncretism between colonial (usually Catholic) and indigenous belief systems has long been a central issue in anthropological treatments of religion, these new historical studies move well beyond the notion of syncretism to paint a more complex picture of how individuals, groups, and political movements strategically manipulate and conceptualize the semantic and epistemic divides that ideally differentiate "native" and "colonial," Indian and mestizo, resistance and accommodation.
Ethnographers have also begun to question the models of culture and meaning through which early anthropologists once defended the unity of indigenous cultural systems and the interpretation of ritual and myth. Rather than looking for the inner "meaning" hidden within religious words and practices, these ethnographies build on poststructuralist models of language and practice to explore how meaning accrues to words and practices as they unfold in time. Though focused on different areas of social production, these ethnographies hold in common the idea that "religion" is best studied across different domains of social practice rather than as a discrete symbolic system that functions to give "meaning" to other domains of indigenous experience. Thus ethnographers such as Catherine J. Allen in the Peruvian Andes have examined etiquette and sociality as lived domains in which religious belief takes hold not as an extant symbolic system but as the moral and ethical perspective that is played out through the many small routines and interactions of daily life.
Studies of Andean spatial practices and aesthetics by Urton, Nathan Wachtel, and Rappaport among others emphasized how "religious" meanings are woven into such collective material practices as wall construction and territorial boundary maintenance. Other anthropologists, such as Greg Urban and Jackson, have looked at the linguistic practices through which myths are recounted and interpreted in local social life. Finally, Michael T. Taussig's important work on the Colombian Putumayo and modern Venezuela has explored shamanism as a lens on the working of power, fear, and memory in the shaping of Colombian modernity. Taussig's work has been particularly important in that it takes the claims of indigenous religious belief and historical narrative seriously as a force in the shaping of modern Latin America. Taussig thus succeeds in questioning the spurious distinction between magical and rational thought and with it the categories of myth and history that permeated so much earlier work on South American religion.
Taken together historical and poststructuralist approaches have had the singular effect of undermining the integrity and coherency of the very categories "religion" and "indigenous" that animated so much earlier anthropology in the region. For a majority of the anthropologists and historians working in South America, it is no longer possible to speak of indigenous communities, practices, identities, or beliefs without situating them in broader regional and national histories. As the notion of indigenous religion becomes unhinged from its original location in the pristine, or supposedly pristine, life of the "Indian community," it has become possible for scholars to think critically and historically about the place of different Christian belief systems in South American indigenous life. Anthropologists have begun to study the Protestant evangelical and Catholic charismatic sects that have become so prominent in many indigenous communities of South America. Wachtel, Antoinette Fioravanti-Molinié, and others have analyzed the persistence of indigenous religious beliefs regarding threatening ñakaqs, or spirits who extract body fat, in contexts of uncertainty and change, including among urban indigenous groups. Similarly the category of "popular Catholicism" that was first introduced by Liberation theologians in the aftermath of Vatican II has become a staple of anthropological writing about indigenous religion, allowing for a similar extension of the category of indigenous religion to encompass a broader array of ritual practices and beliefs that are more consonant with the actual experiences of modern indigenous people living in nation states.
An important inspiration for studies focused on subaltern or indigenous groups is the new work by historians such as Sabine MacCormack on the philosophical and theological origins of South American notions of idolatry, redemption, and the miracle and Kenneth Mills on the complex political and religious forces behind the sixteenth-century campaigns against indigenous "idolatry." Through such works it becomes possible to appreciate the long route that has been traversed from early scholarly obsessions with locating a pure indigenous religion to the more historically grounded scholarship in which religious practices are at once seen as fully, even paradigmatically modern, without for that reason ceasing to be any less "indigenous."
Bibliography

Allen, Catherine J. The Hold Life Has: Coca and Cultural Identity in an Andean Community. Washington, D.C., 1988. A sensitive ethnography of daily life in the Peruvian Andes, focused on the ritualized use of coca. It highlights the pervasive presence of the religious ideals and attachment to landscape that shape social interaction.
Duviols, Pierre. La lutte contre les religions autochtones dans le Pérou colonial: "L'extirpation de l'idolâtrie," entre 1532 et 1660. Lima, Peru, 1971. A historical study of the Catholic Church's campaign against Andean religions. It contains archival materials that describe religious practices of the time as well as an analysis of the Spaniards' motives for initiating the campaign.
Krickeberg, Walter, et al. Pre-Columbian American Religions. Translated by Stanley Davis. London, 1968. Contains survey articles by Hermann Trimborn and Otto Zerries. Informative for its breadth of material, it has a sample of the types of analyses used by historians of religion in the German tradition.
Lévi-Strauss, Claude. Mythologiques. 4 vols. Paris, 1964–1971. Translated into English by John Weightman and Doreen Weightman as Introduction to a Science of Mythology. 3 vols. New York, 1969. A collection and analysis of myths from the Western Hemisphere by the originator of structuralist method in anthropology. It is best read along with Lévi-Strauss's earlier works, Tristes Tropiques (New York, 1974) and Structural Anthropology, 2 vols. (New York, 1963).
MacCormack, Sabine. Religion in the Andes: Vision and Imagination in Early Colonial Peru. Princeton, N.J., 1991.
Métraux, Alfred. Religions et magies indiennes d'Amérique du Sud: Édition posthume établie par Simone Dreyfus. Paris, 1967. Métraux was one of the founding figures of Americanist studies. This collection of his articles covers nearly all the areas in which he did fieldwork, including Peru (Quechua), Bolivia (Uro-Chipaya and Aymara), the Argentinian Chaco (Guaraní), Chile (Mapuche), and Brazil (Tupi).
Mills, Kenneth. Idolatry and Its Enemies: Colonial Andean Religion and Extirpation, 1640–1750. Princeton, N.J., 1997.
Nimuendajú, Curt. The Eastern Timbira. Translated and edited by Robert H. Lowie. Berkeley, Calif., 1946. One of several detailed descriptive monographs of lowland social organization and religion produced by Nimuendajú, a German field-worker who lived most of his life among the indigenous peoples of south-central Brazil and who adopted an indigenous surname.
Reichel-Dolmatoff, Gerardo. Amazonian Cosmos: The Sexual and Religious Symbolism of the Tukano Indians. Chicago, 1971. A Freudian and ecological analysis of the lowland cosmology (Tucano or Desána of the Vaupés River, Colombia) by one of Colombia's leading anthropologists. His other books, Los Kogi: Una tribu de la Sierra Nevada de Santa Marta, Colombia, 2 vols. (Bogotá, Colombia, 1950–1951), and The Shaman and the Jaguar: A Study of Narcotic Drugs among the Indians of Colombia (Philadelphia, 1975) are also considered classics in South American religious studies.
Steward, Julian H., ed. The Handbook of South American Indians. 7 vols. Washington, D.C., 1946–1959. A compilation of articles by archaeologists, historians, and anthropologists that provides the best overall introduction to the variety of religious forms in South America as well as to the theoretical approaches that had, up until the time of the Handbook 's publication, informed their study. Its seven volumes are divided by geographic area, with two volumes devoted to comparative studies.
Sullivan, Lawrence E. Icanchu's Drum: An Orientation to Meaning in South American Religions. New York, 1988. A wide-reaching survey of the religions of South America from the perspective of a historian of religions. It contains an unprecedentedly thorough bibliography.
Taussig, Michael T. Shamanism, Colonialism, and the Wild Man: A Study in Terror and Healing. Chicago, 1986. An exploration of shamanism and religious healing in the Colombian Putumayo region in the context of regional histories and experiences of violence. Offers compelling evidence of the power and presence of indigenous religious beliefs and images in the Colombian national imagination.
Tello, Julio C., with Prospero Miranda. "Wallallo: Ceremonias gentílicas realizadas en la región cisandina del Perú central." Inca 1, no. 2 (1923): 475–549. Written by the father of Peruvian archaeology and published in the anthropological journal he edited, this article gives detailed descriptions of indigenous ritual practices in the central highlands of Peru, comparing them with the pre-Columbian religion.
Wilbert, Johannes, and Karin Simoneau, eds. Folk Literature of South American Indians. 7 vols. Los Angeles, 1970–. A continuing series containing compilations of myths from the Boróro, Warao, Selk'nam, Yámana, Ge, Mataco, and Toba Indians. It contains materials from the classic, early ethnographies of these groups as well as from more recent anthropological studies. It is annotated by Wilbert, who has also published extensively on the mythologies and cosmologies of indigenous groups in the Orinoco.
Wright, Robin M. Cosmos, Self, and History in Baniwa Religion: For Those Unborn. Austin, Tex., 1998. An excellent example of new historical work on indigenous religion, including discussions of shamanism and its relation to mythic and historic consciousness and the Baniwas' conversion to Protestantism.
Deborah A. Poole (1987 and 2005) |
Feel free to print materials for your classroom or distribute them to parents for home use. The second grade spelling program below spans 36 weeks and includes a master spelling list and five different printable spelling activities per week to help reinforce learning. To take full advantage of the program, consider using the spelling program together with the companion 2nd grade reading comprehension worksheets; this helps ensure that students are making the connection between the spelling words and how they are used in context. It also allows you to check and correct problems with language conventions such as capitalization and punctuation. Follow our tips for learning to spell these words correctly.
We also offer PDF printable math vocabulary worksheets for children from pre-school through 7th grade, with lessons and activities for classroom use and home schooling. These worksheets cover most math vocabulary subtopics and were conceived in line with the Common Core State Standards, which are designed to prepare all students for success in college and beyond. Most worksheets have an answer key attached on the second page for reference. Look through the links and simply click to print any worksheets you are interested in. Our objective is to continue to keep this site free for use in education by teachers, students and parents.
Just like other subjects, mathematics has its own vocabulary. At the lower grade levels, such as kindergarten, math vocabulary ought to be presented in simple language that is easy to understand, and the respective worksheets need vocabulary that is consistent and suitable for children at the various grade levels. Since we have a large ESOL population, this matters all the more. In allowing students to read, write and play with words, our activities are effective in vocabulary building and retention.
We now provide Premium Members with a wide variety of student and class data reporting options, and your students can even help create the vocabulary assignments. One popular activity asks students to do two things: choose the correct spelling of some tricky words AND choose the correct usage of those words.
WHAT IS A CRINOID?
Crinoids, sometimes called "sea lilies", are marine animals characterized by an
exoskeleton of calcite plates, jointed arms that radiate from the body, and usually by
a stem that attaches the animal to a substrate, most often the sea floor. Crinoids are
echinoderms and are closely related to sea urchins and starfish. These "spiny
skinned" organisms first appeared during Early Ordovician times and are still living
today. During the Paleozoic Era, they became very abundant. In fact, the Mississippian
Period has long been known as the "Age of Crinoids". Vast colonies of crinoids lived
in shallow seas during this time, and their remains built up beds of limestone hundreds
of feet thick.
Crinoids, like other echinoderms, typically exhibit pentameral (five-fold) symmetry.
The structure of calcitic plates that encloses the soft parts is called the calyx. The bottom of the
calyx is termed the cup, while the top of the calyx is named the tegmen. The calyx
with the arms intact is called the crown. These arms are appendages of the calyx and
are generally freely moveable. They are responsible for trapping particles of food
and transporting these particles to the mouth by means of a "food groove", located on
the inner portion of the arm. The arms of crinoids are quite varied. The most primitive
crinoids had only five arms. In many advanced forms, the arms branched numerous times,
creating a more effective food gathering mechanism.
Once the food has been moved to the mouth by the arms, the crinoid digests these
nutrients. Somewhere on the calyx, there is an anal opening to release excrement -
sometimes it is a small hole, sometimes it is a long tube. Occasionally, animals such
as gastropods and starfish feed at the end of these tubes. The food is not very
nutritious, but it is a steady source of nourishment. This symbiotic relationship has existed
for millennia - gastropods and starfish have both been fossilized feeding from a
crinoid's anal tube.
The stem of the crinoid consists of circular, elliptical, or pentagonal plates that
are stacked on top of one another to form a column. Articulating surfaces of the
individual plates may have permitted some movement of the stem. The stem is perforated
by an axial canal which may have transported dissolved lime salts which could be
turned into calcite to aid in the growth of the crinoid. The base of the stem may
have "root-like" branches, an expanded form of attachment or anchoring device, or
it may taper to a point.
Most fossil crinoids had long, jointed stalks; present-day crinoids, however, are much
more diverse. Many lack stems and have become more mobile - drifting through the ocean
currents. Modern crinoids occupy many ecological niches in today's seas. They have
been found in shallow water near shoals and reefs and have also been located at depths
of more than 13,000 feet. There are more than 6,000 species of fossil crinoids, but
there are only 25 stalked genera and 90 free-floating types in today's seas.
Crinoids are classified mainly according to the plate structure of the cup and the make-up
of the arms. Because "complete" crinoids are rarely found as fossils, this classification
is an ongoing process that will, undoubtedly, be revised as time goes by. But it is easy
to see that crinoids were one of the most diverse animals that lived in the ancient seas. |
When plants (trees & shrubs) are cleared from a site, soil is exposed to sunlight and the eroding effects of wind and water. Soil aeration is increased and the rate of weathering increases.
Apart from erosion, the proportion of organic matter in the soil gradually decreases, through the action of microbes in the soil which use it as a source of energy ‑ unless the new land use provides some replacement.
TYPES OF SOIL DEGRADATION
A number of major soil-related problems occur in Australia. These include:
- Loss of soil fertility (see lesson on nutrition)
- Soil compaction
- Soil acidification
- Build up of dangerous chemicals
No Dig Garden.
If an area is badly contaminated, your only way of growing plants might be to cover the ground with a black plastic sheet, and create a no dig garden.
Soil Erosion

Soil erosion, which is the movement of soil particles from one place to another by wind or water, is considered to be a major environmental problem. Erosion has been going on through most of earth's history and has produced river valleys and shaped hills and mountains. Such erosion is generally slow, but the action of man has caused a rapid increase in the rate at which soil is eroded (ie. a rate faster than natural weathering of bedrock can produce new soil). This has resulted in a loss of productive soil from crop and grazing land, as well as layers of infertile soils being deposited on formerly fertile crop lands; the formation of gullies; siltation of lakes and streams; and land slips. Man has the capacity for major destruction of our landscape and soil resources. Hopefully he also has the ability to prevent and overcome these problems.
Causes of Human Erosion
* Poor agricultural practices, such as ploughing soil too poor to support cultivated plants or ploughing soil in areas where rainfall is insufficient to support continuous plant growth.
* Exposing soil on slopes.
* Removal of forest vegetation.
* Altering the characteristics of streams, causing bank erosion.
* Causing increased peak water discharges (increased erosion power) due to changes in hydrological regimes, by such means as altering the efficiency of channels (channel straightening); reducing evapotranspiration losses as a consequence of vegetation removal; and by the production of impervious surfaces such as roads and footpaths, preventing infiltration into the soil and causing increased runoff into streams.
Water Erosion

With water erosion, soil particles are detached either by splash erosion (caused by raindrops), or by the effect of running water. Several types of water erosion are common in our landscapes. These are:
1. Sheet erosion ‑ where a fairly uniform layer of soil is removed over an entire surface area. This is caused by splash from raindrops, with the loosened soil generally transported in rills and gullies.
2. Rill erosion ‑ this occurs where water runs in very small channels over the soil surface, with the abrading effect of transported soil particles causing deeper incision of the channels into the surface. Losses consist mainly of surface soil.
3. Gully erosion ‑ This occurs when rills flow together to make larger streams. They tend to become deeper with successive flows of water and can become major obstacles to cultivation. Gullies only stabilize when their bottoms become level with their outlets.
4. Bank erosion ‑ this is caused by water cutting into the banks of streams and rivers. It can be very serious at times of large floods and cause major destruction to property.
Wind Erosion

The force of wind becomes strong enough to cause erosion when it reaches what is known as the 'critical level', the point at which it can impart enough kinetic energy to cause soil particles to move. Particles first start rolling along the surface. Once they have rolled a short distance they often begin to bounce into the air, where wind movement is faster. The effect of gravity causes these particles to fall back down to the surface where they either bounce again or collide with other particles. This process is known as 'saltation'.
Two other modes of wind-borne particle movement occur. The first is 'free flight', which occurs where very small particles are entrained in air, which acts as a fluid, and are carried long distances. The other is called 'surface creep', where soil particles too large to bounce are rolled downwind.
Control of Erosion
As erosion is caused by the effects of wind and water, then control methods are generally aimed at modifying these effects. Some of the most common control methods are listed below.
- Prevention of soil detachment by the use of cover materials such as plants (ie. trees, mulches, stubbles, crops).
- Crop production techniques (e.g. fertilizing), to promote plant growth and hence surface cover.
- Ploughing to destroy rills and contour planting to create small dams across a field, to retard or impound water flow.
- Filling small gullies with mechanical equipment or conversion into a protected or grassed waterway.
- Terracing of slopes to reduce rates of runoff.
- Prevention of erosion in the first place by careful selection of land use practices.
- Conservation tillage methods.
- Armoring of channels with rocks, tyres, concrete, timber, etc., to prevent bank erosion.
- The use of wind breaks to modify wind action.
- Ploughing into clod sizes too big to be eroded, or ploughing into ridges.
Salinity

High salt levels in soils reduce the ability of plants to grow or even to survive. This can be caused by natural processes, but much occurs as a consequence of human action. Salinity has been described as the 'AIDS of the earth' and its influence is spreading throughout society; particularly in rural communities, where crop production has been seriously affected, causing economic hardship. Salinity problems have been grouped into two main types.
Dry land salinity is that caused by the discharge of saline groundwater, where it intersects the surface topography. This often occurs at the base of hills or in depressions within the hills or mountains themselves. The large scale clearing of forests since European settlement has seen increased 'recharge' of aquifers (where groundwater gathers in the ground) due to reduced evapotranspiration back to the atmosphere. The result has been a rise in groundwater levels, causing greater discharges to the surface.
Wetland salinity occurs where irrigation practices have caused a rise in watertables, bringing saline groundwater within reach of plant roots. This is common on lower slopes and plains and is particularly common on riverine plains. The wetland salinity problem is exacerbated by rises in groundwater flow due to dryland salinisation processes higher in the catchment.
Sources of Salt
Salts are a naturally occurring by product of the weathering of bedrock and soil materials. Salts can be accumulated in a number of ways, which may have varying importance from area to area. These include:
1. Cyclical movement
This is salt carried in evaporating ocean water that is later precipitated in rain.
2. Marine incursions
At various times in the geological past, large areas of the land were under sea level. Salt deposits may be remnants of these incursions.
3. In Situ weathering
The natural weathering of bedrock and soil resulting in the movement of salts through a soil profile.
4. Aeolian deposits
Much of the salt found in the eastern part of Australia (for example) is believed to be material picked up and transported by wind from salt pans, playa lakes, etc., in times of arid weather during the past, when saline groundwater evaporated leaving salt deposits.
Control Methods for Salinity
Many of the control methods for salinity are very expensive and require strong commitment by governments if they are to be undertaken. They also require regional community co‑operation, as such problems don't respect artificial boundaries. One of the major problems with salinity is that the area in which it occurs may be a fair distance from the cause. Thus we have saline groundwater discharging on the plains as a consequence of forest clearing high in adjacent hills ‑ where salinity is not apparent. Many hill farmers are loath to change their practices for the sake of someone far away, especially if they must suffer some economic loss as a result (eg. the cost of tree planting and the loss of cropping area).
Some of the main control methods are:
- Pumping to lower groundwater levels, with the groundwater being pumped to evaporation basins or drainage systems.
- Careful irrigation practices to prevent or reduce a rise in groundwater levels.
- 'Laser' grading to remove depressions and best utilize water on crop and grazing land.
- Use of saline resistant plant species.
- Revegetation of 'recharge' areas and discharge sites.
- Engineering methods designed to remove saline water from crop land.
- Leaching suitable soils (e.g. bowling greens, raised crop beds, etc.)
Soil Acidification

This is a problem becoming increasingly common in cultivated soils. Soil acidification is the increase in the ratio of hydrogen ions in comparison to 'basic' ions within the soil. This ratio is expressed as pH, on a scale of 0 ‑ 14, with 7 being neutral. The pH of a soil can have major effects on plant growth, as various nutrients become unavailable for plant use at different pH levels (see lesson on nutrition). Most plants prefer a slightly acid soil; however, an increase in soil acidity to the levels being found in many areas of cultivated land in Australia renders that land unsuitable for many crops or requires extensive amelioration to be undertaken.
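The pH scale is logarithmic: each one-unit drop in pH represents a tenfold increase in hydrogen ion concentration. As a quick illustration (a minimal sketch, not from the original text; the function name is our own):

```python
import math

def ph_from_h_concentration(h_moles_per_litre: float) -> float:
    """pH is the negative base-10 logarithm of the hydrogen ion concentration."""
    return -math.log10(h_moles_per_litre)

# Neutral water has [H+] = 1e-7 mol/L, giving pH 7
print(ph_from_h_concentration(1e-7))  # 7.0
# A tenfold increase in hydrogen ions drops the pH by exactly one unit
print(ph_from_h_concentration(1e-6))  # 6.0
```

This is why even a small fall in measured soil pH (say from 6 to 5) reflects a large chemical change, and why amelioration with lime can take substantial quantities of material.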
Causes of Soil Acidification
Acid soils can be naturally occurring, however, a number of agricultural practices have expanded the areas of such soils. The main causal factor is the growth of plants that use large amounts of basic ions (e.g. legumes); particularly when fertilizers that leave acidic residues (such as Superphosphate) are used. Soil acidity is generally controlled by the addition of lime to the soil, by carefully selection of fertilizer types and sometimes by changing crop types.
Soil Compaction

Compaction of soils causes a reduction in soil pore space. This reduces the rate at which water can infiltrate and drain through the soil. It also reduces the available space for oxygen in the plant root zones. For this reason, some of the major consequences of compaction are poor drainage, poor aeration, and hard pan surfaces which cause runoff. Compaction is generally caused by human use of the soil (ie. foot traffic on lawn areas or repeated passage of machinery in crop areas). Repeated cultivation of some soils leads to a breakdown of soil structure and this also increases the likelihood of compaction. Compaction can be prevented by farming practices that minimize cultivation and the passage of machinery. These include conservation tilling, selection of crops that require reduced cultivation, and use of machinery at times less likely to cause compaction (ie. when soils aren't too wet or when some protective covering vegetation may be present). For heavily compacted soils, deep ripping may be necessary.
Chemical Residues

Although not as large a problem as some of the other types of soil degradation, the presence of chemical residues can be quite a problem on a local scale. These residues derive almost entirely from long term accumulation after repeated use of pesticides, etc., or from use of pesticides or other chemicals with long residual effects. Some problems that result from chemical residues include toxic effects on crop species and contamination of workers, livestock and adjacent streams. Control is often difficult and may involve allowing contaminated areas to lie fallow; leaching affected areas; trying to deactivate or neutralise the chemicals; removing the contaminated soil; or selecting tolerant crops.
IMPROVING DAMAGED SOILS
Before deciding how to improve a soil, or even whether to, you need to know whether that soil is good, bad or somewhere in between.
Drainage can be tested easily by observing the way water moves through soil placed in a pot and watered. However, when soil is disturbed by digging, its characteristics may change. A more reliable method is to use an empty tin can. With both the top and bottom removed, it forms a parallel-sided tube which can be pushed into the soil to remove a relatively undisturbed sample. Leave a little room at the top to hold water, add some water to see how it drains, then saturate the soil and add more water to the top. You will often note slower drainage in saturated soil.
Soil nutrition is (to some extent) indicated by the vigor of plants growing in a soil.
Soil structure usually changes as you move from the surface deeper into the earth. One reason for this is that surface soil usually contains more organic matter than deeper soil. Surface layers frequently drain better, with the drainage rate decreasing as you go deeper. This natural gradation means that water moves quickly away from the surface of the soil but slows its rate of flow at depth. Bad cultivation procedures can destroy the structure at the surface, damaging this gradation through the soil profile and seriously impairing drainage.
By contrast, good cultivation procedures will improve soil structure and increase the depth in the profile to which structured soil extends.
The improvement of soil structure may follow two approaches. First, where the soil has not been badly leached: the addition of organic material, use of crop rotations (with legume cover crops to fix nitrogen) and proper (not excessive) cultivation. This will normally give the best long-term results. However, where soils have been leached and have become very acid or very alkaline, soil ameliorants such as lime and gypsum may be required. These act not only to adjust soil pH but also to replace sodium ions in the soil with others (principally calcium and magnesium), which help 'flocculate' the clay into larger particles and so produce some initial structure that allows the soil to drain better and be worked as above.
There are several ways to improve soils, and these include:
- Adding sand to clay soils to improve drainage.
- Adding clay or organic material to sandy soil to improve its ability to hold water.
- Adding organic matter to a sand, while improving water holding capacity, will not affect drainage to the same degree as the addition of clay will.
- Adding sand or organic matter will help break up a clay soil, making cultivation easier, although the two act in different ways.
- Adding organic matter will usually improve the nutritional status of any soil.
- Use of soil ameliorants ‑ Lime, Gypsum, Sulphates.
- Crop rotations and correct cultivation.
Want to Know More?
Consider doing a course or buying a reference book from our school.
If you would like to communicate with one of our professional tutors in land management, consider using our free course counselling service.
Facts & Prevalence
What is Non-24-Hour Sleep Wake Disorder?
Non-24-Hour Sleep Wake Disorder (Non-24) is a disorder that affects the normal 24-hour synchronization of circadian rhythms. One subtype designation according to the International Classification of Sleep Disorders is Circadian Rhythm Sleep Disorder - Free Running Type. It has also been called Hypernychthemeral Syndrome.
The National Sleep Foundation offers a number of resources to help patients who are currently suffering from or think that they may have Non-24. Explore the sections below for more information:
- Non-24 Symptoms & Diagnosis
- Non-24 Treatment & Care
- Living With & Managing Non-24
- Support & Resources for People with Non-24
All animals and plants have an endogenously generated near-24-hour 'circadian' clock rhythm (that is, an "internal body clock") that regulates the approximate 24-hour cycle of biological processes, such as sleep or hormone production. These rhythms have a period close to, but not exactly, 24 hours, and are synchronized daily by environmental time cues. In mammals, circadian rhythms are generated by the suprachiasmatic nuclei (SCN) in a structure of the brain known as the hypothalamus, with the day-night cycle as the primary environmental time cue that synchronizes the circadian system to the 24-hour day. Many people are unaware that their ability to sleep at night and be awake during the day is largely controlled by this internal body clock, and that light is the primary cue that helps reset it.
People with Non-24 have circadian rhythms that are not synchronized with the 24-hour day-night cycle, either through a failure of light to reach the SCN, as in total blindness, or for various other reasons in sighted people. Most people have internal body clocks with periods slightly longer than 24 hours; daily environmental cues, such as light, reset their circadian rhythms to the 24-hour day-night cycle. Without that resetting, someone on a 24.5-hour clock would sleep 30 minutes later on the first day, one hour later on the second day, and so on. For someone with a longer circadian period (e.g., a 25-hour clock rather than a 24.5-hour clock), sleep disturbance and departure from the 24-hour light-dark cycle surface much more quickly. Consequently, sleeping at night becomes more difficult and the drive to sleep during the day increases. Eventually, the person’s sleep-wake cycle realigns with the 24-hour light-dark cycle and they are able to enjoy a conventional sleep period once again. However, this period of good sleep is only temporary, as the sleep cycle continues to shift later.
The time it takes to complete one full circadian cycle depends on the period of the internal clock. An individual with a 24.5-hour clock will take 49 calendar days to complete a full circadian cycle and might have 4-5 weeks of poor sleep per cycle, whereas someone with a 24.1-hour clock will take 241 days to cycle and might have 4-5 months of poor sleep each cycle.
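The cycle lengths quoted above can be checked with a few lines of Python. The formula below (calendar days per cycle = clock period divided by the hours the clock slips each day) is our own illustration of the arithmetic, not taken from the source:

```python
def beat_cycle_days(period_hours: float) -> float:
    """Calendar days for a free-running internal clock to drift a
    full 24 hours and realign with the day-night cycle once."""
    drift_per_day = period_hours - 24.0  # hours the clock slips per day
    return period_hours / drift_per_day

# A 24.5-hour clock cycles in 49 days; a 24.1-hour clock in 241 days.
print(round(beat_cycle_days(24.5)))  # 49
print(round(beat_cycle_days(24.1)))  # 241
```

Both results agree with the figures in the text: the smaller the daily drift, the longer each cycle, and the longer each stretch of poor sleep.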
The majority of blind people have some light perception and circadian rhythms that are synchronized to the 24-hour day-night cycle, as in the sighted. In a totally blind individual with Non-24, the visual disorder or lack of eyes prevents the light-dark cycle from synchronizing the internal body clock to the 24-hour day-night cycle. Often the sleep disturbance is less clear, with more subtle changes in the timing of sleep, or sleep may even look normal, even though the circadian clock is still not synchronized with the 24-hour cycle. Only an assessment of a strong biochemical circadian rhythm, such as the melatonin or cortisol rhythm, can confirm whether a non-24-hour rhythm is present.
Am I alone in having Non-24-Hour Sleep Wake Disorder?
Non-24 is most common in totally blind people, due to the lack of light information received from the eyes, which normally regulates the 24-hour day-night cycle. As a result, the internal body clock reverts to its non-24-hour period, causing fluctuating periods of good sleep followed by periods of poor sleep and excessive daytime sleepiness. It has been estimated that of the 1.3 million blind people in the United States, 10% have no light perception at all. Reports suggest that as many as half to three-quarters of totally blind patients have Non-24, representing approximately 65 to 95 thousand Americans. The first signs of the disorder can occur at any age and usually appear shortly after complete loss of light perception or surgical removal of the eyes.
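As a quick back-of-the-envelope check on those prevalence figures (our own arithmetic, using the estimates quoted above):

```python
blind_in_us = 1_300_000
totally_blind = 0.10 * blind_in_us       # ~130,000 with no light perception
low_estimate = 0.50 * totally_blind      # half have Non-24
high_estimate = 0.75 * totally_blind     # three-quarters have Non-24
print(int(low_estimate), int(high_estimate))  # 65000 97500
```

This gives 65,000 to 97,500 people, consistent with the roughly 65 to 95 thousand Americans cited above.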
Also called herpes zoster
Shingles: This disease often causes a painful, blistering rash.
Anyone who has had chickenpox can get shingles. After the chickenpox clears, the virus stays in the body. If the virus reactivates (wakes up), the result is shingles — a painful, blistering rash.
Shingles is most common in older adults. A vaccine, which can prevent shingles, is available to people ages 50 and older. Dermatologists recommend this vaccine for everyone 50 and older.
If shingles develops, dermatologists recommend treatment.
If you get shingles, an antiviral medicine can make symptoms milder and shorter, and may even prevent long-lasting nerve pain. Antiviral medicine is most effective when started within 3 days of the rash appearing.
Image used with permission of the American Academy of Dermatology National Library of Dermatologic Teaching Slides. |
How do these systems function in the body normally? What are their roles in maintaining homeostasis (balance)? Sickle cell disease affects the red blood cells, which use the protein hemoglobin to transport oxygen from the lungs to the rest of the body. Normally, red blood cells are flexible and round, so they can travel freely through the blood vessels.
What specific organs make up the systems that are affected by sickle cell, and what role does each of them play? In the blood vessels, the irregularly shaped cells get stuck and are unable to transport oxygen. The spleen, kidneys, liver, lungs, heart, and other organs are damaged when oxygen is not transported to them. Patients do not live as long as those who have healthy red blood cells.
How does sickle cell disease affect organ function and disrupt homeostasis? Oxygen does not get through, damaging the body's organs; a steady supply of oxygen passing through is essential.
What is the inheritance pattern? It is inherited in an autosomal recessive pattern.
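Autosomal recessive inheritance means a child must inherit a defective copy of the gene from both parents to be affected. The classic Punnett square for two carrier parents can be sketched in Python (the 'A'/'a' allele notation here is our own, for illustration):

```python
from collections import Counter
from itertools import product

# Each carrier parent (Aa) passes on one allele;
# 'a' represents the recessive sickle cell allele.
parent1 = parent2 = ("A", "a")
genotypes = ["".join(sorted(pair)) for pair in product(parent1, parent2)]

for genotype, count in sorted(Counter(genotypes).items()):
    print(genotype, count / len(genotypes))
# AA 0.25 (unaffected), Aa 0.5 (carrier), aa 0.25 (affected)
```

So on average 1 in 4 children of two carrier parents is affected, and half are carriers like their parents.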
What specific chromosome is affected? Chromosome 11
What is the specific name of the defective gene? Sickle cell is an autosomal recessive disorder caused by defects in the HBB gene that codes for the protein hemoglobin.
What specific protein is affected? Hemoglobin
How are people with sickle cell disorder affected? What can or can’t their bodies do? The disease affects the red blood cells, which use a protein called hemoglobin to transport oxygen from the lungs to the rest of the body. In affected individuals the hemoglobin molecules do not form properly, causing red blood cells to become rigid and take on a sickle (crescent) shape. These misshapen cells can get stuck in the blood vessels and are unable to transport oxygen.
What are the symptoms of the disorder, and at what age do they begin to show? The disease prevents oxygen from reaching body organs, causing a great deal of damage. Other symptoms include shortness of breath, dizziness, headaches, and coldness in the hands and feet. The disorder is present at birth, but signs usually do not show until about 4 months of age.
What is the life expectancy of someone with sickle cell anemia? Historically, patients often died between the ages of 20 and 40 due to organ failure. Now, with better understanding and technology, patients can live into their 50s and beyond.
What is the frequency of the condition, or how common is it? It affects millions of people throughout the world and is particularly common in people whose ancestors come from Spanish-speaking regions (Cuba, South America, and Central America) and Mediterranean countries. Approximately 2 million Americans, 1 in 12 African Americans, and 1 in 16 Hispanics carry the sickle cell trait.
Are some populations or races more prone to having this disorder? Yes; it most commonly affects African-Americans. About 1 out of every 500 African-American babies in the U.S. is born with sickle cell anemia.
Can it be diagnosed without genetic screening? No. Doctors may observe the symptoms but cannot know for certain that it is sickle cell anemia without screening.
Is there currently available treatment or therapy for this? What are the costs and/or inconveniences involved in the treatment? Babies and young children take a daily dose of penicillin to prevent deadly infections. Doctors also advise drinking plenty of water, getting rest, and avoiding strenuous physical activity.
What is the outcome without genetic screening and treatment? If patients with this disease are not treated, it can lead to organ failure and other complications, causing death.
When and how are individuals screened for this genetic disorder? The most common screening tests include a complete blood count (CBC), hemoglobin electrophoresis, and the sickle cell test.
What You Need:
- Printer paper
- Tape
- Pennies
- Paper plate
What You Do:
- Explain to your child that birds have hollow bones. Hollow objects are lighter than solid objects, and because of this, birds use less energy in flight and need less food. Hollow bones may not seem very strong to your child, but in this experiment she'll see just how strong they can be!
- Starting at one of the shorter sides, have your child roll a sheet of printer paper into a tube approximately 1 inch in diameter and 11 inches tall. Tape the edges of the paper so the tube doesn't unroll. Repeat this step with two more sheets of paper so you have three "bones" in all.
- Have your child stand the three bones on end and then balance a paper plate on top. It may help to tape the bones to the bottom of the plate to keep the structure from falling.
- Ask your child how many pennies she thinks it will hold. Have your child write her estimate down on paper, then write down your own estimate.
- Add pennies to the plate one at a time to see how many the structure can hold. Distribute the pennies evenly around the center of the plate to keep the structure balanced.
- Continue adding pennies until the bones collapse and the structure falls. Have your child count the pennies. Were either of your estimates close?
Extension Idea: Practice using the scientific method by asking your child to state a hypothesis and then work to support or disprove it.