Starting a conversation with a child about online safety
The most important activity you can undertake to help a child to stay safe online is to have a conversation.
- reassure them that you're interested in their life, offline and online. Recognise that they'll be using the internet to research homework as well as talking to their friends.
- ask your child to show you what they enjoy doing online or apps they’re using so you can understand them.
- be positive but also open about anything you're worried about. You could say "I think this site's really good" or "I'm a little worried about things I've seen here."
- ask them if they're worried about anything, and let them know they can come to you.
- ask them about their friends online and how they know they are who they say they are.
- listen for the reasons why your child wants to use apps or sites you don't think are suitable, so you can talk about these together.
- ask your child what they think's okay for children of different ages so they feel involved in the decision making.
- become a friend or follower on the social media accounts that the child has signed up to.
If you have been a victim of online sexual abuse, or you're worried this is happening to someone you know, you can report it on this site.
Children and young people can be groomed online or in the real world, by a stranger or by someone they know. If you're worried about a child, the NSPCC has advice to help, which can be accessed via the link below.
Fortnite is an online game which has received a great deal of media attention. Below is an advice sheet to help parents understand and limit the potential risks associated with this game.
The use of social media is widespread amongst pupils in primary schools despite a minimum age of 13 being stipulated by most providers. Below is a short leaflet which outlines the key issues.
Parents wishing to support their children in using the internet safely will find lots of very useful tips and information on this site.
Thinkuknow offer a range of information about keeping yourself or a child you know safe from child sexual exploitation.
For 5 to 7 year olds go to:
For 8 to 10 year olds go to:
For 11 to 13 year olds go to:
Below is a very well-written magazine on the theme of online safety. If you find it of interest, you can subscribe to receive new issues by following this link: |
Making Vocab Connections
"Word wall" used to be a term heard only in elementary schools. Nowadays, teachers of all grade levels are finding word walls effective and worthwhile for developing and reinforcing content area vocabulary.
In 2011, The National Research Council (NRC) developed A Framework for K-12 Science Education, which identified practices of science and engineering that are essential for all students to learn. One of these practices involves students obtaining, evaluating and communicating information. It states:
Any education in science and engineering needs to develop students’ ability to read and produce domain-specific text. As such, every science or engineering lesson is in part a language lesson, particularly reading and producing the genres of texts that are intrinsic to science and engineering. (NRC Framework, 2012, p. 76)
By using a word wall in the science classroom, the teacher increases access to domain-specific text and can refer the students to it often. Teachers should have students interact with the word wall to help develop concepts and demonstrate content mastery, for example by adding arrows to show sequence or by grouping words into categories. By interacting with the word wall often, students will start making connections between the science concepts they are currently learning and previously learned science topics.
It’s important when making word walls that a picture or illustration is used alongside each vocabulary term to serve as a visual cue. This is especially important for your English language learners, who have been shown to benefit from increased exposure to print and language.
Limited Space? Get Creative!
Sometimes wall space is limited in the classroom, forcing teachers to get creative. Some of the most underused real estate in the classroom is the ceiling – start building your students’ vocabulary above their heads. At least when all faces are turned toward the sky, you’ll have immediate feedback that the word “ceiling” is in use.
Using the back of the classroom door or the fronts of cabinets will still allow students to see and interact with the scientific terminology. However, the words will need to be changed between chapters or units, so make it easy on yourself: place Velcro strips on the surface and on the backs of the cards for a clean and easy exchange.
Download my FREE Nature of Science Word Wall to help you get started increasing your students’ use of science vocabulary.
|Click to download|
Looking for more? Check out my Physical Science Word Wall Bundle with over 240 words to use throughout the school year. |
Desiccation in woody plants is common in Colorado, because soils range from sandy to clay and plants rely almost totally on supplemental watering. In addition, nearly all woody plants found in Colorado are imported from different parts of the country and have varying moisture requirements.
Symptoms of desiccation appear when too little or too much water is available. It also occurs in plants that have weak root systems; most often in transplants. Too much water drives oxygen from the soil, which leaves roots unable to absorb water and nutrients. Symptoms of desiccation appear over the course of a day in small, newly transplanted plants. A large conifer may not show symptoms for a couple of years.
In response to desiccation, many plants will try to adapt. Established deciduous woody plants show the most obvious signs of desiccation. To conserve water, these plants often shed some or all of their leaves. In the fall, early coloration is a sign of water stress. Cottonwoods, aspens and willows also drop twigs and small branches in an effort to conserve water. After several years of inadequate water, the annual growth becomes weak.
Conifers are much less susceptible to desiccation. Their needles are well adapted to xeric conditions. If water stress does occur, it’s most likely to happen in spruce and fir. In these plants, you typically won’t see symptoms until the following growing season when stunted and off-color needles appear.
Desiccation in broad-leaved evergreens, such as Oregon grape and pyracantha, often appears in leaves, which brown at the margins. To protect broad-leafed evergreens, plant them in a protected area, such as the north side of a structure.
Don’t flood plants to correct water deficits. Flooding, even for a short period, can quickly kill a woody plant.
For “Fall & winter watering” refer to message number 1706.
For more information, see the following Colorado State Extension fact sheet(s). http://www.ext.colostate.edu/ptlk/2105.html |
A significant proportion of the food in the deepest ocean falls from discarded giant larvacean houses.
Up until a few years ago the food chain in the darkest depths of the ocean floor was a puzzle. The volume of nutrients sinking down from above wasn’t enough to support the amount of life found in the abyss, at least as measured by sediment traps. These are essentially big underwater funnels, pointing towards the surface, that let researchers capture and measure stuff floating down from above. The numbers from these traps just didn’t add up; they were missing something.
Enter larvaceans. These little ocean-dwelling creatures look a little like tadpoles, with a bulging head and a long tail. They’re found around the world, and it turns out they are responsible for a significant proportion of that missing food. Not the larvaceans themselves, though… the creatures of the seabed feed on their falling houses.
“House” here is a technical term for a kind of translucent shell, similar to the exterior of sea urchins (itself known as a “test”, for some reason). The larvaceans construct their houses out of mucus and use them to feed. The houses are enormous traps for tiny plankton; the larvaceans beat their tails to pump water through the house and filter out the food.
The larvacean house is fragile but also enormous. Giant larvaceans such as the beautifully named Bathochordaeus charon get about 6cm long, but their houses are up to a metre in diameter. Nevertheless, these houses are temporary; when the filters get clogged (for example, by seaborne plastic particles) the larvacean simply detaches from the house and builds a new one. The house itself sinks down into the depths and becomes that missing link in the deep ocean food chain.
[Thanks to Beneath the Blue] |
- Water contact
For example, in a traditional hand-excavated water well, the level at which the water stabilizes represents the water table, or the elevation in the rock where air starts to occupy the rock pores.
In most situations in the hydrocarbon industry the term is qualified as being an oil-water contact (abbreviated to "OWC") or a gas-water contact ("GWC"). Often there is also a gas-oil contact ("GOC").
In an oil or gas field, hydrocarbons migrate into rocks and can be trapped if there is a permeability barrier to prevent upward escape. Gas and oil are lighter than water, so they will form a bubble at the high end of the "trap" formed by the impermeable barriers. A simple physical model of this would be a coffee cup held upside down underwater with an air bubble occupying the highest portion of the cup's interior. The base of the bubble is the water contact.
Capillary action can obscure the true water contact in permeable media like sandstone. Capillary pressure prevents the hydrocarbons from expelling all of the water in the pores, which creates a transition zone between the fully saturated hydrocarbon levels and the fully saturated water levels.
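The thickness of this transition zone can be estimated from the standard capillary-rise relation. The sketch below is illustrative only: the interfacial tension, contact angle, pore radii and density difference are assumed values, not data from any particular field.

```python
import math

# Rough estimate of the capillary transition zone above the free-water level.
# All input values are illustrative assumptions, not field data.

G = 9.81  # gravitational acceleration, m/s^2

def capillary_rise(ift, contact_angle_deg, pore_radius, delta_rho):
    """Capillary rise h = 2*sigma*cos(theta) / (r * delta_rho * g), in metres."""
    return (2.0 * ift * math.cos(math.radians(contact_angle_deg))
            / (pore_radius * delta_rho * G))

ift = 0.03         # oil-water interfacial tension, N/m (assumed)
theta = 0.0        # strongly water-wet rock (assumed)
delta_rho = 200.0  # water density minus oil density, kg/m^3 (assumed)

h_fine = capillary_rise(ift, theta, 1e-6, delta_rho)    # ~1 micron pore throats
h_coarse = capillary_rise(ift, theta, 1e-4, delta_rho)  # ~100 micron pore throats

print(f"transition zone, fine pores:   {h_fine:.1f} m")
print(f"transition zone, coarse pores: {h_coarse:.2f} m")
```

Note how the finer pore throats give a transition zone tens of metres thick, while coarse, permeable rock confines it to a few tens of centimetres; this is why the effect is most troublesome in tight or poorly sorted intervals.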
In poorly porous intervals, the oil-water, gas-water or gas-oil contacts can be similarly obscured, which makes estimation of hydrocarbon reserves difficult. Descriptions of the well's petrophysics will then often qualify further, delineating gas-down-to, oil-up-to, oil-down-to and water-up-to lines, clearly showing the uncertainties involved.
Wikimedia Foundation. 2010. |
Plate boundaries: divergent
a) Divergent plate boundaries and seafloor-spreading
Following the Second World War, geological and geophysical oceanography applied newly developed techniques such as echo-sounding, rock dredging, and piston coring to the study of the oceans. These data, along with gravity and magnetic data collected throughout the oceans, led to the discovery of a worldwide mid-ocean ridge system. Furthermore, it was recognized that these ridges were tensional, not compressional, structures, and that their history could be deciphered using the magnetic stripes on the sea floor (see paleomagnetism). |
Triassic Reptile Shows Example of Convergent Evolution with Dinosaurs
A team of scientists, including researchers from the Virginia Tech College of Science and the University of Chicago, has identified what is potentially a new species of Triassic Archosaur, one that shared some remarkable anatomical characteristics with its much later and very distant relatives, the bone-headed dinosaurs. The little reptile, fossils of which were excavated from Upper Triassic deposits in Howard County (Texas, USA), has been named Triopticus primus. Its skull shows a shape and morphology similar to those of the Late Cretaceous pachycephalosaurid dinosaurs, animals that lived more than 100 million years later.
A Graphical Representation Showing Convergence Between Triassic Archosaurs and Later Archosaurs (Dinosauria)
Picture Credit: Current Biology
Similarities in body plan evolution are relatively commonplace in the history of animal life on our planet. For example, the wings of pterosaurs, bats and birds are superficially similar, as they are all adapted to powered flight. Ichthyosaurs and dolphins have very similarly shaped, streamlined bodies, adaptations to a nektonic marine existence. Surprisingly, the researchers identified numerous additional taxa in the fossil deposits of Howard County (the Otis Chalk assemblage from the Dockum Group of Texas) that demonstrate the early acquisition of morphological novelties that were later to appear in other members of the Archosauria, most notably the Dinosauria.
Dominating Terrestrial Environments
That Triassic Archosaurs evolved body plans comparable to those seen in later members of this extensive reptilian group, such as the Dinosauria, is not all that surprising when you consider it. Towards the end of the Triassic the Archosauriformes had established themselves as the dominant terrestrial vertebrates, a position that one specialised group of Archosaurs, the Dinosauria, was to take up and not relinquish for another 150 million years or so. A number of authors have challenged some of the conclusions of the paper, entitled “A Dome-Headed Stem Archosaur Exemplifies Convergence among Dinosaurs and Their Distant Relatives”; nesting Triopticus primus within the basal Archosauriformes, as the paper does, is not without controversy, since the skull is very different from those of other Archosaurs. Not until the likes of the pachycephalosaurid Stegoceras appear in the Late Cretaceous is a skull shape like that of Triopticus, with its expanded cranium lacking an upper temporal fenestra, seen again.
Corresponding Author of the Scientific Paper Michelle Stocker and a Cast of the Triopticus Skull
Picture Credit: Virginia Tech College of Science
Although the exact taxonomic affinity of Triopticus is controversial, the Otis Chalk deposits may reveal more examples of convergent evolution. If Triopticus is classified as a member of the Archosaur group, then its fossils may demonstrate that some types of dinosaur evolved body plans very similar to their Triassic-aged relatives. If this is the case, then early evolution of body plans may have constrained later Archosaurs in the type of body plans that they could evolve.
Whatever the relationship to Archosaurs, Triopticus primus evolved a very thickened skull, quite what for remains a mystery.
It Looks Like Pachycephalosaurs were not the First “Bone Heads”
Picture Credit: Everything Dinosaur |
Today, astronomers announced what is officially the most massive collision ever detected. Two monster black holes met, danced and fell into one another; their collision formed a black hole 150 times more massive than the sun.
From Dancing Black Holes To The Ghost Dogs Of The Amazon
But having the first image will enable researchers to learn more about these mysterious objects. They will be keen to look out for ways in which the black hole departs from what is expected in physics.
"Although they are comparatively simple objects, black holes raise some of the most complex questions about the nature of space and time, and ultimately of our existence," he said. "There are many ideas about how to get around this: merging two stars together, embedding the black hole in a thick disc of material it could swallow, or primordial black holes created in the aftermath of the Big Bang," Berry said. "The idea I really like is a hierarchical merger, where we have a black hole formed from the previous merger of two smaller black holes." But a star that collapses should not be able to produce a black hole in the range of 65 to 120 solar masses, which is called the pair-instability mass gap. This is because the most massive stars are obliterated by the supernova that comes hand in hand with their collapse.
No one really knows how the bright ring around the hole is created. Even more intriguing is the question of what happens when an object falls into a black hole.
- Astronomers believe that supermassive black holes lie at the centre of virtually all large galaxies, even our own Milky Way.
- Astronomers can detect them by watching for their effects on nearby stars and gas.
- Nothing, not even light, can escape from inside the event horizon.
- The defining characteristic of a black hole is the appearance of an event horizon: a boundary in spacetime through which matter and light can move only inward towards the mass of the black hole.
"This makes us confident about the interpretation of our observations, including our estimation of the black hole's mass." This effectively creates a virtual telescope about the same size as the Earth itself. "We have seen what we thought was unseeable," said Sheperd Doeleman, director of the Event Horizon Telescope Collaboration. A black hole is a region of space that possesses so much gravity that nothing can escape its pull, not even light. Learn more about what black holes are and the latest news.
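The resolving power of such an Earth-sized virtual telescope can be checked with the standard diffraction limit, theta ≈ lambda / D. The numbers below, a 1.3 mm observing wavelength and an Earth-diameter baseline, are commonly quoted approximations rather than figures from this article:

```python
import math

# Diffraction-limited angular resolution of a baseline the size of the Earth.
wavelength = 1.3e-3   # observing wavelength in metres (~1.3 mm, assumed)
baseline = 1.27e7     # Earth's diameter in metres (approximate)

theta_rad = wavelength / baseline                    # ~1e-10 radians
theta_uas = math.degrees(theta_rad) * 3600.0 * 1e6   # in microarcseconds
print(f"angular resolution ~ {theta_uas:.0f} microarcseconds")
```

A resolution of a few tens of microarcseconds is what makes the shadow of a supermassive black hole resolvable at all; no single dish comes close to this.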
The explosion was 570 billion times brighter than the sun and 20 times brighter than all the stars in the Milky Way galaxy combined, according to a statement from The Ohio State University, which is leading the research. Scientists are struggling to explain the supernova's power.
Then, there is this new intermediate black hole, which is in between the two. It was formed by two massive black holes that were likely created by collapsing stars. Of the two black holes that merged, the heavier one was 85 solar masses and the other was about 66 solar masses. (CNN) Astronomers have detected the most massive merging of two black holes yet, through the oldest and most distant gravitational waves ever to reach Earth. An international team of astronomers may have found the biggest and brightest supernova ever.
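The mass bookkeeping behind these merger figures can be checked directly. Using the article's rounded numbers (85 and 66 solar masses in, roughly 150 out), the difference is the mass radiated away as gravitational-wave energy via E = mc²:

```python
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # one solar mass, kg

m1, m2 = 85.0, 66.0  # merging black holes, solar masses (from the article)
remnant = 150.0      # remnant mass, solar masses (article's rounded figure)

radiated_msun = m1 + m2 - remnant          # mass lost in the merger
energy_joules = radiated_msun * M_SUN * C**2

print(f"radiated: {radiated_msun:.0f} solar mass, ~{energy_joules:.2e} J")
```

With these rounded figures roughly one solar mass is converted to energy; published analyses of such events put the radiated mass somewhat higher, so treat this purely as an order-of-magnitude check.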
No single telescope is powerful enough to image the black hole. So, in the biggest experiment of its kind, Prof Sheperd Doeleman of the Harvard-Smithsonian Centre for Astrophysics led a project to set up a network of eight linked telescopes. Together, they form the Event Horizon Telescope and can be considered a planet-sized array of dishes. |
How does a cephalopod see the world? If you’re a cuttlefish in this experiment, you see it through a pair of 3D glasses, and your world is lit by videos of shrimp.
This was the unlikely experimental set-up of a research project in the United States that investigated just how the cuttlefish (Sepia officinalis) is able to snatch prey with its tentacles while its eyes swivel independently. The cuttlefish’s relatives, octopuses and squid, don’t have 3D vision, but researchers suspected that cuttlefish do.
Humans see in three dimensions thanks to a process called stereopsis, or depth perception: each eye, when looking at the same scene, perceives each object being in a slightly different position. Our brain triangulates, allowing us to accurately measure the objects’ distance from us. But what happens when your eyes look in different directions at the same time?
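The triangulation just described can be sketched with a simple pinhole-camera model: the closer an object, the larger the disparity between the two eyes' images, so depth follows Z = f·B/d. All numbers here are illustrative, not measurements from the study:

```python
def depth_from_disparity(focal_length, baseline, disparity):
    """Distance Z = f * B / d: nearer objects produce larger disparity."""
    return focal_length * baseline / disparity

f = 0.008  # effective focal length of an eye, m (illustrative)
b = 0.065  # separation between the two eyes, m (illustrative)

near = depth_from_disparity(f, b, 2.6e-4)  # large disparity -> close object
far = depth_from_disparity(f, b, 2.6e-5)   # small disparity -> distant object

print(f"near: {near:.1f} m, far: {far:.1f} m")
```

A brain running this computation in reverse can judge strike distance from disparity alone, which is exactly what the shrimp-projection experiment set out to test.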
This is where the 3D-spectacled cuttlefish come in. Participants were dubbed Supersandy, Sylvester Stallone, Long Arms and Inky, and introduced to their underwater movie theatres. Like 3D cinema technology, two identical images of shrimp were projected so that each cuttlefish eye would perceive them in a slightly different position. Some of the shrimp would appear to be in front of the screen and others behind it. If the cuttlefish were using stereopsis, their eyes would combine the two images to give 3D information, and they would strike with their tentacles accordingly.
This is exactly what happened. When the cuttlefish ‘saw’ a shrimp projected up close, they reversed and shot their tentacles out in front of them. Then, when the shrimp appeared to be further away, the cuttlefish swam right into the wall of the tank in pursuit of it.
The cuttlefish were seeing with stereopsis, but they weren’t using the same neural circuitry to process the images as humans. For instance, they didn’t have perception problems when one of the two images was brighter than the other, as humans would, or when one eye looked in a different direction. This suggests that what the cuttlefish visualise is different from what humans perceive.
Another recent study that delved into the complex brains of cephalopods mapped the mind of the bigfin reef squid (Sepioteuthis lessoniana) with MRI, creating an atlas of its neural connections. The researchers, from the Queensland Brain Institute at the University of Queensland, identified new connections among the squid’s 500 million neurons, the majority of which were connected to vision or movement. Their brains, say the researchers, approach those of dogs in their complexity: squid can count, solve problems, recognise patterns, and camouflage themselves, despite being colour-blind. |
Hydronephrosis is a condition characterized by the dilation or stretching of the inside of the kidney where urine collects.
Antenatal (before birth) hydronephrosis (fluid enlargement of the kidney) is detected in the fetus by ultrasound studies performed as early as the first trimester of pregnancy. In most cases, the diagnosis of antenatal hydronephrosis does not significantly change prenatal care, but will require follow-up ultrasounds and possible surgery during infancy and childhood.
Common causes of hydronephrosis include:
Reflux is the most common abnormality causing hydronephrosis and occurs because the valve between the bladder and the ureter does not function appropriately. This abnormality enables urine to flow backwards from the bladder into the ureter, and kidney. Most children outgrow reflux, but require close monitoring and may benefit from low-dose antibiotics to prevent kidney infection. Approximately 25% of children will eventually require surgery to correct the reflux, either because they fail to outgrow the reflux, or because of breakthrough urinary tract infections.
Blockage of the kidney may occur in several locations. The most common site for obstruction to occur is the ureteropelvic junction (UPJ). This is the site where the kidney drains into the beginning of the ureter. A blockage may also occur where the ureter drains into the bladder, called the ureterovesical junction (UVJ).
Posterior urethral valves (PUV)
In boys, a blockage may occur in the urethra, where the urine drains from the bladder out of the body. This is called a posterior urethral valve (PUV). True obstructions usually require surgical correction.
Multicystic Dysplastic Kidney
A multicystic dysplastic kidney (MCDK) is a kidney that has no function because it is made up of multiple fluid-filled cysts. Most of the time, the cysts shrink and eventually disappear. Occasionally, the cysts are very large, requiring surgery to remove the kidney.
Management During Pregnancy
In nearly all cases of antenatal hydronephrosis, additional ultrasound exams are the only special treatment necessary during the pregnancy. Rarely, a fetus is found to have severe obstruction of both kidneys and an abnormally low amount of amniotic fluid. Early intervention is sometimes recommended for these babies. For most cases of antenatal hydronephrosis, the pregnancy is not affected, and delivery is performed normally.
Management After Birth
After the baby is born, an initial ultrasound is usually performed at the hospital. Sometimes, depending on the severity of the hydronephrosis, we suggest the baby be started on a low-dose antibiotic until we are able to complete the required testing. If the hydronephrosis is still present on the postnatal ultrasound, we first check to see if the baby has reflux. This is done with a voiding cystourethrogram (VCUG), which requires a catheter inserted into the bladder and x-ray pictures. Infants with reflux are managed with routine ultrasounds and possibly low-dose antibiotics.
In babies without reflux who continue to show significant hydronephrosis after birth, it may be necessary to assess for a possible obstruction. This is done with a diuresis renal scan, which requires an IV, and sometimes a urinary catheter. Most blockages require surgical correction. If the blockage is mild or partial, the baby may be followed closely with ultrasounds, and sometimes additional renal scans. Over time, the hydronephrosis will either improve and be followed with continued observation, or worsen, which may require surgery.
Some babies will have hydronephrosis without reflux or obstruction. These children are usually followed with periodic ultrasounds to monitor the hydronephrosis and make sure the kidneys are growing appropriately, and that obstruction does not develop as they grow.
At Beaumont, every child receives care tailored to his or her condition and symptoms. No matter what treatment(s) your child needs, we will keep you informed at all times. You are an important part of your child’s treatment team, and we value your input. |
Agricultural enterprises in Latin America, Asia, and Africa use a fermented animal feed called silage to sustain the productivity of their operations. Silage is made by fermenting a variety of green fodder crops; among these, maize is the most commonly used. The harvested material is piled into large heaps or stored in silos, which can be constructed above ground or buried underground, so that sufficient quantities of fodder are available at any given time.
In this modern day and age, the primary function of these silos is to store vast amounts of wet or waxy material until it is required elsewhere. Even though they have become functional and indispensable facilities, they can be a source of environmental pollution: as silage effluent continually accumulates, moisture levels in the surrounding soil rise, while nutrients such as phosphorus and potassium remain scarce under the widespread drought conditions seen across the globe.
Beyond bale wrapping, there are several methods of making silage from the ingredients mentioned above. One of the most common is the pit method. The chopped material is packed into an excavated pit or hole, which is lined with earth to withstand the weight of the fill. Inside this pit the material heats up as fermentation begins, releasing moisture; the resulting effluent is drained off and the pit is covered with soil, and the procedure is repeated until no more free moisture is released.
Efficient breakdown of the plant material also depends on micro-organisms that decompose the cellulose and lignin it contains. The mixture in the pit needs to support as many of these micro-organisms as possible to improve the quality and quantity of the product; they consume the available oxygen as they digest and degrade the cellulose and woody material. Ideally, a four to six week period is more than enough time for the silage to pass through this initial stage.
During the lactic acid fermentation, the lactic acid bacteria acidify the silage bed and cause the mixture to bind together and consolidate into a solid mass which, as it settles naturally, excludes most of the available oxygen from the surrounding air. The silage bed must therefore be monitored continually to ensure the mixture consolidates into a high-quality product. Once the bed has fully settled, the fermentation phase is finished and the silage is left to cure completely before it is ready for sale.
While silage making is a relatively simple method of creating high-quality feed from raw forage, it is always wise to follow good packing practices when putting up silage. Forage that has not been packed adequately will trap too much air; the excess oxygen stops the silage from fermenting and acidifying fully, so it will not reach the low pH necessary for maximum shelf life. A good way to tell whether forage has ensiled properly is to smell it: if it smells pleasantly acidic, a little like coffee or tea, it has probably been processed appropriately and will make an excellent addition to any silage batch. |
Rain water sinks through the porous rocks but once it reaches the underlying clays it can sink no further. The water builds up along the junction between the rock layers and seeps out of the cliffs as a series of springs.
After periods of prolonged rainfall, the build-up of water increases the weight of the cliff top. Increased pore pressure reduces the friction and allows large sections of the cliff top to break away.
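The role of pore pressure in this step can be sketched with the Mohr-Coulomb effective-stress rule: shear strength = cohesion + (normal stress − pore pressure) × tan(friction angle). The numbers below are illustrative assumptions, not measurements from any real cliff:

```python
import math

def factor_of_safety(shear_stress, normal_stress, pore_pressure,
                     cohesion, friction_angle_deg):
    """Ratio of available shear strength to driving shear stress (<1 = failure)."""
    strength = cohesion + (normal_stress - pore_pressure) * math.tan(
        math.radians(friction_angle_deg))
    return strength / shear_stress

# Same slip plane before and after prolonged rainfall (illustrative values, Pa):
dry = factor_of_safety(40e3, 100e3, 0.0, 5e3, 30.0)
wet = factor_of_safety(40e3, 100e3, 60e3, 5e3, 30.0)

print(f"factor of safety dry: {dry:.2f}, wet: {wet:.2f}")
```

Raising the pore pressure leaves the driving stress unchanged but cuts the frictional resistance, dropping the factor of safety below 1, which is when a cliff-top block breaks away.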
As the cliff top block subsides, it rotates along the slip plane within the cliff, resulting in the flat surface tipping back towards the cliff.
The main type of landslide that occurs at the coast is called slumping.
Slumping (also known as landslips) occurs in weaker rocks and involves some rotational movement. It can be triggered by heavy rainfall or earthquakes. Characteristic features include back-tilted slopes, large blocks breaking away, and a concave sliding surface. Soft boulder clay cliffs can be undercut by the sea, and slumps are common there. |
Scientists have discovered a way of converting skin cells directly into neural precursor cells - the cells that develop into the three main components of the brain and nervous system.
Researchers from Stanford University School of Medicine are celebrating a new medical breakthrough - a process similar to using stem cells to create new cells in the body. Except this time, scientists have transformed skin cells directly into neural precursor brain cells, skipping the ‘middle stem cell’ stage in the process.
The American research team infected the skin cells of mice with a virus in order to reprogramme them into neural precursor cells. The scientists were able to take these skin cells and turn them directly into brain cells without passing through the stem cell stage.
It was found that in just three weeks, one in ten of the skin cells had started to look and act like working neural precursors.
To confirm this result, researchers injected the new neural cells into the brains of newborn mice that had been bred to lack the ability to make the myelin sheath, an insulating layer that surrounds nerve fibres and allows impulses to travel along them.
After ten weeks, these new cells had matured into oligodendrocytes, the brain cells that wrap nerve fibres in myelin and allow signals to be transmitted between neurons.
"We've shown the cells can integrate into a mouse brain and produce a missing protein important for the conduction of electrical signal by the neurons. This is important because the mouse model we used mimics that of a human genetic brain disease," says the author of the study, Dr Marius Wernig.
"We are thrilled about the prospects for potential medical use of these cells."
These findings follow an earlier study that transformed mouse and human skin cells into functional neurons.
Scientists are hoping that this breakthrough could pave the way for tests using embryonic cells from humans, as it holds huge promise for treating a range of conditions, such as strokes and blindness, to name a few.
However, this is still a long way off because of the ethical concerns around the human embryonic stem cell system.
A human body contains more than 10^27 molecules, with about one hundred thousand different shapes and functions. Interactions between molecules determine our structure and keep us alive. Researchers at the Max Planck Institute for Solid State Research in Stuttgart, in collaboration with scientists from the Fraunhofer Institute in Freiburg and King's College London, have followed the interaction of just two individual molecules to show the basic mechanism underlying the recognition of dipeptides.
By means of scanning tunnelling microscopy movies and theoretical simulations they have shown how dynamic interactions induce the molecular fit needed for the transfer of structural information to higher levels of complexity. This dynamic picture illustrates how recognition works at the very first steps, tracking back the path in the evolution of complex matter. (Angewandte Chemie International April 20th 2007)
When one considers that there are thousands of times more molecules forming our body than stars in the universe, it is astonishing that all these molecules can work together in such an organised and efficient way. How can our muscles contract to make us walk? How is food metabolised every day? How can specific drugs relieve pain?
To work as a perfect machine, our body ultimately relies on the capability of each little part (molecule) to know its specific function and location out of countless possibilities. To do this, molecules carry information in different ways. An international team at the Max Planck Institute for Solid State Research in Stuttgart, in collaboration with scientists from the Fraunhofer Institute in Freiburg and King's College London, is seeking to find out how this information is passed on at the very first steps: from the single-molecule level to structures of increasing complexity and functionality.
The key to understanding all biological processes is recognition. Each molecule has a unique composition and shape that allows it to interact with other molecules. The interactions between molecules let us - as well as bacteria, animals, plants and other living systems - move, sense, reproduce and accomplish the processes that keep all living creatures alive.
A very common example of recognition can be experienced in daily life whenever one meets someone and shakes right hands. In principle, one can also shake left hands; the fact that we do it with the right has historically been a sign of peace, used to show that both people hold no weapon. But have you ever attempted to shake the right hand of a person using your left hand? No matter how the two hands are oriented, your left hand will never fit your friend's right hand.
Many molecules can recognise each other and transfer information in exactly the same way: they can be either "right handed" (D) or "left handed" (L). This property, called "chirality", is a spectacular way to store information: a chiral molecule can recognise molecules that have the same chirality (same "handedness", L to L or D to D) and discriminate against those of different chirality (L to D and D to L).
Probably one of the most exciting mysteries of Nature is why the building blocks of life, i.e. amino acids (the building blocks of proteins), are exclusively present in the chiral L form, while sugars (which constitute DNA) are all in the D form. Once more, the reason for this preference is "historical", but this time it goes back billions of years, to the origins of the biological world. Scientists believe that current life forms could not exist without the uniform chirality ("homochirality") of these building blocks, because biological processes need the efficiency in recognition achieved with homochiral substances. In other words, the separation of molecules by chirality was a crucial process during the Archean Era, when life first emerged.
Researchers of the Max Planck Institute for Solid State Research have now used the "nanoscopic eye" of a scanning tunnelling microscope to make movies following how two adsorbed molecules (diphenylalanine, the core recognition motif of Alzheimer amyloid polypeptide) of the same chirality can form structures (pairs, chains) while molecules of different chirality discriminate and cannot form stable structures.
As when you shake the hand of your friend, the fact that the two homochiral hands are complementary in shape is not enough; you both have to dynamically adapt and adjust your hands to reach a better fit, a comfortable situation. In combination with theoretical simulations carried out at King's College London, the researchers have shown for the first time this dynamic mechanism of how two molecules "shake hands" and recognise each other through mutually induced conformational changes at the single-molecule level.
We live in houses, wear clothes and read books made of chiral cellulose. Most of the molecules that mediate the processes of life like hormones, antibodies and receptors are chiral. Fifty of the top hundred best-selling drugs worldwide are chiral. With this contribution to the basic mechanism of chiral recognition, the researchers have not only tracked back to the very first steps in the evolution of living matter but have also shed light on our understanding and control of synthetic (man-made) materials of increasing complexity.
Related link: Molecular handshake (film) -- www.fkf.mpg.de/kern/videos/videoV1.mpg
Using Logical Reasoning to Prove Conjectures about Circles
Given conjectures about circles, the student will use deductive reasoning and counterexamples to prove or disprove the conjectures.
Generalizing Geometric Properties of Ratios in Similar Figures
Students will investigate patterns to make conjectures about geometric relationships and apply the definition of similarity, in terms of a dilation, to identify similar figures and their proportional sides and congruent corresponding angles.
Determining Area: Sectors of Circles
Students will use proportional reasoning to develop formulas to determine the area of sectors of circles. Students will then solve problems involving the area of sectors of circles.
Interactive Math Glossary
Making Conjectures About Circles and Segments
Given examples of circles and the lines that intersect them, the student will use explorations and concrete models to formulate and test conjectures about the properties and relationships among the resulting segments.
Determining Area: Regular Polygons and Circles
The student will apply the formula for the area of regular polygons to solve problems.
Making Conjectures About Circles and Angles
Given examples of circles and the lines that intersect them, the student will use explorations and concrete models to formulate and test conjectures about the properties of and relationships among the resulting angles.
Domain and Range: Numerical Representations
Given a function in the form of a table, mapping diagram, and/or set of ordered pairs, the student will identify the domain and range using set notation, interval notation, or a verbal description as appropriate.
Solving Problems With Similar Figures
Given problem situations involving similar figures, the student will use ratios to solve the problems.
Transformations of Square Root and Rational Functions
Given a square root function or a rational function, the student will determine the effect on the graph when f(x) is replaced by af(x), f(x) + d, f(bx), and f(x - c) for specific positive and negative values.
Transformations of Exponential and Logarithmic Functions
Given an exponential or logarithmic function, the student will describe the effects of parameter changes.
Solving Square Root Equations Using Tables and Graphs
Given a square root equation, the student will solve the equation using tables or graphs - connecting the two methods of solution.
Functions and their Inverses
Given a functional relationship in a variety of representations (table, graph, mapping diagram, equation, or verbal form), the student will determine the inverse of the function.
Rational Functions: Predicting the Effects of Parameter Changes
Given parameter changes for rational functions, students will be able to predict the resulting changes on important attributes of the function, including domain and range and asymptotic behavior.
TXRCFP: Texas Response to Curriculum Focal Points for K-8 Mathematics Revised 2013
The Texas Response to Curriculum Focal Points Revised 2013 was created from the 2012 revision of the TEKS as a guide for implementation of effective mathematics instruction by identifying critical areas of content at each grade level.
Vertical Alignment Charts for Revised Mathematics TEKS
This resource provides vertical alignment charts for the revised mathematics TEKS.
Sunflower Biscuit Bones (PDF) | Martha Speaks
The PDF of the interactive, informational story "Sunflower Biscuit Bones" designed for in-classroom use.
In these segments, artist Mark Ecko discusses what motivates him to create his art, and what "creativity" means to him. This resource teaches students what "motivation" and "creativity" mean, and empowers students to create their own art.
Nature Cat | The Treasure of Bad Dog Bart
While digging a hole to bury his bone, Hal uncovers Bad Dog Bart's treasure map. Legend has it that Bad Dog Bart stole the neighborhood dogs' toys, and buried them in a treasure chest for himself!
Activity: I Wonder | Daniel Tiger's Neighborhood
This activity from The Fred Rogers Company helps develop curiosity and confidence in asking questions. Children will finish the sentence "I wonder . . . " in order to create a dialogue about how the world works. |
The Great Lakes and the surrounding land provide many resources for the people who live in the area. Water for drinking and industry, fish for food, minerals, and other resources are abundant. However, people change the landscape. They create wastes and add chemicals to the environment when they use resources, and these can be harmful. When many people are concentrated in one area, they may compete for resources. In addition, the wastes these people generate tend to concentrate in the area immediately around them and may cause pollution problems.
When students have completed this activity, they will be able to:
- Compare the relative sizes of the five Great Lakes and their human populations.
- Describe some of the problems that arise when many people depend on a limited resource. |
The formation of the highly weathered Southern Appalachian mountain range began with mountain building between about 1 billion and 265 million years ago. Uplift renewed about 65 million years ago, and the modern landscape began to form. The range now consists of bedrock formed from the re-crystallization of sedimentary, volcanic, and igneous material.
In the Blue Ridge Province (shown in blue in western North Carolina), erosion of uplifted mountains of resistant metamorphic and igneous rocks has produced a rugged landscape. These exposed rocks are some of the oldest rocks in the nation, dating to the Precambrian and Paleozoic eras.
The overall southwest to northeast trend of the landscape reflects a similar pattern in the underlying rocks and structures imparted during the early mountain-building events. The Appalachians formed in much the same way as the present-day Himalayan range is forming. Here, the old African plate collided with the North American plate, and a large mountain range formed that ran from the Southern Appalachians to the Northern Appalachians and continued into what is now the Highlands of Scotland. The mountains were much higher hundreds of millions of years ago; erosion has steadily worn the rock away ever since.
Geologic faults and fracture zones are shown as black lines on the map at right. A geologic fault is a crack in the earth’s surface, across which there has been significant offset or displacement. This means that—on the ground—faults represent zones where rock types and ages are very different on one side of the fault versus the other. A prime example of a fault is the long, linear topography along the Brevard fault zone in North Carolina and South Carolina. The Brevard Fault Zone—one of the longest and oldest faults in the country—is coincident with the Eastern Continental Divide in many areas.
Streams and rivers cut down through the mountains along less resistant rock types and along faults and fracture zones. Many streams that flow northwest or southeast follow trends of post-Paleozoic fracture zones in the bedrock that favor preferential downcutting and erosion by water. As streams and rivers influence the landscape, rock types—in turn—can influence streams and rivers by impacting the chemistry and stream flow. Limestone and dolomite, for example, can lead to the formation of Karst topography—or underground drainage systems—where the landscape is formed by the dissolution of soluble rock. Additionally, some rock types such as coal deposits and other related bedrock can affect water quality by fostering certain land uses over others.
On the western edge of the Blue Ridge Province, there is a major geologic fault zone called a thrust fault. A thrust fault is where old rocks have been pushed up over younger rocks during compressive mountain building events. The color change on the map from blue on the east to purple on the west indicates the older blue rocks being thrust over much younger Mississippian rock.
Within the Ridge and Valley, there are numerous fault zones (shown as black lines) which are thrust faults that set up smaller mountain ridges. As erosion has continued over the past 100 million years, the harder rocks have eroded less than the softer ones, so the fault zones show up as steeper mountain fronts. These are north-northeasterly trending folds causing valleys and ridges.
Other important areas include northern Virginia and eastern West Virginia, as well as a younger group of rocks located to the west of the much older Appalachian Mountain Front. This is a geologic basin dominated by sedimentary rocks. Sedimentary rocks, which include sandstones and shales, are composed of old, eroded rock that has been deposited and hardened by pressure and depth of burial, and geologic basins are areas where this type of rock accumulates. As explained in the Energy Resources section, this basin is the location of hydrocarbon deposits (rocks and associated fluids formed in sedimentary basins) and holds most of the coal fields in the area.
One other map that should be explored is the landslide hazard map. There is a strong relationship between landslide hazards and the dominant rock types shown on this map.
Compilation and execution of a Java program is a two-step process. During the compilation phase the Java compiler compiles the source code and generates bytecode. This intermediate bytecode is saved in a .class file. In the second phase, the Java virtual machine (JVM), also called the Java interpreter, takes the .class file as input and generates output by executing the bytecode. Java is an object-oriented programming language; therefore, a program in Java is made of one or more classes. No matter how trivial a Java program is, it must be written in the form of a class.
To demonstrate compilation and execution of a Java program we create a simple HelloWorld program. We skip the JDK installation process to concentrate on compiling and running the HelloWorld program developed in the following piece of code. While writing the HelloWorld program we must keep two rules in mind. First, the file name and the name of the class that contains the main method must be identical. Second, a file can contain more than one class, but only one of them can be declared public. Keeping these rules in mind, we create a Java program HelloWorld.java by inserting the following piece of code into a plain text file.
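The listing itself is not reproduced in the text; a minimal version, consistent with the "Hello World!" output described later in the tutorial, would be:

```java
// HelloWorld.java - the file name matches the public class that holds main
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
```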
Once the Java program is written and saved, it has to be compiled. To compile a Java program from the command line we invoke the Java compiler with the javac command. The Java compiler comes with the JDK (Java Development Kit), a bundle of software needed for developing Java applications; it includes the JRE (Java Runtime Environment), a set of API classes, the Java compiler, Web Start and additional files needed to write Java applets and applications. We compile HelloWorld.java as follows:
[root@host ~]# javac HelloWorld.java
The javac compiler creates a file called HelloWorld.class that contains the bytecode version of the program. As said earlier, the Java bytecode is the intermediate representation of the HelloWorld.java program and contains the instructions the Java interpreter will execute. The Java compiler does not execute the Java program - that is the job of the Java virtual machine. However, the Java virtual machine cannot execute .java files directly; the compiler's job is to translate Java source files into "class files" that the virtual machine can execute.
Note that the Java compiler (javac) can also compile multiple .java files together. If there is more than one Java source file in the same directory, you can either list the file names separated by spaces, or use wildcard characters, for example:
[root@host ~]# javac HelloWorld.java one.java two.java
[root@host ~]# javac *.java
After successful compilation to HelloWorld.class, to actually run the program we use the Java interpreter, called java. To do so, pass the class name HelloWorld as a command-line argument, as follows:
[root@host ~]# java HelloWorld
Hello World! will be printed on the screen as a result of the above command.
It is important to note that in the above command we have omitted the .class suffix of the bytecode file name (HelloWorld.class in our case). The java command invokes the Java Virtual Machine (written JVM hereafter). The JVM loads the class specified on the command line, invokes the main method of that class and starts executing it, passing it a single argument that is an array of strings; this array receives the command-line arguments. The JVM generally takes the following steps in order to run a Java program.
Loading refers to the process of finding the binary form of a class or interface type with a particular name computed from source code by a Java compiler. This is simply the bytecode.
In making the initial effort to execute the main method of the HelloWorld class, the JVM sees that the class HelloWorld is not loaded - that is, the JVM does not currently contain a binary representation of this class. The JVM then uses a class loader to load the class into memory; if the class file cannot be found, an error is thrown.
Linking is the process of taking a binary form of a class or interface type and combining it into the run-time state of the Java virtual machine, so that it can be executed. A class or interface type is always loaded before it is linked.
Once HelloWorld is loaded, it must be initialized before main is invoked, and HelloWorld must be linked before it is initialized. Linking involves verification, preparation, and resolution.
Verification ensures that the binary representation of a class or interface is structurally correct. For example, it checks that every instruction has a valid operation code; that every branch instruction branches to the start of some other instruction, rather than into the middle of an instruction; that every method is provided with a structurally correct signature; and that every instruction obeys the type discipline of the Java virtual machine language.
If an error occurs during verification, then an instance of the class VerifyError, which is a subclass of LinkageError, is thrown.
Preparation involves creating the static fields (class variables and constants) for a class or interface and initializing such fields to the default values. This does not require the execution of any source code; explicit initializers for static fields are executed as part of initialization, not preparation. Implementations of the Java virtual machine may precompute additional data structures at preparation time in order to make later operations on a class or interface more efficient. One particularly useful data structure is a "method table" or other data structure that allows any method to be invoked on instances of a class without requiring a search of superclasses at invocation time.
Resolution is the process of checking symbolic references from HelloWorld to other classes and interfaces, by loading the classes and interfaces that are mentioned and checking that the references are correct.
The resolution step is optional at the time of initial linkage. An implementation may resolve symbolic references from a class or interface that is being linked very early, even to the point of resolving all symbolic references from the classes and interfaces that are further referenced recursively. (This resolution may result in errors from these further loading and linking steps.) This implementation choice represents one extreme and is similar to the kind of "static" linkage that has been done for many years in simple implementations of the C language.
An implementation may instead choose to resolve a symbolic reference only when it is actively used; consistent use of this strategy for all symbolic references would represent the "laziest" form of resolution.
In this case, if HelloWorld had several symbolic references to another class, then the references might be resolved one at a time, as they are used, or perhaps not at all, if they were never used during execution of the program.
The execution of the method main of class HelloWorld is permitted only if the class has been initialized.
Initialization consists of execution of any class variable initializers and static initializers of the class HelloWorld, in textual order. But before HelloWorld can be initialized, its direct superclass must be initialized, as well as the direct superclass of its direct superclass, and so on, recursively. In the simplest case, HelloWorld has Object as its implicit direct superclass; if class Object has not yet been initialized, it must be initialized before HelloWorld is initialized. Class Object has no superclass, so the recursion terminates there.
If HelloWorld has another class SuperHello as its superclass, then SuperHello must be initialized before HelloWorld. This requires loading, verifying, and preparing SuperHello if this has not already been done and, depending on the implementation, may also involve resolving the symbolic references from SuperHello, and so on, recursively. Initialization may thus cause loading, linking, and initialization errors, including such errors involving other types.
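The superclass-first order can be made visible by giving both classes a static initializer. This is our own sketch, built on the hypothetical SuperHello class from the text; both classes can live in the same HelloWorld.java file because only HelloWorld is declared public:

```java
class SuperHello {
    static { System.out.println("SuperHello initialized"); }
}

public class HelloWorld extends SuperHello {
    static { System.out.println("HelloWorld initialized"); }

    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
```

Running `java HelloWorld` prints the superclass message first, then the subclass message, then the program output.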
After completion of the initialization for class HelloWorld (during which other consequential loading, linking, and initializing may have occurred), the method main of HelloWorld is invoked.
The method main must be declared public, static, and void, and it must accept a single argument that is an array of strings.
Finally, A program terminates all its activity and exits when one of two things happens:
I. All the threads that are not daemon threads terminate.
II. Some thread invokes the exit method of class Runtime or class System and the exit operation is not forbidden.
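The first termination rule can be illustrated with a small sketch (the class name DaemonDemo is ours): a thread marked as a daemon does not keep the JVM alive, so the program exits as soon as main returns.

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(1000); // pretend to do background work forever
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        worker.setDaemon(true); // must be set before start(); daemon threads do not block JVM exit
        worker.start();
        System.out.println("main is done; the JVM exits even though the daemon thread is still running");
    }
}
```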
The Java Virtual Machine is a program that runs precompiled Java programs: the JVM executes .class files (bytecode) and produces output. A JVM is written for each platform supported by Java and is included in the Java Runtime Environment (JRE). The Oracle JVM is written in the C programming language. There are many JVM implementations developed by different organizations. They may differ somewhat in performance, reliability and speed, and they can also differ in implementation, especially in those features where the Java specification does not mention implementation details. Garbage collection is a good example: it is left to the vendor's choice, and the Java specification does not provide any implementation details.
The JRE is an implementation of the JVM which actually executes Java programs. It includes the JVM, core libraries and other additional components to run applications and applets written in Java. The Java Runtime Environment must be installed on a machine in order to execute precompiled Java programs. The JRE is smaller than the JDK and needs less disk space because it does not contain the Java compiler and the other software tools needed to develop Java programs.
The Java Development Kit is needed for developing Java applications. It is a bundle of software used to develop Java-based applications, and includes the JRE, a set of API classes, the Java compiler, Web Start and additional files needed to write Java applets and applications.
In conclusion, to compile and run a Java program you need the JDK installed, while to run a precompiled Java class file (bytecode) you need only the JRE. The JRE contains java, the Java interpreter, but not javac, the Java compiler.
In this tutorial we explained how a Java program is compiled and executed from the command prompt. We hope you have enjoyed reading it; please write to us if you have any suggestions or comments, or if you come across any error on this page. Thanks for reading!
Bethlehem, Pennsylvania: A Moravian Settlement in Colonial America (59)
Learners explore why Moravians immigrated to the New World and how the towns they established embodied their religious beliefs.
5th - 12th Social Studies & History
Colonial America and The American Revolution
How did the founding of the American colonies lead to a revolution? Use the essential question and sample activities to guide learners through a series of history lessons. Additionally, the packet includes effective strategies to...
6th - 8th Social Studies & History CCSS: Adaptable
The American Revolution Introduction
Young historians create a replica of Jamestown Virginia, analyze the Mayflower Compact, practice penmanship by copying a famous quote by William Bradford, and read a great informational text on the beginnings of colonial America with a...
4th - 8th Social Studies & History CCSS: Adaptable
Culture Regions of the U.S.
Young scholars identify the location of different cultural groups within the United States (agricultural, retirement, urban, etc.) They map these areas and analyze the correlation between the landscape of a given region and the type of...
9th - 12th Social Studies & History
Interpreting Foundation Documents of the American Republic
Explore early American documents that qualify as primary sources. Tenth and eleventh graders use the provided worksheets to analyze the texts of the Articles of Association, the Declaration of Independence, the Articles of Confederation,...
10th - 11th Social Studies & History CCSS: Adaptable
We Have a Story to Tell: Native Peoples of the Chesapeake Region
How did colonial settlement and the establishment of the United States affect Native Americans in the Chesapeake region? Your young historians will analyze contemporary and historical maps, read informational texts, and work in groups to...
9th - 12th Social Studies & History CCSS: Adaptable
Plymouth Colony
Read about the tumultuous beginning to the United States with an informational text passage about Colonial America. As young researchers peruse an article about the arrival of the Mayflower, the settlers' relationship to the neighboring...
11th - 12th English Language Arts CCSS: Designed
The 13 Originals: Exploring the Who, When, Where, and Why Behind the 13 Original Colonies of Early America
Discover the stories behind each of the thirteen stripes on the American flag with this straightforward presentation. Complete with learning objectives, discussion questions, and solid information about each of the original thirteen...
8th - 11th Social Studies & History
English Settlements: Religious Diversity
There is so much to know about early British colonists in the Americas. This resource describes the religious diversity and acts found among the first colonies in New England. Starting with an in-depth discussion on the Mayflower...
10th - 12th Social Studies & History
Immigrants' Experiences in Colonial North Carolina
What would it have been like to leave your birthplace and move to a brand new colony? Why might you have decided to move? What challenges would you have faced? As part of a series of lessons about the history of North Carolina, middle...
6th - 8th English Language Arts CCSS: Designed |
Scoliosis is a spinal condition which can affect babies, children, adolescents and adults. It causes the spine to curve into an S shape, which can sometimes make the body look uneven or cause back pain. While some babies are born with the condition, scoliosis can start to appear at any age and usually presents between 10 and 15 years of age. It is thought that three or four children out of every 1,000 in the UK have the condition and require treatment.
Scoliosis is a spinal condition which causes the spine to twist and have an abnormal curvature. In many cases it is not serious and may not require treatment, but scoliosis can also cause back pain in adults, and the curve could possibly get worse over time.
The causes are not fully understood, and the majority of cases are defined as idiopathic when the cause cannot be identified or prevented. Sometimes it runs in families or could be caused by a genetic condition. Other types of less common forms include:
- Neuromuscular scoliosis – caused by a nerve or muscle condition such as cerebral palsy
- Congenital scoliosis – when the bones in the spine don’t form properly in the womb
- Degenerative scoliosis – wear and tear of the spine which occurs in old age
Although scoliosis affects people of all ages, it is most common in children aged 11-15. The condition can be present at birth, or can develop as the spine grows, and sometimes children need specific treatment to stop the curve getting worse until they stop growing. However, most people can live normal lives and the condition doesn’t affect physical activity except in extreme cases. There are no other health problems associated with scoliosis and usually it doesn’t cause recurring pain.
Symptoms of scoliosis
How do you know if you have scoliosis? If you think your spine is curved in an ‘S’ or ‘C’ shape then you may have the condition. Here are some other signs:
- Uneven shoulders
- Visible curves in the spine
- Leaning to one side
- Uneven hips
- Ribs sticking out on one side
- One shoulder or one hip sticking out
If you are experiencing back pain along with any of these signs, you should see a GP who will be able to diagnose scoliosis. An X-ray scan will be carried out so doctors can view the spine, and if it has an abnormal curve they can see how severe the curve is. A chiropractor can also refer you for a scan if they suspect scoliosis. If diagnosed you can see a specialist who can discuss treatment options available if needed.
Treatment depends on how severe the curve is and if it is likely to get worse. Adults may require pain relief in the form of spinal injections or even surgery. Toddlers, children and teenagers may be given a back brace to wear to control the growth of the spine.
For more information, visit http://www.sauk.org.uk/
Chapter 7 Connections
Connections are used in R in the sense of Chambers (1998) and Ripley (2001), a set of functions to replace the use of file names by a flexible interface to file-like objects.
7.1 Types of connections
The most familiar type of connection will be a file, and file connections are created by function file. File connections can (if the OS will allow it for the particular file) be opened for reading or writing or appending, in text or binary mode. In fact, files can be opened for both reading and writing, and R keeps a separate file position for reading and writing.
Note that by default a connection is not opened when it is created. The rule is that a function using a connection should open the connection (if needed) when it is not already open, and close it after use if it opened it. In brief, leave the connection in the state you found it in. There are generic functions open and close with methods to explicitly open and close connections.
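As a small illustration of this rule (the file name ex1.txt is arbitrary), a connection can be created without being opened, opened explicitly, used, and closed again:

```r
con <- file("ex1.txt")            # creates the connection, but does not open it
print(isOpen(con))                # FALSE: not yet open
open(con, "w")                    # explicitly open for writing
writeLines(c("line 1", "line 2"), con)
close(con)                        # leave the connection as we found it
lines <- readLines("ex1.txt")     # readLines opens and closes the file itself
unlink("ex1.txt")                 # remove the scratch file
```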
Files compressed via the algorithm used by gzip can be used as connections created by the function gzfile, whereas files compressed by bzip2 can be used via bzfile.
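A compressed connection can be used in much the same way as a plain file connection; a brief sketch (the file name ex.gz is arbitrary):

```r
zz <- gzfile("ex.gz", "w")               # open a gzip-compressed connection for writing
writeLines(c("alpha", "beta", "gamma"), zz)
close(zz)
lines <- readLines(gzfile("ex.gz"))      # reading decompresses transparently
unlink("ex.gz")
```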
Unix programmers are used to dealing with the special files stdin, stdout and stderr. These exist as terminal connections in R. They may be normal files, but they might also refer to input from and output to a GUI console. (Even with the standard Unix R interface, stdin refers to the lines submitted from readline rather than a file.)
The three terminal connections are always open, and cannot be opened or closed.
stdout and stderr are conventionally used for normal output and error messages respectively. They may normally go to the same place, but whereas normal output can be re-directed by a call to sink, error output is sent to stderr unless re-directed by sink(type = "message"). Note carefully the language used here: the connections cannot be re-directed, but output can be sent to other connections.
Text connections are another source of input. They allow R character vectors to be read as if the lines were being read from a text file. A text connection is created and opened by a call to textConnection, which copies the current contents of the character vector to an internal buffer at the time of creation.
Text connections can also be used to capture R output to a character vector. textConnection can be asked to create a new character object or append to an existing one, in both cases in the user’s workspace. The connection is opened by the call to textConnection, and at all times the complete lines output to the connection are available in the R object. Closing the connection writes any remaining output to a final element of the character vector.
Pipes are a special form of file that connects to another process, and pipe connections are created by the function pipe. Opening a pipe connection for writing (it makes no sense to append to a pipe) runs an OS command, and connects its standard input to whatever R then writes to that connection. Conversely, opening a pipe connection for input runs an OS command and makes its standard output available for R input from that connection.
Sockets can also be used as connections via function socketConnection on platforms which support Berkeley-like sockets (most Unix systems, Linux and Windows). Sockets can be written to or read from, and both client and server sockets can be used.
7.2 Output to connections
We have described functions such as cat and sink as writing to a file, possibly appending to a file if argument append = TRUE, and this is what they did prior to R version 1.2.0.
The current behaviour is equivalent, but what actually happens is that when the file argument is a character string, a file connection is opened (for writing or appending) and closed again at the end of the function call. If we want to repeatedly write to the same file, it is more efficient to explicitly declare and open the connection, and pass the connection object to each call to an output function. This also makes it possible to write to pipes, which was implemented earlier in a limited way via the syntax file = "|cmd" (which can still be used).
There is a function writeLines to write complete text lines to a connection.
Some simple examples are
zz <- file("ex.data", "w")  # open an output file connection
cat("TITLE extra line", "2 3 5 7", "", "11 13 17", file = zz, sep = "\n")
cat("One more line\n", file = zz)
close(zz)

## convert decimal point to comma in output, using a pipe (Unix)
## both R strings and (probably) the shell need \ doubled
zz <- pipe(paste("sed s/\\\\./,/ >", "outfile"), "w")
cat(format(round(rnorm(100), 4)), sep = "\n", file = zz)
close(zz)
## now look at the output file:
file.show("outfile", delete.file = TRUE)

## capture R output: use examples from help(lm)
zz <- textConnection("ex.lm.out", "w")
sink(zz)
example(lm, prompt.echo = "> ")
sink()
close(zz)
## now 'ex.lm.out' contains the output for further processing.
## Look at it by, e.g.,
cat(ex.lm.out, sep = "\n")
7.3 Input from connections
The basic functions to read from connections are scan and readLines. These take a character string argument and open a file connection for the duration of the function call, but explicitly opening a file connection allows a file to be read sequentially in different formats.
Other functions that call scan can also make use of connections, in particular read.table.
Some simple examples are
## read in file created in last examples
readLines("ex.data")
unlink("ex.data")
## read listing of current directory (Unix)
readLines(pipe("ls -1"))

# remove trailing commas from an input file.
# Suppose we are given a file 'data' containing
450, 390, 467, 654,  30, 542,
334, 432, 421, 357, 497, 493,
550, 549, 467, 575, 578, 342,
446, 547, 534, 495, 979, 479
# Then read this by
scan(pipe("sed -e s/,$// data"), sep=",")
For convenience, if the file argument specifies a FTP, HTTP or HTTPS URL, the URL is opened for reading via url. Specifying files via ‘file://foo.bar’ is also allowed.
C programmers may be familiar with the ungetc function to push back a character onto a text input stream. R connections have the same idea in a more powerful way, in that an (essentially) arbitrary number of lines of text can be pushed back onto a connection via a call to pushBack.
Pushbacks operate as a stack, so a read request first uses each line from the most recently pushed-back text, then those from earlier pushbacks and finally reads from the connection itself. Once a pushed-back line is read completely, it is cleared. The number of pending lines pushed back can be found via a call to pushBackLength.
A simple example will show the idea.
> zz <- textConnection(LETTERS)
> readLines(zz, 2)
[1] "A" "B"
> scan(zz, "", 4)
Read 4 items
[1] "C" "D" "E" "F"
> pushBack(c("aa", "bb"), zz)
> scan(zz, "", 4)
Read 4 items
[1] "aa" "bb" "G" "H"
> close(zz)
Pushback is only available for connections opened for input in text mode.
7.4 Listing and manipulating connections
A summary of all the connections currently opened by the user can be found by showConnections(), and a summary of all connections, including closed and terminal connections, by showConnections(all = TRUE).
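For example (the file name ex.data is arbitrary):

```r
zz <- file("ex.data", "w")            # open one user connection
print(showConnections())             # lists only connections opened by the user
cons <- showConnections(all = TRUE)  # also includes stdin, stdout, stderr and
                                     # any closed connections
close(zz)
unlink("ex.data")
```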
The generic function seek can be used to read and (on some connections) reset the current position for reading or writing. Unfortunately it depends on OS facilities which may be unreliable (e.g. with text files under Windows). Function isSeekable reports if seek can change the position on the connection given by its argument.
Function truncate can be used to truncate a file opened for writing at its current position. It works only for file connections, and is not implemented on all platforms.
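A minimal sketch of these functions on an ordinary file connection, opened in binary mode so that positions are reliable byte offsets (the file name ex.data is arbitrary):

```r
zz <- file("ex.data", "w+b")          # open for both reading and writing, binary mode
writeChar("abcdef", zz, eos = NULL)   # six bytes, no terminating byte
print(isSeekable(zz))                 # TRUE for an ordinary file
seek(zz, 0, rw = "read")              # rewind the read position
chars <- readChar(zz, 3)              # reads "abc"
seek(zz, 3, rw = "write")             # move the write position
truncate(zz)                          # discard everything after the first 3 bytes
close(zz)
sz <- file.info("ex.data")$size       # the file is now 3 bytes long
unlink("ex.data")
```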
7.5 Binary connections
Functions readBin and writeBin read to and write from binary connections. A connection is opened in binary mode by appending "b" to the mode specification, that is using mode "rb" for reading, and mode "wb" or "ab" (where appropriate) for writing. The functions have arguments
readBin(con, what, n = 1, size = NA, signed = TRUE, endian = .Platform$endian)
writeBin(object, con, size = NA, endian = .Platform$endian)
In each case con is a connection which will be opened if necessary for the duration of the call, and if a character string is given it is assumed to specify a file name.
It is slightly simpler to describe writing, so we will do that first. object should be an atomic vector object, that is a vector of mode numeric, integer, logical, character, complex or raw, without attributes. By default this is written to the file as a stream of bytes exactly as it is represented in memory.
readBin reads a stream of bytes from the file and interprets them as a vector of mode given by what. This can be either an object of the appropriate mode (e.g. what = integer()) or a character string describing the mode (one of the five given in the previous paragraph or "raw"). Argument n specifies the maximum number of vector elements to read from the connection: if fewer are available a shorter vector will be returned. Argument signed allows 1-byte and 2-byte integers to be read as signed (the default) or unsigned integers.
The remaining two arguments are used to write or read data for interchange with another program or another platform. By default binary data is transferred directly from memory to the connection or vice versa. This will not suffice if the data are to be transferred to a machine with a different architecture, but between almost all R platforms the only change needed is that of byte-order. Common PCs (‘ix86’-based and ‘x86_64’-based machines), Compaq Alpha and Vaxen are little-endian, whereas Sun Sparc, mc680x0 series, IBM R6000, SGI and most others are big-endian. (Network byte-order, as used by XDR (eXternal Data Representation), is big-endian.) To transfer to or from other programs we may need to do more, for example to read 16-bit integers or write single-precision real numbers. This can be done using the size argument, which (usually) allows sizes 1, 2, 4, 8 for integers and logicals, and sizes 4, 8 and perhaps 12 or 16 for reals. Transferring at different sizes can lose precision, and should not be attempted for vectors containing NAs.
Character strings are read and written in C format, that is as a string of bytes terminated by a zero byte. Functions readChar and writeChar provide greater flexibility.
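Putting these pieces together, a simple round trip through a binary file might look like this (the file name ex.bin is arbitrary):

```r
zz <- file("ex.bin", "wb")                     # open in binary mode for writing
writeBin(1:3, zz)                              # three 4-byte integers, native byte-order
writeBin(pi, zz, endian = "big")               # one 8-byte double, big-endian
writeChar("abc", zz, eos = NULL)               # three bytes, no terminating byte
close(zz)

zz <- file("ex.bin", "rb")                     # reopen for reading
ints <- readBin(zz, integer(), n = 3)          # 1 2 3
x <- readBin(zz, numeric(), n = 1, endian = "big")
s <- readChar(zz, 3)                           # "abc"
close(zz)
unlink("ex.bin")
```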
7.5.1 Special values
Functions readBin and writeBin will pass missing and special values, although this should not be attempted if a size change is involved.
The missing value for R logical and integer types is INT_MIN, the smallest representable int defined in the C header limits.h, normally corresponding to the bit pattern 0x80000000.
The representation of the special values for R numeric and complex types is machine-dependent, and possibly also compiler-dependent. The simplest way to make use of them is to link an external application against the standalone Rmath library which exports double constants NA_REAL, R_PosInf and R_NegInf, and include the header Rmath.h which defines the macros ISNAN and R_FINITE.
If that is not possible, on all current platforms IEC 60559 (aka IEEE 754) arithmetic is used, so standard C facilities can be used to test for or set Inf, -Inf and NaN values. On such platforms NA is represented by the NaN value with low-word 0x7a2 (1954 in decimal).
Character missing values are written as NA, and there is no provision to recognize character values as missing (as this can be done by re-assigning them once read).
Native American Lit/Culture
Standard letter grades
Contact hours total
Introduction to traditional oral and contemporary Native American texts with an emphasis on cultural contexts and continuity. Considers Native American works in their national, historical, cultural, geographical, political, and legal contexts.
1. Explain broad features of the history and experience of Native Americans in the United States and situate individual texts within that history.
2. Identify and analyze common and/or recurring themes in Native American literature and culture (such as generational conflict; the American Dream; cultural identity; the journey motif; accommodation and assimilation; conflict with dominant American culture and ethics; and/or media representations).
3. Analyze and interpret Native American texts representing a range of geographical origins, cultural traditions, and experiences.
4. Apply disciplinary knowledge specific to the Humanities (including textual analysis and close-reading practices) to the treatment of events, issues, and/or ideas and identify how that knowledge differs from the approach of another academic discipline.
5. Synthesize multiple viewpoints and perspectives—including one’s own—in order to critically analyze values, ethics, and other relevant topics within a range of human experience and expression.
Native American history and culture*
• Pre contact
• First contact
• Tribal histories
• Trail of Tears
• Indian Boarding Schools
• Urban Indians
Oral traditions and myths*
• Trickster tales
• Animal tales
• Cultural hero tales
• Origin emergence stories
Literature by Native American writers*
• Short stories
• Non-fiction texts (essays, memoirs, scholarship)
* Multimodal works by Native peoples (music, dance, film)
This course may require one or more books or textbooks.
This course may be assessed through quizzes on readings and videos, informal and formal writing projects, short essays, midterm exam, and final exam; may include discussion, research project, and in-class presentation.
General education/Related instruction lists
- Arts and Letters
- Cultural Literacy |
In India, the debate around this issue dates back to the framing of the constitution itself. An interesting point in this regard is that in the Indian context the doctrine of separation of powers was never given constitutional status, meaning that it is nowhere explicitly stated in the constitution. However, the constitution was framed with the doctrine of separation of powers in mind.
Since ours is a parliamentary system of governance, although the framers of the constitution made an effort to keep the organs of government separate from each other, a great deal of overlap and combination of powers has been given to each organ.
The legislative and executive wings are closely connected with each other: the executive is responsible to the legislature for its actions and derives its powers from the legislature. The head of the executive is the President, but a closer look shows that he is only a nominal head and the real power rests with the Prime Minister and his Cabinet of Ministers, as provided in Article 74(1). In certain situations, the President can exercise judicial and legislative functions, for example while issuing ordinances. The judiciary too performs administrative and legislative functions.
The Parliament too may perform judicial functions; for example, if a President is to be impeached, both Houses of Parliament must take an active participatory role. Thus all three organs act as checks and balances on each other and work in coordination and cooperation to make our parliamentary system of governance work. Thus, in India, the doctrine of separation of powers is not upheld in the strictest sense; rather, it is applied very flexibly.
Judicial Activism and Judicial Review
It is important to note that the separation of powers is still an important guiding principle of the constitution. Most noteworthy is our judicial system, which is completely independent from the executive and the legislature. As regards the judges, they are extremely well protected by the Constitution: their conduct is not open to discussion in Parliament, and their appointment can only be made by the President in consultation with the Chief Justice of India and the judges of the Supreme Court.
The interpretational and supervisory role of the Judiciary over the Legislature is called Judicial Review. The judiciary is the final authority for the interpretation of the constitution in India. If the Legislature transgresses the powers given to it by the constitution, the Judiciary can prevent it by declaring the act or action ultra vires. This power is called Judicial Review. This was held in the landmark case of Indira Gandhi v. Raj Narain.
Judicial Activism, by contrast, describes how actively and readily the judiciary performs the act of Judicial Review. The readiness that the courts have shown in exercising this power to uphold the values of the constitution has meant that Judicial Review has gradually acquired the form of Judicial Activism in India. Judicial Activism is the extent, the vigour and the readiness with which courts exercise their power of Judicial Review; so there is a marked difference between the two. Courts have actively performed an interventionist role, and we have witnessed the phenomenon of Judicial Activism. The courts have relaxed, or at least liberalized, the concept of locus standi to allow any public-spirited person or organisation to bring to the notice of the court any matter of injustice or violation of the constitutional rights of the downtrodden and underprivileged classes of society.
The court has expanded the scope and amplitude of Article 21 to cover many basic rights under it, thereby giving them the status of fundamental rights so that they can be enforced against the state, even by PIL. Another factor which contributed to Judicial Activism was the expansive judicial interpretation placed on the expression "life" in Article 21.
Bone of Contention
Through various judgments, the Supreme Court has ruled that while laws may take away the power of Judicial Review vested in the Judiciary at other levels — several laws can take away the power of the High Courts — no law can be justified which would encroach upon the Supreme Court's power of Judicial Review. If any law does so, the Doctrine of Colourable Legislation would be enforced in such a situation, and the law so made would become void, wholly or to that extent, by virtue of Article 13 of the Indian Constitution.
It is observed that when the Legislature tries to validate a law declared by a court to be illegal or invalid, the cause of the invalidity must be removed before such validation. Such validation can be said to take place effectively only if the inconsistent or derogatory part is removed. It is unjust to abridge, or attempt to hinder, the powers vested in the Judiciary as part of the basic structure. Thereby any discrepancy caused by the Legislature cannot be justified by inserting any clause that is illegal.
The debate on this issue evolved through many landmark cases. The doctrine was cemented as part of the basic structure of the constitution in the Kesavananda Bharati case. The main question was whether the Parliament had unrestricted amending powers under Article 368 of the constitution, and how much could actually be amended. The Supreme Court held that the amending power of the Parliament was subject to the basic structure of the Constitution, and any amendment which tampered with the basic structure would be unconstitutional. In this judgment, the separation of powers doctrine was included in the basic structure of the constitution, and thus any amendment which gave one organ control over another would be unconstitutional, leaving the Executive, the Legislature and the Judiciary independent. It must be kept in mind, though, that in India the separation of powers doctrine is not followed extremely rigidly.
In a recent case, Delhi Development Authority v. M/s UEE Electricals Engg. Pvt. Ltd, the Supreme Court's ruling sought to clarify the meaning and objective of judicial review as a protection, not an instrument for undue interference in executive functions. The Supreme Court observed: “One can conveniently classify under three heads the grounds on which administrative action is subject to control by judicial review. The first ground is ‘illegality’, the second ‘irrationality’ and the third ‘procedural impropriety’.”
Say you wanted to put in a well in a small community that needed water, in an economically depressed part of the world. You did your homework on how a well could be built for sustainable, good quality water. You gathered the resources, knew the well would not dry up nearby springs because you understood the local hydrogeology, and had a pump design that could be supported by the community and repaired if broken in the future. You set up a system for the community to periodically test the water quality, ensuring it would remain safe. Sounds perfect, right?
But what if that well was placed in the backyard of someone in the community that everyone disliked? You may have just started a local water war that could last generations.
There are a multitude of ways that a well-meaning hydrophilanthropic person or group, with clear objectives and an eagerness to improve the human condition, can make things worse. Rather than alleviate suffering, imprudent actions can reduce the quality of community life, and individual health and safety. Plunging unawares into one of countless pitfalls is surprisingly easy, particularly for those lacking a holistic view and having little background in water projects.
Perspectives and Strategies for Getting It Right – Some Suggestions
Water can help both communities and ecosystems by propelling agriculture and economic growth, ensuring improvements in health, reducing work absenteeism, increasing opportunities for childhood education (which is sometimes gender-biased against girls who gather water), and combating the effects of drought and climate change.
As many readers know, statistics suggest that the water and sanitation crisis will expand. Estimates related to water scarcity indicate that approximately 3.4 million people, over half of them children, die from lack of clean drinking water each year, and 748 million people do not have access to clean water. In addition to direct mortality from thirst and waterborne disease, mortality could be even greater because of cascading effects, such as death from malnutrition caused by water shortages to agriculture and herd animals. Sanitation statistics are even worse, with the World Bank (2014) estimating that 2.5 billion people do not have access to improved sanitation and 1 billion practice open defecation.
Here are a few suggestions for the hydrophilanthropist wanting to make a positive impact on communities and ecosystems:
- Take a long term view. Studies support the idea that fewer, sustainable water and sanitation developments are more beneficial than numerous, short-lived developments which can be neglected and fall into disrepair.
- Ensure follow up and sustainability. Make sure there are local resources, expertise, and educational programs for continued project viability. Consider regular sustainability audits.
- Plan a post-project monitoring program, with community-based stewardship. Try to anticipate multiple outcomes for your project, and think about future, mid-course corrections (create Plan B, and maybe C and D).
- Ensure that those who use water or sanitation facilities will be physically safe and secure. This may involve carefully planning the location of and access to facilities.
- Also when considering location, make sure the science and engineering are right. Get experts and/or facilitators involved. Wells that are too close to pit privies can cause disease. Wells that are hydrologically connected to springs or other wells could dry them up. A borehole in the wrong geologic media could have an adverse water quality that could poison people, domestic animals, and wildlife.
- Seek engineering designs for wells and other facilities that are the appropriate technology for the community. Sometimes low-tech is the right tech.
- Include community education for two reasons. First, so outside people wishing to help a village can be educated by the community to understand local needs, cultural wealth and values, existing local resources, economic goals, religious appreciations, and identify gaps in human and physical resources. Second, outside people can work with locals to bring in additional educational resources, help establish household education and action plans, and explore possible connections and collaborations with nearby schools and universities.
- Work with the local community to understand the social and moral norms. Research culture and traditions prior to your arrival, and ensure language translation accurately communicates ideas. It is not advantageous to impose pre-conceived values on a community. Pre-planning and pre-construction site visits can be key to establishing trust, understanding, and rapport.
- Be aware of the political landscape. Be cognizant of the impacts of any local policies and laws, note any corruption and unrest, appreciate positive community resources, and determine how hydrophilanthropic work might best fit in.
- Know the impacted population’s goals for economic development and specifically how a water or sanitation development can strategically help boost economic growth. Think about how your project fits into a livelihoods-based approach to community development.
- Seek experienced project leadership for hydrophilanthropic efforts. Water and sanitation efforts with strong leadership, mentoring, technical expertise, and continuing communication among practitioners and stakeholders typically have built-in resilience, underlying confidence, and the perspective provided by that experience.
Keeping some of these principles in mind can direct hydrophilanthropy in a positive way and increase the efficacy of water and sanitation projects.
More information can be found at: http://specialpapers.gsapubs.org/content/early/2016/03/07/2016.2520_19.abstract |
Sometimes, listening to my 5-year-old chatter away with her friends can be exhausting :) but I simply love how much they have to say and how imaginative they are when telling stories. Cooperative fine motor activities let them work together to build their own collaborative creative masterpieces.
WHAT YOU’LL NEED: White paper, colored pencils or crayons
WHAT TO DO: Start with a blank piece of paper and some colored pencils or crayons, and let the story begin! Instruct one child to draw a picture of an object on the paper. Give suggestions such as a person, flower, animal, sun, or house. The second child then makes one addition to the picture that expands on what the first child has drawn. The children continue to take turns back and forth, building on the picture and adding to the story until they have a finished product!
HOW TO CHANGE IT UP:
-Provide prompting, such as “what details could you add to the house?” or “what else do you see in your yard?”
-Once finished, the children can tell a story about their picture to a parent, friends, or classmates.
-Give the kids a starting point by drawing the first object on the page.
-Give the children a theme topic such as zoo, farm, or school.
SKILL AREAS ADDRESSED: cognitive skills, social skills, fine motor skills, visual motor integration, prewriting skills
Have you played with magnets before?
You might remember trying to make things stick together or move an object just by using a magnet. A magnet is an object that attracts or repels other objects within a magnetic field. Magnets can be permanent or temporary and vary in size, shape or in the strength of its magnetic field. Some magnets are more powerful than others.
During olden times, people thought magnets were magical. Imagine being able to move an object without touching it! Children who discover magnets for the first time are also surprised by how magnets work. Magnets are easy to use and fun to play with, but they are also found in many of the utensils and tools that we have at home. At home, your mom doesn’t use glue or tape to stick notes to the refrigerator door; she uses little magnets to attach them. How cool is that?
How do magnets work?
What do magnets have that makes objects stick to them? Every magnet is surrounded by an invisible region of force that makes other objects react to it. This powerful force is called the magnetic field. Inside the magnet, tiny particles called electrons are constantly moving, and when many of them move in the same direction, their combined motion creates the magnetic field around the magnet. Because of this, magnets have the ability to draw certain objects towards themselves. This ability is called magnetism.
If two magnets are close together, try figuring out which ends tend to meet. If you look closely, you’ll see that unlike poles attract each other, while identical poles repel each other. If you place the south pole of a magnet beside the north pole of another magnet, they will stick together. On the other hand, putting two magnets with both north poles facing each other will force them apart.
Which objects are attracted to magnets?
Magnets can attract all things made of iron. Objects that are made out of other metals like nickel can also stick to magnets, although non-metallic objects like glass, cloth and paper cannot be attracted to magnets.
Do you know that you can also make temporary magnets out of everyday tools? Observe how dangling a permanent magnet on top of a bunch of iron nails automatically pulls the nails towards it. The invisible field exerts a pull on the nails, drawing them to the magnet. Objects that are surrounded by a magnetic field can become magnetized for a time and can then attract other objects. If you attach a nail to a permanent magnet, the nail itself is caught within the magnetic field and acts as a magnet, pulling other nails towards it. Sometimes the magnetism does not disappear right away, and an object can retain it long after being magnetized. You can test this by running a permanent magnet over an iron bar or nail several times, then leaving it be and seeing if it can attract other nails.
How do we use magnets in everyday life?
Magnets are everywhere! You may not see it or feel it, but nearly everything that works around you uses the magnetic field. When you close the refrigerator door, the way it sticks to the fridge is because of magnets. The microwave oven where you cook your popcorn, the electric fan you use to keep off the heat, even the computer – all use magnets to function!
Magnets are even present in devices we use to enjoy music: without magnets, you wouldn’t be able to use your earphones or speakers. Magnets are also useful in medical equipment and electronics. Nearly all appliances that use motor engines use magnets to make them work.
The Earth is actually a huge magnet. Can you believe it? Our planet has both North and South poles, which act within the Earth’s magnetic field. Similarly, all magnets have two poles: north and south. The magnetic fields are strongest at the poles, and the ends will point towards its poles. Try hanging a bar magnet in the air, and see how the north end of the magnet follows the direction towards the Earth’s North pole, and the south end of the magnet faces the South pole. This is why the compass that we bring and use during hiking uses a magnet to show us the way we need to go.
Now that we have learned about magnets and magnetism, we can appreciate how valuable they are in our daily lives. Science helps us recognize how the world truly works, so if you wish to understand more about science, just go to www.sciencewithme.com and browse our site.
According to Karani Nyamu, the youth population is increasing explosively, particularly in developing countries, as a result of rapid urbanization. This increase is bringing a large number of social and economic problems. Take Kenya, for example: the impacts of job and training availability, and of the physical, social and cultural quality of the urban environment, on young people are enormous, and affect their health, lifestyles, and well-being (Gleeson and Sipe 2006). Besides this, globalization and technological developments are affecting youth in urban areas in all parts of the world, both positively and negatively (Robertson 1995).
Karani Nyamu says that, according to Idowu Michael, more than ever before, technology has transformed the way the younger generation communicates and accesses information. Two major assumptions underlie the role of ICT: the first is that the proliferation of these technologies is causing rapid transformations in all areas of life; the second is that ICT functions to unify and standardize culture. It is on the basis of these assumptions that the term "information age and globalization" evolved.
Karani Nyamu identified the following as the major categories of both positive and negative impacts of ICT on youths:
According to an in-depth evaluation of the impact of ICT on youth published in the 2003 World Youth Report prepared by the United Nations, ICT has changed the way young people interact socially, as digital communication has increasingly replaced traditional forms of interaction. ICT offers youth autonomy from families with access to vast virtual social networks that provide more instantly-gratifying, but less personal interactions.
Some research, including a Swedish study published in a 2007 issue of the Journal of Computers in Human Behaviour, highlights the potential negative impacts of ICT on youths. Such studies tend to conclude that a high quantity of ICT use is a risk factor for developing psychological health problems among youths.
Education and Empowerment
ICT also offers opportunities for youth empowerment and education, particularly in societies where resources are limited. Research has shown that youths in various locations can use ICT to maintain cultures, gain knowledge, develop skills and generate income. According to the 2005 World Youth Report section on youth in civil society, "ICT is increasingly being used to improve access to education and employment opportunities, which supports efforts to eradicate poverty."
ICT has also helped greatly in communication for the disabled, who now have the opportunity to communicate through electronic communication boards and specialized computer software.
It has made the world a global village, letting us know what happens around us. It has drastically reduced time and distance.
ICT has helped us communicate en masse; that is, it has aided mass communication through e-mails and e-newsletters sent to large audiences.
It allows users to participate in a world in which work and other activities can be accessed through a variety of technologies.
ICT has led to unemployment, since machines can now do what people do, and faster than human beings; through this, many people have lost their jobs.
ICT has also contributed to the erosion of local culture. The youth of nowadays – take dressing and greeting, for example – now follow Western ways, thereby pushing their own culture aside.
Plagiarism: How do I correctly cite my sources? Examples
Plagiarism, defined by the Oxford English Dictionary, is "the action or practice of taking someone else's work, idea, [words, cartoon, graph, chart, PowerPoint] etc., and passing it off as one's own."
Bibliographies: A bibliography or list of references provided at the conclusion of your paper informs the reader about the materials you consulted for your research but does not sufficiently acknowledge where you acquired the specific information that you discuss and refer to within the text of your paper.
Documenting sources: To avoid plagiarizing, you must give credit to those authors and sources from which you obtained information or ideas. Consequently, along with the bibliography, you will need to document quotations, text which you reword or rephrase, and summaries of text or ideas that you incorporate into your own work. In writing your paper, should you include information from a book, article, or website without acknowledging the original material, you may be accused of PLAGIARISM.
A "Parenthetical Reference" is one method of documenting a source of information. The examples of parenthetical references provided below follow the MLA style. However, should your instructor require you to use footnotes or endnotes instead of parenthetical references or require you to follow a style of documentation other than MLA, you can refer to the appropriate manual for instructions. Consistency in following the rules for a particular style is important. For additional examples of parenthetical references, refer to the MLA or APA style manuals. Other style manuals such as Chicago and Turabian provide examples of footnotes and endnotes for documentation.
- Quoting: When you quote a source or refer to specific statements or ideas in another document, you will mark the text with quotation marks and cite the author’s last name and the page number(s) of the source in parentheses at the end of the quoted text.
Example: "The purpose of parenthetical references is to document a source briefly, clearly, and accurately" (Trimmer 10).
In the example above, Trimmer is the author of the text that is quoted; the quote is taken from page 10 of that text. The parenthetical reference briefly documents the quote and refers the reader to the bibliography for the complete citation.
- Paraphrasing another's words or text without specifically quoting also requires documentation. Paraphrase means that you are rewording the text in your own words but maintaining the meaning of the original text.
Example: Trimmer states that if you mention the author's name in your report after referencing that author's ideas, you need only give the page number(s) of the source in parentheses (10).
In this example, Trimmer's name is included in the text, so only the page number needs to appear at the end of the paraphrased material. This brief parenthetical reference provides enough information about the source to lead the reader to the bibliography for the complete citation.
- Summarizing a text or idea: At times you will want to refer to the entire book or article in your paper or you may decide to summarize an author's ideas. In this situation you do not need to use quotation marks, but you will need to acknowledge the source within the text or at the end of the sentence or paragraph with a parenthetical reference.
Example: Throughout Trimmer's 1984 guide to the new MLA style, he repeatedly emphasizes the necessity of complete and thorough documentation.
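As a rough illustration of the three cases above (quoting, paraphrasing, and summarizing with the author named in the text), the MLA author-page rule can be sketched in a few lines of Python. The function name and signature here are hypothetical, invented for this example rather than taken from any real citation library:

```python
def mla_parenthetical(author=None, page=None, author_named_in_text=False):
    """Build an MLA-style parenthetical reference.

    MLA style puts the author's last name and the page number together,
    with no comma between them. If the author is already named in the
    sentence, only the page number appears in parentheses.
    """
    parts = []
    if author and not author_named_in_text:
        parts.append(author)
    if page is not None:
        parts.append(str(page))
    return "(" + " ".join(parts) + ")"


# Quoting or summarizing without naming the author in the sentence:
print(mla_parenthetical(author="Trimmer", page=10))           # (Trimmer 10)

# Paraphrasing after naming the author in the sentence itself:
print(mla_parenthetical(page=10, author_named_in_text=True))  # (10)
```

This is only a sketch of the author-page pattern; footnote-based styles such as Chicago or Turabian follow different rules and are not covered by it.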
American Psychological Association. Publication Manual of the American Psychological Association. 5th ed. Washington, DC: APA, 2001.
Gibaldi, Joseph. The MLA Style Manual and Guide to Scholarly Publishing. New York: Modern Language Association of America, 1998.
"Plagiarism." Oxford English Dictionary. 2006 Draft Revision. England: Oxford University Press, 2007. Saint Michael's College Library, Colchester, VT. 28 May 2007.
Trimmer, Joseph H. A Guide to the New MLA Documentation Style. Boston: Houghton Mifflin, 1984.
Turabian, Kate L. A Manual for Writers of Term Papers, Theses, and Dissertations. 6th ed. Chicago: University of Chicago Press, 1996.
University of Chicago Press. The Chicago Manual of Style. 15th ed. Chicago: University of Chicago Press, 1993.
CTAHR Cooperative Extension Service
“To be a successful farmer one must first know the nature of the soil.”
— Xenophon, Oeconomicus, 400 B.C.
The importance of soils to crops was recognized long ago, and knowing one's soil continues to be an essential part of being a successful farmer today. It is quite remarkable that the Hawaiian Islands, a small group of islands over 2,500 miles from the nearest continent and geologically very young, have soils belonging to all of the 11 soil orders known. Because rainfall has such an impact on soil development, and Hawai‘i's rainfall can be so variable from one part of an island to another, the soils can vary within a very short distance. Knowing and understanding the soil type and its characteristics is essential to managing soil fertility and minimizing nutrient waste.
Agricultural Diagnostic Service Center
Get your soil tested by CTAHR.
Soil Rx for Soils and Crops
Deenik Soils Information
Hawai‘i Soil Surveys
Soil and Crop Management
Free downloadable fact sheets in pdf format.
Soil Management Collaborative Research Support Program (SM CRSP)
Improving agroecosystem performance by addressing soil nitrogen, soil phosphorus, soil acidity, soil water and soil degradation constraints through an integrated nutrient management approach. This USAID project began in 1997.
Soil Management for Maui County
An estimated 39% of all energy consumption in the U.S. comes from the production of electricity to power homes and businesses. A great deal of this energy consumption pollutes our air and water, and it creates hazardous wastes that require disposal. Solar panels help eliminate this pollution by capturing the energy of the sun and using that energy to power common household devices such as lights and heaters. Understanding the effects of traditional energy sources like coal and oil helps people understand the ways that solar panels help to protect the environment.
Reducing Air Pollution
Traditional sources of electricity, such as coal and oil, emit byproducts including carbon dioxide, sulfur dioxide, nitrogen dioxides, particulate dust and mercury. Each of these byproducts is associated with known environmental challenges, including global climate change, acid rain, smog and contaminated fisheries, according to the Union of Concerned Scientists. Solar panels can reduce this pollution by lowering the energy needs of every dwelling they power. Even if solar panels are used only to power household lighting, applied across a number of homes they could lead to a significant decrease in the emission of dangerous byproducts.
Reducing Water Pollution
Coal and nuclear power sources also create environmental challenges along waterways. An estimated 72% of all toxic water pollution in the U.S. is derived from coal-based electricity production, which releases arsenic, selenium, boron, cadmium and mercury into waterways. A good deal of this pollution can be prevented by requiring new technological filtration, but only one out of five coal-based power plants in the U.S. uses such technology, according to The Sierra Club environmental organization. The U.S. Nuclear Regulatory Commission also reports that a nuclear isotope known as tritium is commonly released into groundwater supplies by nuclear power plants, in addition to large amounts of warm, low-oxygen water released back into local rivers. By reducing the need for power from such plants, solar panels can help curb these environmental contaminants.
Reducing Hazardous Waste
The burning process in coal- and oil-based power production creates byproducts such as coal ash and oil sludge, which contain dangerous amounts of metals, according to the U.S. Environmental Protection Agency. Much of this waste is taken to landfills or hazardous waste disposal sites where it is stored for many decades, but an estimated 42% of the coal plant waste ponds and landfills where this waste is disposed do not have a protective lining to prevent the waste from leaching into the environment, according to the Union of Concerned Scientists. Solar panels help reduce this waste by lowering the amount of energy these coal- and oil-based energy plants need to produce.
Reducing Resource Mining
Some of the most profound environmental challenges associated with coal-based power plants occur before power is even produced, when the coal is mined. An estimated 60% of coal mined in the United States comes from surface mining, which is a process that removes the entire top of a mountain in order to mine the coal below. Through this practice more than 300,000 acres of forest and 1,000 miles of streams have been destroyed, according to the Union of Concerned Scientists. Even more energy is spent transporting the mined coal to power plants that burn the coal to produce electricity. Solar panels can help reduce the need for coal-based energy, which in turn will lower the demand for surface-mined coal.
Literary terms can be challenging for English language learners, who may feel that mastery of reading and writing skills depends mostly on understanding sentence structure. This guide to basic literary terms will help students begin exploring different writing techniques and become familiar with common stylistic tools that will serve them in academic English settings or on English tests such as the TOEFL or Cambridge exams.
allegory
Definition: Allegory is a style of writing that makes a connection between events, people and places that goes beyond the literal story.
Example: In Animal Farm by George Orwell, the animals and the organization of the farm are an allegory for society in general.
tone
Definition: The voice or 'feeling' that an author gives to his or her writing. The author's tone may be ironic, sarcastic, witty, light, serious, and so on.
Example: The howling wind continued to blow as John made his way down the dark lane with only a flashlight to show the way. - In this example, the author's tone could be called 'eerie', 'frightening' or 'spooky'.
autobiography
Definition: A book about someone's life written by that person.
Example: Lee Iacocca's autobiography has provided inspiration for thousands of hard working business people.
biography
Definition: A book about someone's life written by another person.
Example: Steve Jobs's biography shows both his genius and his vengeful character.
epilogue
Definition: A short chapter or selection at the end of a story or other written work.
Example: Many popular books that are republished many times include epilogues which update the reader on the current thoughts of the author.
fiction
Definition: A story that is not true.
Example: Novels, epic tales and science fiction are all examples of fiction. They inspire us to think and entertain, but do not relate factual stories.
figure of speech
Definition: Words or phrases that have a different meaning than what the words literally mean. Idioms, metaphors and similes are all figures of speech.
Example: Every cat has nine lives.
genre
Definition: The type of written work. There is a wide range of genres, including science fiction, historical fiction, instructional manuals, comedy, poetry, horror, etc.
Example: The genres I most prefer reading are historical and science fiction, as well as autobiographies.
idiom
Definition: A phrase used in everyday speech that is not literal in meaning. Idioms can also be called figures of speech.
Example: I got a letter from Tim out of the blue. (out of the blue = unexpectedly)
imagery
Definition: The use of descriptive language to create a scene in the reader's mind.
Example: She opened the creaking door silently and entered the damp, dusty room that looked like something out of the middle ages.
inference
Definition: An educated guess about something, based on facts and other information known about it.
Example: The author made a number of inferences as to the likely whereabouts of the criminal based on evidence about his movements around the country.
innuendo
Definition: Indirectly stating something negative about another person.
Example: The fact that the senator received most of his donations from corporations might explain why they've had little problem with regulations.
irony
Definition: The use of words to convey a meaning that is the opposite of what is literally true. Authors also use ironic situations to show that something has turned out quite unexpectedly.
Example: Fancy meeting you at the party! (A character might say this to a person who had told him that he couldn't attend a party)
main idea
Definition: The central point of a text; the main idea is also referred to as the thesis in essays or the topic sentence in a paragraph.
Example: (In the first paragraph of an essay) Parents must strive to help children strike a balance between study and play.
metaphor
Definition: A word or phrase used to convey representative or symbolic information about something else, but which is not literally true.
Example: My daughter's room is a disaster area.
motive
Definition: The reason a character does something.
Example: The main character seemed to desire wealth and status over everything.
neutral tone
Definition: A tone that is neither biased for nor against a subject.
Example: Most news stories strive to take a neutral tone. However, editorials are generally biased for or against the subject of the article.
non-fiction
Definition: Written work that is true, or strives to reflect reality.
Example: Scientific research might be speculative in nature, but it is certainly non-fiction.
persuasive writing
Definition: Persuasive writing tries to convince the reader of the author's point of view, either for or against something.
Example: (from an essay) In this essay, you will discover why it is essential to fund arts programs for our children.
plot
Definition: The plot is the general storyline of a written, filmed or theatrical work.
Example: The plot becomes confusing when you discover that the hero used to be a double agent.
point of view
Definition: Point of view refers to how a story is narrated. First person point of view uses I, me, my, etc., while third person point of view employs he, she, her, him, and so on.
Example: (first person narration) I've lived in this town a long time. Come with me, and I'll show you both the good and the bad.
prologue
Definition: An introduction at the beginning of a written or performed theatrical work.
Example: The prologue provides the background essential to understanding what happens during the course of the novel.
quotation
Definition: A word-for-word phrase or sentence, placed in quotation marks, that comes from another person.
Example: Thomas stated, "I don't understand why you find him so attractive."
sarcasm
Definition: Similar to irony, sarcasm states the opposite of what someone feels in order to criticize the actions of another.
Example: That was SO helpful Anna. Thank you for your contribution. (the character feels Anna's information was not helpful)
sequence
Definition: The order in which things occur in a story or other written work.
Example: First, second, third, next, then, finally, etc.
setting
Definition: The place in which a story takes place.
Example: The setting of the novel is mid-19th-century New York, in the homes of the upper classes.
simile
Definition: A comparison of two things using 'as' or 'like'.
Example: She felt as free as a bird.
topic sentence
Definition: The main idea of a paragraph.
Example: The ease with which students learn new vocabulary indicates how successful they will be on tests.
Vitreous body: functions, structure, diseases
To understand what functions the vitreous body performs, it is necessary to understand its role in the visual system. This anatomical structure is located behind the lens of the eyeball. On the outside, the vitreous body of the eye is bounded by a thin membrane, while internally it is divided into tracts (channels).
If you look closely at how the eye works, you can see that the vitreous humor makes up most of the contents of the eyeball. At the front it is in contact with the ciliary body, and at the back with the optic nerve disc. In humans, the vitreous body contributes to the full maturation of the retina and its adequate blood supply. It has no vessels or nerves. Its gel-like consistency is maintained by one-way osmosis of nutrients from the fluid produced inside the eye. The vitreous humor has low bactericidal activity, so leukocytes and antibodies are detected in it not immediately after infection, but only after a while.
The section of ophthalmology on the anatomy of the eye gives a detailed idea of the volume of the vitreous. It turns out to be no more than 4 ml, of which more than 99% is water. Thanks to this liquid filling, the volume of the eyeball remains constant.
How it is formed
The formation of this gel-like substance occurs in the early stages of intrauterine development. The initial function of the vitreous body was to supply the lens and the anterior segment through the hyaloid artery. After the lens of the fetus is completely formed, this vessel disappears with time, and the child is born without it. But, as is known, every rule has exceptions: in some cases the hyaloid artery is found in adults in the form of transformed strands of various sizes.
What it is needed for
The main function of the vitreous is to conduct the intraocular fluid produced by the ciliary body. Part of this fluid enters from the posterior chamber, passing directly to the vessels of the retina and the optic disc. In front of the vitreous body there is a small depression corresponding to the location of the back of the lens. It is this semi-liquid substance that guarantees a strong connection with the membranes of the eye (the ciliary epithelium and the internal limiting membrane).
In addition, because the vitreous body retains its shape even under load, it helps hold the eye's membranes snugly in place and keeps a detachment from spreading further. Microcavities often form in it as a result of retinal tears, which in turn contributes to the development of retinal detachment in the future.
How it changes with age
If you examine the vitreous body of an adult eye, changes in its structure become noticeable. In newborns this substance is a homogeneous gel-like mass, but over the years it degenerates. As a person matures, individual molecular chains coalesce into larger compounds, and the gel-like mass gradually transforms into an aqueous solution containing clumps of molecular compounds. These changes are reflected in the quality of vision: a person sees these floating clusters as flickering points, or "flies". At the final stage of this process, the vitreous body becomes turbid and peels away from the retina, which is manifested by an increase in the amount of floating material. In itself this change does not pose a significant threat, but in isolated cases it may lead to retinal detachment.
What role it plays in vision
The vitreous begins to fulfill all of its functions from the moment a person is born. The physiological purpose of this part of the eyeball is as follows:
- Due to the absolute transparency of its gel-like fluid, light rays penetrate directly to the surface of the retina.
- Due to the unique structure of the vitreous body, intraocular pressure indicators remain stable, which is crucial for the implementation of metabolic processes and the normal functioning of the visual organ.
- The vitreous ensures the optimal positioning of the retina and the lens.
- In the case of sudden movements or trauma, the gel-like substance compensates for differences in intraocular pressure.
- The spherical shape of the eye is the "merit" of the vitreous.
Diseases that can occur
Clouding of this semi-liquid structure can proceed in different ways. In most cases, pathological changes occur behind the cornea and the lens, producing opacity in the anterior vitreous. In other cases, the changes occur in the central part of the organ, or the two are combined.
All diseases of the vitreous are conventionally divided into congenital and acquired. The first group includes the following pathologies:
- Presence of remnants of the embryonic (hyaloid) artery that supplied the lens in the womb.
- Persistence of the primary vitreous.
With age, the development of a number of pathological phenomena and diseases of the vitreous humor is possible. These include:
- liquefaction of its consistency;
- hernial formation;
- hemophthalmos (hemorrhage).
Often patients are diagnosed with inflammation of the vitreous body of the eyeball: endophthalmitis or panophthalmitis. A rarer phenomenon is posterior detachment of the substance, in which the membrane tears at its attachment points. As the pathology progresses, the vitreous spreads between the retina and the posterior hyaloid membrane, which leads to a rapid decrease in visual acuity.
How it is manifested
Speaking of the symptoms that worry patients with diseases of the vitreous structure of the eye, it is worth noting that they manifest, as a rule, as floating point opacities. Patients see blots, threads and "flies" of various sizes. Noticeable impairment of vision and pain in the eyes, by contrast, most often occur with hemorrhage into, or inflammation of, the vitreous.
When vitreous function declines, the patient may not notice any symptoms for a long time. Even so, the likelihood that the disease will eventually cause a deterioration in sight is quite high.
Causes of vitreous disorders
Disturbances in the functioning of the visual system can be provoked by nervous strain, constant stress, and the impairment of visual function that comes with age-related changes. In the treatment of vitreous pathologies, the most important thing is regular monitoring by an ophthalmologist and periodic comprehensive examination. Only a qualified specialist can prescribe competent treatment for the problem.
Patients over 40 years of age are considered to be at risk for diseases of the vitreous structure of the eye. If vision problems appear at an earlier age, a person needs to reconsider their lifestyle and, if possible, eliminate provoking factors.
What destruction of the vitreous is
Destruction is a breakdown of the vitreous that produces very pronounced symptoms. The filling substance becomes cloudy, which the patient perceives as floating debris: villi, strips, dots, nodules. Destruction of the vitreous body is most often caused by disturbances of the blood supply to this zone, endocrine diseases, eye and head injuries, and stress. Age, of course, also plays a role.
Destruction is characterized by chaotic opacities, and the resulting visual disturbances can appear anywhere in the patient's field of vision. As the vitreous structure breaks down, moving transparent inclusions with clear boundaries appear. They do not stay in one place but move with the gaze. Visual function does not otherwise suffer, so destruction is treated only rarely, when there is critical deterioration.
To date, therapy involves splitting the areas of turbidity with a laser. It is important to note that any surgical intervention on the vitreous can cause complications.
The risk of detachment and hemorrhage
In both cases there is a risk of vision loss, so either pathology should be taken seriously. With detachment, short-term flashes, glare, lightning streaks or black dots appear before the eyes. The separation of the vitreous humor itself is safe for the patient, and intervention can be avoided when the symptoms are mild and indistinct. But if no medical measures are taken, a decline in visual function is inevitable.
In addition, cases of hemorrhage into the vitreous humor are well known in ophthalmology. Even if the condition causes no discomfort, the patient should visit a specialist regularly. Repeated episodes of hemorrhage can lead to loss of vision, so the treating doctor's primary task is to prevent relapses and preserve the vitreous body.
To detect vitreous pathology, ophthalmologists perform the following types of diagnostic studies:
- Visometry is the "standard" procedure for determining a patient's visual acuity. Everyone has undergone such an examination: using charts and posters in sufficient light, the optometrist checks the visual function of the right and left eyes.
- Biomicroscopy allows to assess the condition of the anterior part of the vitreous body under a microscope.
- Ophthalmoscopy is designed to detect changes in the posterior part of the vitreous humor.
- Optical coherence tomography reveals retinal pathology such as detachment.
- Ultrasound: a detailed study of the state of the eyeballs.
Before beginning treatment of any disease of the vitreous of the eye, it is important to accurately differentiate it from other pathologies by the type of detected changes of a degenerative or inflammatory nature.
What scientists offer
When disorders have been diagnosed, patients may be advised to undergo surgical treatment of the vitreous. This operation is called vitrectomy. After the gel-like fluid is removed, the cavity is filled with an artificial substance of similar physical characteristics.
To date, ophthalmologists have developed methods for cultivating hyalocytes synthetically. These are planned for use in creating a substitute for a vitreous that has changed its structure. The analogue should be free of the drawbacks of the silicone fluid that is implanted in patients after vitrectomy today.
Note: This lesson was originally published on an older version of The Learning Network; the link to the related Times article will take you to a page on the old site.
Teaching ideas based on New York Times content.
Overview of Lesson Plan: In this lesson, students consider the educational value of video games by examining what books and video games have in common and debating whether playing video games leads to improved literacy skills.
Amanda Christy Brown, The New York Times Learning Network
Suggested Time Allowance: one or two class periods.
Activities and Procedures:
1. WARM-UP/DO NOW: Show students the photo accompanying the article “Literacy Debate: Online, R U Really Reading?”, which is the first article in the Times series “The Future of Reading,” at www.nytimes.com/2008/07/27/books/27reading.html. Ask, “What are the people in this photo doing?” After a brief discussion, provide the information from the photo caption, which indicates that the family members are all reading. If any students guessed that the children might be playing video games, ask them why they thought so.
Then ask, What does it mean to “read”? What kinds of things in addition to words can be “read”? Can playing video games be considered “reading”? Why or why not? Do video games and books have anything in common?
2. ARTICLE QUESTIONS: As a class, read and discuss the article “Using Video Games as Bait to Hook Readers” by Motoko Rich ( www.nytimes.com/learning/teachers/featured_articles/20081009thursday.html), focusing on the following questions:
a. Do you agree or disagree with PJ Haarsma’s statement that “You can’t just make a book anymore”?
b. According to the article, what do books and video games have in common, and how are they different?
c. How might video games serve as a “gateway drug for literacy”?
d. What skills are essential to literacy? Which of these do video games foster?
e. What useful skills might video games teach that books do not?
3. ACTIVITY: Tell students they will be engaging in an instant debate about whether or not video games might lead to increased literacy skills. Divide the class into two equal groups without regard for any individual’s position on the topic. (If your class is large, you may wish to assign multiple smaller groups to each position.) Assign each group one of the following positions:
– “Yes, playing video games leads to improved literacy skills.”
– “No, playing video games does not lead to improved literacy skills.”
Using responses to the last two article questions as starting points, have students spend 10 minutes in their groups preparing a single student “representative” to present on their assigned position for three minutes. Remind them to mine the article for material and examples to use in their argument. They can record their pro and con evidence on the writable Debatable Issues PDF handout at graphics8.nytimes.com/learning/teachers/studentactivity/DebatableIssues.pdf.
Let them know that students in the same group can collaborate in order to use their presenting time wisely and avoid repetition. If students have access to computers, they might also look at readers’ responses to the article for ideas: community.nytimes.com/article/comments/2008/10/06/books/06games.html.
While each group is presenting, have the other students take notes about how they would like to rebut their opposition’s argument. After each group has presented, have the groups come together again for five minutes to prepare another presenter to give a two-minute rebuttal, by a different group “representative.” Once their preparation time is up, have the new presenters give their rebuttal arguments.
When all presentations are complete, bring the class together for a wrap-up discussion. Did anyone change their personal position on whether or not video games might lead to increased literacy during the debate? If so, what changed their minds? If their position stayed the same, was it strengthened by any of the arguments presented during the debate? Were there any arguments left out of the debate that could have made one position more persuasive?
4. FOR HOMEWORK OR FUTURE CLASSES: Individually, students explore one of the game links that accompany the article at www.nytimes.com/2008/10/06/books/06games.html: “The Softwire’s Rings of Orbis,” “The 39 Clues” or “Vroengard Academy.” In a one-page paper or journal entry, they reflect on this experience by answering the following questions:
– Which game did you choose to explore and why?
– Which, if any, of the literacy skills discussed in class did you use as you played the game?
– Are you intrigued by the world of the game enough to read the book? Why or why not?
– Do you think this game will inspire other people to read? Why or why not?
Related Times Resources:
- ADDITIONAL TIMES ARTICLES AND MULTIMEDIA:
“Literacy Debate: Online, R U Really Reading?”
The first in a series of articles entitled “The Future of Reading,” looking at how the Internet and other technological and social forces are changing the way people read.
Image: “The New Readers”
A graphic showing how online reading builds and expands upon traditional reading skills.
Web Extra: “Further Reading on Reading”
A collection of articles and links about the changing face of readers and reading.
- LEARNING NETWORK RESOURCES:
Lesson Plan: Game On
Predicting Future Trends in Gaming
Lesson Plan: Flight of the Imagination
Creating a Plan for a Fantasy Video Game
Teaching With The Times: Literature
A collection of materials for complementing classroom curriculum in literature.
Teaching With The Times: Language and Usage
A collection of materials for complementing classroom curriculum in language skill development.
- ARCHIVAL TIMES MATERIALS:
“School Computers Emulate Games To Capture the Attention of Pupils”
Historical article from May 25, 1993.
- TIMES TOPICS:
Reading and Writing Skills
Video Games and Consoles
Education and Schools
- OTHER RESOURCES:
“The Softwire’s Rings of Orbis”
The site for the online role-playing game based on PJ Haarsma’s “Softwire” novels.
“The 39 Clues”
Site for the Scholastic game “The 39 Clues,” a complement to the book series of the same name.
Vroengard Academy
A game site affiliated with fantasy books by Christopher Paolini, including “Eragon.”
Math – Read the article “Video Game Helps Math Students Vanquish an Archfiend: Algebra” by Winnie Hu at www.nytimes.com/2008/10/08/nyregion/08video.html. Discuss whether video games can help with numeracy skills as well as literacy skills.
Economics – Watch Rick Riordan discuss “The 39 Clues” on Scholastic’s Web site (scholastic.com/aboutscholastic/mediaroom/opk/39clues/39Cluesvideoaug.html). What does he mean when he talks about a “multiplatform” approach? Why is this an effective marketing strategy? Choose a book you love and create a “multiplatform” approach to market it.
Media Studies – Read the Atlantic Monthly article “Is Google Making Us Stupid?” (www.theatlantic.com/doc/200807/google). Based on this article and the Times “The Future of Reading” series, create a slide presentation about the expanding definition of literacy in the 21st century. Is technology making us smarter? Better writers? Readers? Thinkers? Make sure to include material on whether fluency with video, audio, image-making, social networking, texting and instant messaging is part of what a literate person will have to be able to do in the 21st century. Then, show the presentation to a group of adults, like local parents, a parent-teacher association, or school faculty, and facilitate a discussion on how the school environment might begin to embrace media more fully to teach literacy skills for students growing up in the digital age.
Academic Content Standards:
Grades six to eight and nine to 12.
Language Arts Standard 1 – Uses the general skills and strategies of the writing process.
Language Arts Standard 4 – Gathers and uses information for research purposes.
Language Arts Standard 5 – Uses the general skills and strategies of the reading process.
Language Arts Standard 7 – Uses reading skills and strategies to understand and interpret a variety of informational texts.
Language Arts Standard 8 – Uses listening and speaking strategies for different purposes.
This lesson plan may be used to address the academic standards listed above. These standards are drawn from Content Knowledge: A Compendium of Standards and Benchmarks for K-12 Education; 3rd and 4th Editions and have been provided courtesy of the Mid-continent Research for Education and Learning in Aurora, Colorado.
Higher velocity implies lower pressure. So in a whirlpool the velocity of the water has to decrease with radius in order for there to be a force toward its center.
My question is: how is whirlpool formation explained in simple words?
A fluid moving in a vortex creates a dynamic pressure that is lowest at the center and increases radially ($P \propto r^2$). It is the gradient of this pressure that forces the fluid to rotate around the axis.
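To make the $P \propto r^2$ claim concrete, here is a short derivation assuming a vortex core in solid-body rotation at angular velocity $\Omega$ with constant density $\rho$ (one common model, not the only possible velocity profile). The radial pressure gradient must supply the centripetal acceleration:

$$\frac{dP}{dr} = \rho\,\frac{v^2}{r} = \rho\,\Omega^2 r \quad\Rightarrow\quad P(r) = P_0 + \tfrac{1}{2}\rho\,\Omega^2 r^2,$$

so the pressure is lowest at the center and grows quadratically outward.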
This is usually represented by a vector called vorticity, and defined by $\omega = \nabla \times v$.
In simple terms, this means that the fluid is rotating around a certain point. If you placed a small ball in the flow, you would observe that it would rotate about the center, and the direction of the vorticity vector is given by the right-hand rule.
Vortices form in many ways. For example, in the wake of an engine the air has been given rotational momentum and will continue to have vorticity. Vortices can also form when two opposing flows meet, as in planetary atmospheres: think of tornadoes or Jupiter's Great Red Spot.
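As a small numerical sketch (my own illustration, not part of the original answer), one can check the definition $\omega = \nabla \times v$ on a solid-body rotation field $u = -\Omega y$, $v = \Omega x$, whose vorticity should be $2\Omega$ everywhere:

```python
import numpy as np

# Finite-difference check of vorticity for a solid-body rotation field.
# For u = -Omega*y, v = Omega*x, the z-component of curl(v) is 2*Omega.
Omega = 3.0
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y)           # X varies along axis 1, Y along axis 0
u = -Omega * Y                     # x-component of the velocity field
v = Omega * X                      # y-component of the velocity field

dv_dx = np.gradient(v, x, axis=1)  # partial dv/dx
du_dy = np.gradient(u, y, axis=0)  # partial du/dy
vorticity = dv_dx - du_dy          # z-component of curl(v)

print(vorticity.mean())            # ≈ 2*Omega = 6.0, uniform over the grid
```

Since the field is linear in x and y, the finite differences are exact and the computed vorticity comes out uniform, matching the right-hand-rule picture above.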
Lead Teacher – Mrs Hannah Orton
What is phonics?
Phonics is a way of teaching children to read quickly and skilfully. Children are taught how to recognise the sounds that each individual letter makes; identify the sounds that different combinations of letters make, such as ‘sh’ or ‘oo’; and blend these sounds together from left to right to make a word. Children can then use this knowledge to decode words.
How is phonics taught at Hertford St Andrew?
All phonics in EYFS and KS1 is taught following the Letters and Sounds document. We have adopted the suggested daily teaching sequence set out in ‘Letters and Sounds’; Introduction, Revisit and Review, Teach, Practise, Apply and Assess. From Reception to Year Two children are taught in their classes with additional phonic interventions for those children needing further support. Children in Key Stage 2 requiring further phonics support are taught in small, daily intervention groups.
Teaching is multi-sensory, encompassing simultaneous visual, auditory and kinaesthetic activities to enliven core learning. Phonics is taught in short, briskly paced sessions and then applied to reading and writing. All activities are well matched to the children’s abilities and interests, and all classroom environments have an age appropriate display concentrating on both sounds and key words. At Hertford St Andrew we provide opportunities to reinforce and apply acquired phonic knowledge and skills across the curriculum and in such activities as shared and guided reading and writing.
Phonics Screening Check
Year 1 children will take part in a Phonics Screening Check in June. The check is designed to confirm whether children have learnt phonic decoding to an appropriate standard. It will identify those children who need extra help to improve their decoding skills. The check consists of 20 real words and 20 pseudo-words that a pupil reads aloud to the teacher. Results will be reported to parents.
Critics like to blame the internet for the perceived “dumbing down” of today’s children, but social and commercial agents of change kicked in long before computers began to take up so much of their time.
In his history of child’s play, Howard Chudacoff, a cultural historian at Brown University, observes that before the middle of the 20th century, the usual pattern of children’s play involved roaming the neighborhood more or less unsupervised and making up games. He says, “[Children] improvised their own play; they regulated their play; they made up their own rules.”
In that kind of imaginative play, a tree branch could be whatever the scenario called for: sword, rifle, lance, or walking stick. A cardboard box could be a house, castle, fortress or cave.
Chudacoff points out that during the 1950s, play changed owing to the commercialization and co-opting of play by manufacturers who supplied children with specific toys for specific play scenarios. Instead of using a tree branch for a variety of games, children were provided with plastic light sabers to play Star Wars and ready-made tracks and roads to play with toy cars.
Chudacoff also acknowledges the changes to children’s play patterns caused by concerns about safety. Neighborhoods became less safe as drug use spread to a wider portion of the population. Parents looked for supervised play environments where children could be safe.
Not surprisingly, such radical changes in the way children play have affected the way children think and behave. The loss of unfettered imaginative play has affected the cognitive and emotional development of children. Today’s children are lacking in a cognitive skill called executive function.
A central element of executive function is the ability to control the emotions, resist impulses, and exert self-control.
One reason that today’s children do not meet academic expectations may be related to the fact that their ability to self-regulate has declined dramatically during the past six decades.
In the late 1940s, psychologists conducted a study whose purpose was to test the self-regulation of children aged 3, 5, and 7. For one of the exercises, the children were asked to stand perfectly still. The three-year-olds could not stand still at all. The five-year-olds could stand still for about three minutes. The seven-year-olds could stand still for as long as the researchers asked them to.
A recent study replicated the earlier one. According to psychologist Elena Bodrova at Mid-Continent Research for Education and Learning, the results indicated a steep decline in the ability of children to control themselves.
Like the three-year-olds of the 1940s, today’s five-year-olds could not stand still. Unlike the seven-year-olds of the 1940s who could stand still indefinitely, today’s seven-year-olds could stand still for barely three minutes.
Self-regulation is a crucial skill, not just in the learning process, but for social functioning in general.
Interestingly enough, one of the stated goals of the Common Core Standards is to encourage “student-directed learning.” One can only wonder how that goal can be accomplished with large numbers of children who lack the ability to control their impulses.
This crater features sand dunes and sand sheets on its floor. What are sand sheets? Snowfall on Earth provides a good analogy: when it snows, the ground gets blanketed with up to a few meters of snow. The snow mantles the ground and “mimics” the underlying topography. Sand sheets likewise mantle the ground as a relatively thin deposit.
This kind of environment has been monitored by HiRISE since 2007 to look for movement in the ripples covering the dunes and sheets. This is how scientists who study wind-blown sand can track the amount of sand moving through the area and possibly where the sand came from. Using the present environment is crucial to understanding the past: sand dunes, sheets, and ripples sometimes become preserved as sandstone and contain clues as to how they were deposited.
National Standard: 1 Singing, alone and with others, a varied repertoire of music.
4 - Composing and arranging music within specified guidelines.
6 - Listening to, analyzing and describing music.
7 - Evaluating music and music performances.
Objective: The students will demonstrate their understanding of writing lyrics for a twelve-bar blues pattern by writing two verses of a blues song and then recording themselves singing over a stock blues pattern using Apple's new iLife application, Garage Band.
Materials: A networked keyboard lab with computers and headphones.
Writing Blues Lyrics Student Handout.
Garage Band on each computer.
How To Set Up A Vocal Track instruction sheet.
Word Processing software (Microsoft Word or AppleWorks).
1. Teacher distributes the Writing Blues Lyrics handout.
2. Teacher introduces the concept of writing lyrics for a twelve-bar blues pattern. Teacher reviews the handout with the students to answer any questions that they might have.
3. Students sing the example on the handout while the teacher plays the demo file named “Shufflin’ Blues” which comes with Garage Band. The GEC3 should be set to the Lecture Mode for this part of the lesson.
4. Using the handout and the word processing software, the students will work in groups of two to write two verses to their own blues song. The students should follow the guidelines given in the handout to determine the subject and the number of syllables in their verses. The GEC3 should be set to the Practice Mode for this part of the lesson.
5. Teacher monitors students' progress by using the eavesdropping function of the GEC3.
6. After sufficient time, the students will share their lyrics with the class. The GEC3 should be set to the Lecture Mode so that each group can share their lyrics with the class.
7. Students discuss each set of lyrics to determine how successfully it fulfills the guidelines for writing blues lyrics on the student handout.
8. After the discussion, the teacher will ask the students to open Garage Band on their individual computers. Once the application is open, the students will open the demo file entitled “Shufflin’ Blues”.
9. Teacher will then review the instructions for creating a new vocal track on Garage Band with the students. The GEC3 should be set to Lecture Mode for this part of the lesson.
10. Students will then set up a new vocal track and use the microphones on their headsets to record themselves singing their blues lyrics over “Shufflin’ Blues”. The GEC3 should be set to the Practice Mode so that the teacher can monitor student progress.
11. Students should be given ample time to make as many takes as necessary to record their “best” performance.
12. After sufficient time students will perform their recordings for the class for discussion. The GEC3 should be set to the Lecture Mode for this part of the lesson.
13. Students should save their recordings on their computers or to a file server.
Extensions: Using the “Export to iTunes” function of Garage Band (in the file menu), students could export their compositions to iTunes so that they can burn their songs onto a CD-R.
Students could create their own blues accompaniment using the sequencing function of Garage Band instead of the demo song. |
Prenatal Checkup
Prenatal diagnosis or prenatal screening (note that "prenatal diagnosis" and "prenatal screening" refer to two different types of tests) is testing for diseases or conditions in a fetus or embryo before it is born. The aim is to detect birth defects such as neural tube defects, Down syndrome, chromosome abnormalities, genetic diseases and other conditions, such as spina bifida, cleft palate, Tay-Sachs disease, sickle cell anemia, thalassemia, cystic fibrosis, muscular dystrophy, and fragile X syndrome. Screening can also be used for prenatal sex discernment. Common testing procedures include amniocentesis, ultrasonography including nuchal translucency ultrasound, serum marker testing, or genetic screening. In some cases, the tests are administered to determine if the fetus will be aborted, though physicians and patients also find it useful to diagnose high-risk pregnancies early so that delivery can be scheduled in a tertiary care hospital where the baby can receive appropriate care.
To gain an understanding of how superconductivity might happen we first need to look at how conventional electrical resistance occurs. According to accepted theory, resistance to current flow occurs because electrons keep bumping into atoms as they flow through a conductor. Thermal activity also plays a role. As a substance increases in temperature its atoms move more vigorously. This movement increases an electron's resistance to flow because it increases the number of collisions. Conversely, as a substance cools, the number of collisions, and hence the resistance, decreases.
The BCS Theory of Superconductivity
So how does superconductivity work? The standard explanation is BCS theory.
An alternative explanation of Superconductivity
Is BCS correct? It is difficult to say, especially as it involves quantum-mechanical explanations that lie outside the boundaries of our real-world (and common-sense) experience. So I'd like to present an easier-to-swallow alternative that fits within classical mechanics.
Thus it is impossible for electrons to move through a compound without experiencing a retarding force, even at absolute zero temperatures. This retarding force should bring the electrons to a halt. Yet the electrons don't slow down; they continue at a constant speed. The only way this is possible is if the electrons are receiving a counter force in their forward direction.
The Duck Hunter
Imagine there lived a duck hunter. This hunter was not very successful because, for some reason, he only shot at ducks that flew directly overhead. The ducks flew at varying speeds, some even hovering occasionally, and as they passed he would fire a shot straight up.
The duck in the left frame is hovering, i.e. moving with zero velocity, and the duck on the right is flying with a constant velocity toward the right. As the bullets pass through they exert force on the ducks. In what direction will these forces point?
Shooting at Electrons
What does this have to do with electrons? In the preceding chapters, electric fields were likened to bullets. Tiny electric field “bullets” emerge from all sides of an electron and travel in straight lines. When these bullets strike another electron they exert a force in the direction of their motion.
The frame on the left shows two electrons standing motionless above and below each other. The frame on the right shows the bottom electron motionless and the top electron moving to the right at a constant velocity. What will the force be on the top electron?
Now suppose a proton was doing the shooting instead. What would the direction of force be on an electron moving overhead? Again we expect the field to flow through it at an angle; however, because this is an opposite charge, it seems logical that the force should be in the opposite direction:
Using vector representation we can split this force into its x and y components:
Notice that most of the force is still in the vertical direction (y axis) but there is now a small force component in the horizontal direction (x axis).
The proton tunnel
Expanding on the above, imagine there are two protons, above and below each other, and the electron is at their midpoint moving horizontally to the right:
The two protons will exert a diagonal force on the electron; however, the forces will not be directly opposite. In the above diagram F1 and F2 are the forces from protons 1 and 2 respectively. Taking the vector components of each force, we notice that the vertical components cancel, but not the horizontal components, because they're in the same direction. The net result is this:
The horizontal forces combine yielding a net force to the right.
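The component algebra above can be sketched numerically. The magnitude F and tilt angle below are hypothetical placeholders (nothing in the text fixes their values); the point is only the bookkeeping, that the vertical components cancel while the horizontal components add:

```python
import math

# Two forces of equal magnitude F, one from the proton above and one from
# the proton below, each tilted forward (toward +x) by a small angle theta.
# This mirrors the vector decomposition described in the text.
F = 1.0                  # arbitrary force magnitude (hypothetical units)
theta = math.radians(5)  # hypothetical forward tilt of each field line

F1 = (F * math.sin(theta),  F * math.cos(theta))   # pull toward proton above
F2 = (F * math.sin(theta), -F * math.cos(theta))   # pull toward proton below
net = (F1[0] + F2[0], F1[1] + F2[1])

print(net)  # vertical parts cancel; horizontal parts sum to 2*F*sin(theta)
```

Whatever values F and theta take, the y-components are equal and opposite, leaving only the forward x-component the text describes.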
As it passes between the first pair of protons, the electron will receive a net force to the right. This will propel it to the next pair of protons, whereupon it will receive another force to the right, propelling it to the next pair, and so on.
It is possible to explain superconductivity within the boundaries of classical mechanics by considering that electrons moving at right angles to protons receive a small force in their direction of motion due to field lines passing through charged particles at an angle.
Copyright © 2012 Bernard Burchell, all rights reserved.
“The learning objectives should be used to guide teaching and learning.”
~ AP Biology CED, pg. 126
About a year ago I was fortunate enough to attend the AP Annual Conference and sit in on a session presented by the AP Biology Development Committee. A key point I took away from the session was that AP Biology Exam questions are written using the language of the learning objectives. The released free response questions and the summary of results for this year’s AP Biology test reinforce this claim.
The subject of biology is vast. AP Biology textbooks are enormous. I celebrate transparency in the new exam. The learning objectives are a guide for not simply what students should know, but for how they need to demonstrate their understanding of the most salient points in the AP Biology curriculum framework.
Here is a quote from Trevor Packer, The College Board’s Head of AP, in a recent correspondence to the AP Biology community.
“To earn a 5, students must learn the course content well enough to be able to perform the skills required in the grid-ins and the free-response section: when confronted with scientific data or evidence illustrative of the required course content, students must be able to ‘calculate,’ ‘predict,’ ‘justify,’ ‘propose,’ ‘explain,’ ‘perform,’ ‘specify,’ ‘identify,’ ‘describe,’ ‘pose a scientific question,’ and ‘state a hypothesis.’ True understanding requires that students develop the depth of understanding required to perform such tasks with accuracy and precision.”
The table below lists a refinement of the “performance statements” used to write the AP Biology learning objectives. Next to each statement is the total number of learning objectives in which that statement appears.
Frequency of Performance Statements in the 149 AP Biology Learning Objectives

| Performance statement | Frequency |
| --- | --- |
| construct, create, describe, refine, or use representations or models to predict, analyze, describe, explain, connect, or pose questions | 39 |
| use evidence to make, construct, or justify a scientific claim, explanation or prediction | 28 |
| describe, explain, or represent connections between concepts | 12 |
| describe or explain how | 12 |
| evaluate or refine evidence, scientific questions, or hypotheses based on data | 12 |
| use or analyze data to predict or explain | 11 |
| predict consequences or effects | 8 |
| describe a process, theory, or example | 7 |
| justify the selection of data | 7 |
| design a plan | 4 |
| generate or pose scientific questions | 4 |
| compare and contrast | 1 |
Clearly, there is a connection between the words in the table above and what Trevor Packer states students must be able to do in order to demonstrate an understanding of the course content and score well on the exam. The AP Biology learning objectives are not specifications as to what students should know, but an indication of the manner in which they will be called upon to demonstrate what they know through application.
In the days following May's exam, there were multiple reports of feelings of consternation as students walked away from an assessment that, in the mind of many of those being assessed, did not provide enough opportunity for factual recall. I can certainly understand the frustration; most students probably had limited experience with this type of assessment. If they had been expecting the old AP Biology exam, it probably felt like a bait-and-switch. The College Board is not to blame. In their defense, I'll reference page 126 in the AP Biology Course and Exam Description. Under the section titled How the Curriculum Framework Is Assessed are seven bulleted statements. Below are two of those bullets. They completely lack ambiguity.
- The exam will assess the application of the science practices.
- Questions on the AP Biology Exam will require a combination of specific knowledge from the concept outline as well as its application through the science practices.
In college, my biology education became completely sidetracked when I developed an interest in the social sciences. Fortunately, I learned a few ideas in my classes on social theory. One concept that comes to mind is that of praxis (not the pre-service teacher assessment coincidentally produced by ETS, makers of AP exams). Here’s how Dictionary.com defines praxis:
“practice, as distinguished from theory; application or use, as of knowledge or skills.”
I’ve always liked that word, “praxis.” I’m really glad that the AP Biology Development Committee does too. The AP Biology learning objectives are praxis.
The importance of the AP Biology learning objectives has been firmly established. Teachers of AP Biology need tools to aid them as they continue to organize and assimilate the objectives into their courses. I’m a visual learner, but I also like to manipulate information in a tangible way; I made lots of flashcards in college. And so—driven by my love for manipulatives—I formatted the 149 AP Biology learning objectives into sheets of equally sized boxes, perfect for cutting into cards. The cards can be downloaded by clicking on the images below.
Here’s a short primer on the layout of the AP Biology learning objectives.
What are Learning Objectives?
Embedded in the AP Biology Curriculum Framework are 149 student Learning Objectives. The objectives are “action” statements indicating tasks students should be able to perform after completing an AP Biology course. Learning Objectives are created by merging the AP Biology Science Practices with the statements of Essential Knowledge.
How Are Learning Objectives Formatted?
Learning objectives are coded to correspond to one of the 4 Big Ideas in the AP Biology curriculum framework. Let’s take a look at the meaning of the codes from learning objective (LO) 1.8 in the diagram below.
The Learning Objective Code tells us which of the four Big Ideas this learning objective is associated with. In this case 1.8 means that this is the eighth learning objective of Big Idea 1: The process of evolution drives the diversity and unity of life.
The Essential Knowledge Code tells us which essential knowledge and enduring understanding this learning objective is connected to. The code EK 1.A.3 ties this learning objective to Essential knowledge 1.A.3: Evolutionary change is also driven by random processes. We can also see that this essential knowledge is rooted in Enduring understanding 1.A: Change in the genetic makeup of a population over time is evolution.
The Science Practice Code indicates which science practice is being utilized in order for the student to demonstrate an understanding of the essential knowledge. The code SP 6.4 refers to Science Practice 6.4: The student can make claims and predictions about natural phenomena based on scientific theories and models.
Latest posts by Jeremy Conn (see all)
- Cell Membrane Bubble Lab Revisited - October 8, 2014
- The Next-Generation Molecular Workbench - September 3, 2013
- The Importance of the AP Biology Learning Objectives - July 15, 2013
The continental crust is made up of sedimentary, igneous and metamorphic rock. Sedimentary rocks are rocks that have been formed by atmospheric pressure, living organisms and gravity. They are created when sediments consolidate over eons.
Igneous rocks are formed by the cooling of magma, found deep in the Earth, or lava flows. Intrusive igneous rocks include granite and tonalite. These rocks work their way to the surface over millions of years. Other rocks result when lava cools on the surface of the Earth. These include rocks such as obsidian and basalt.
Metamorphic rock is rock that has transformed, also over a very long time, into another type of rock. These rocks include marble, which forms from limestone, and gneiss, which forms from schists and muscovite.
The rocks in the continental crust are some of the oldest rocks on Earth; some of them are nearly as old as the planet itself. The continental crust is also made up of 15 tectonic plates that float on the viscous rock at the very top of the Earth's mantle. The movement of these tectonic plates causes continents to come together and split apart over the life of the planet. They create mountain ranges and are responsible for phenomena such as earthquakes and volcanic eruptions.
1 Chapter 3! : Calculations with Chemical Formulas and Equations
2 Anatomy of a Chemical Equation CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g)
3 Anatomy of a Chemical Equation CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g) Reactants appear on the left side of the equation.
4 Anatomy of a Chemical Equation CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g) Products appear on the right side of the equation.
5 Anatomy of a Chemical Equation CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g) The states of the reactants and products are written in parentheses to the right of each compound.
6 Anatomy of a Chemical Equation CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g) Coefficients are inserted to balance the equation.
7 Subscripts and Coefficients Give Different Information Subscripts tell the number of atoms of each element in a molecule
8 Subscripts and Coefficients Give Different Information Subscripts tell the number of atoms of each element in a molecule Coefficients tell the number of molecules (compounds).
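As an illustrative sketch (not part of the slides), a few lines of Python make the distinction concrete: subscripts live inside each formula, coefficients multiply whole formulas, and a balanced equation has identical per-element totals on both sides.

```python
# Illustrative sketch: each formula is a {element: subscript} dict;
# each side of the equation is a list of (coefficient, formula) pairs.

def atom_counts(side):
    """Total atoms of each element on one side of an equation."""
    totals = {}
    for coeff, formula in side:
        for element, subscript in formula.items():
            totals[element] = totals.get(element, 0) + coeff * subscript
    return totals

reactants = [(1, {"C": 1, "H": 4}), (2, {"O": 2})]          # CH4 + 2 O2
products = [(1, {"C": 1, "O": 2}), (2, {"H": 2, "O": 1})]   # CO2 + 2 H2O

print(atom_counts(reactants))                           # {'C': 1, 'H': 4, 'O': 4}
print(atom_counts(reactants) == atom_counts(products))  # True: balanced
```

Changing a subscript would change which substance the formula names; changing a coefficient only changes how many of that substance appear.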
9 Reaction Types!
10 Combination Reactions Two or more substances react to form one product. Examples: N2(g) + 3 H2(g) → 2 NH3(g); C3H6(g) + Br2(l) → C3H6Br2(l); 2 Mg(s) + O2(g) → 2 MgO(s)
11 2 Mg(s) + O2(g) → 2 MgO(s)
12 Decomposition Reactions One substance breaks down into two or more substances. Examples: CaCO3(s) → CaO(s) + CO2(g); 2 KClO3(s) → 2 KCl(s) + 3 O2(g); 2 NaN3(s) → 2 Na(s) + 3 N2(g)
13 Combustion Reactions! Rapid reactions that have oxygen as a reactant and often produce a flame. Most often involve hydrocarbons reacting with oxygen in the air to produce CO2 and H2O. Examples: CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g); C3H8(g) + 5 O2(g) → 3 CO2(g) + 4 H2O(g); 2 H2 + O2 → 2 H2O
14 Formula Weights!
15 The amu unit Defined (since 1961) as 1/12 of the mass of the 12C isotope: 12C = 12 amu
16 Formula Weight (FW)! Sum of the atomic weights for the atoms in a chemical formula. So, the formula weight of calcium chloride, CaCl2, would be Ca: 1(40.1 amu) + Cl: 2(35.5 amu) = 111.1 amu. These are generally reported for ionic compounds.
17 Molecular Weight (MW) Sum of the atomic weights of the atoms in a molecule. For the molecule ethane, C2H6, the molecular weight would be C: 2(12.0 amu) + H: 6(1.0 amu) = 30.0 amu
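This bookkeeping is easy to mechanize. A sketch (not from the deck), using the rounded atomic masses quoted on the slides:

```python
# Formula/molecular weight as a plain sum over a {element: count} mapping.
# Atomic masses are the rounded slide values, in amu.
ATOMIC_WEIGHT_AMU = {"H": 1.0, "C": 12.0, "O": 16.0, "Ca": 40.1, "Cl": 35.5}

def formula_weight(formula):
    """Formula weight in amu for a {element: count} mapping."""
    return sum(ATOMIC_WEIGHT_AMU[el] * n for el, n in formula.items())

print(formula_weight({"Ca": 1, "Cl": 2}))  # CaCl2 -> 111.1 amu
print(formula_weight({"C": 2, "H": 6}))    # C2H6  -> 30.0 amu
```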
18 Percent Composition One can find the percentage of the mass of a compound that comes from each of the elements in the compound by using this equation: % element = [(number of atoms)(atomic weight) / (FW of the compound)] x 100
19 Percent Composition So the percentages of carbon and hydrogen in ethane (C2H6, molecular mass = 30.0) are: %C = (2)(12.0 amu) / (30.0 amu) x 100 = 24.0/30.0 x 100 = 80.0%; %H = (6)(1.0 amu) / (30.0 amu) x 100 = 6.0/30.0 x 100 = 20.0%
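The same formula, applied per element in one pass (a sketch, not from the slides; atomic masses are the rounded values used earlier in the deck):

```python
# Mass percent of each element in a {element: count} formula.
ATOMIC_WEIGHT_AMU = {"H": 1.0, "C": 12.0}

def percent_composition(formula):
    fw = sum(ATOMIC_WEIGHT_AMU[el] * n for el, n in formula.items())
    return {el: 100 * ATOMIC_WEIGHT_AMU[el] * n / fw for el, n in formula.items()}

print(percent_composition({"C": 2, "H": 6}))  # ethane: C 80.0 %, H 20.0 %
```

The percentages always sum to 100, which is a quick sanity check on the arithmetic.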
21 Atomic mass unit and the mole amu definition: 12C = 12 amu. The atomic mass unit is defined this way. 1 amu = 1.6605 x 10^-24 g. How many 12C atoms weigh 12 g? 6.02 x 10^23 atoms of 12C weigh 12 g. Avogadro's number. The mole.
22 Atomic mass unit and the mole amu definition: 12C = 12 amu. 1 amu = 1.6605 x 10^-24 g. How many 12C atoms weigh 12 g? #atoms = (1 atom/12 amu)(1 amu/1.66 x 10^-24 g)(12 g) = 6.02 x 10^23, so 6.02 x 10^23 atoms of 12C weigh 12 g. Avogadro's number. The mole.
23 Therefore: 6.02 x 10^23 atoms of 12C = 1 mole of 12C, and 1 mole of 12C has a mass of 12 g
24 The mole The mole is just a number of things: 1 dozen = 12 things; 1 pair = 2 things; 1 mole = 6.022 x 10^23 things
25 Molar Mass The trick: By definition, this is the mass of 1 mol of a substance (i.e., g/mol). The molar mass of an element is numerically equal to the atomic weight we find for that element on the periodic table. The formula weight (in amu) will be the same number as the molar mass (in g/mol).
26 Using Moles Moles provide a bridge from the molecular scale to the real-world scale. The number of moles corresponds to the number of molecules: 1 mole of any substance has the same number of molecules.
27 Mole Relationships One mole of atoms, ions, or molecules contains Avogadro's number of those particles. One mole of molecules or formula units contains Avogadro's number times the number of atoms or ions of each element in the compound.
28 Finding Empirical Formulas
29 Combustion Analysis gives % composition CnHnOn + O2 → n CO2 + (n/2) H2O. Compounds containing C, H and O are routinely analyzed through combustion in a chamber like this. %C is determined from the mass of CO2 produced. %H is determined from the mass of H2O produced. %O is determined by difference after the C and H have been determined.
30 Calculating Empirical Formulas One can calculate the empirical formula from the percent composition
31 Calculating Empirical Formulas The compound para-aminobenzoic acid (you may have seen it listed as PABA on your bottle of sunscreen) is composed of carbon (61.31%), hydrogen (5.14%), nitrogen (10.21%), and oxygen (23.33%). Find the empirical formula of PABA.
32 Calculating Empirical Formulas Assuming 100.00 g of para-aminobenzoic acid: C: 61.31 g x (1 mol / 12.01 g) = 5.105 mol C; H: 5.14 g x (1 mol / 1.01 g) = 5.09 mol H; N: 10.21 g x (1 mol / 14.01 g) = 0.7288 mol N; O: 23.33 g x (1 mol / 16.00 g) = 1.458 mol O
33 Calculating Empirical Formulas Calculate the mole ratio by dividing by the smallest number of moles: C: 5.105 / 0.7288 = 7.005 ≈ 7; H: 5.09 / 0.7288 = 6.98 ≈ 7; N: 0.7288 / 0.7288 = 1.000; O: 1.458 / 0.7288 = 2.001 ≈ 2
34 Calculating Empirical Formulas These are the subscripts for the empirical formula: C7H7NO2
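The recipe above (assume a 100 g sample, convert mass percents to moles, divide by the smallest, round to whole subscripts) can be sketched directly; this is an illustration, not a general tool, since real analyses can yield ratios like 1.5 that need scaling rather than rounding.

```python
ATOMIC_WEIGHT = {"C": 12.01, "H": 1.01, "N": 14.01, "O": 16.00}

def empirical_formula(mass_percents):
    """Subscripts from mass percents, assuming a 100 g sample."""
    moles = {el: pct / ATOMIC_WEIGHT[el] for el, pct in mass_percents.items()}
    smallest = min(moles.values())
    return {el: round(n / smallest) for el, n in moles.items()}

paba = {"C": 61.31, "H": 5.14, "N": 10.21, "O": 23.33}
print(empirical_formula(paba))  # {'C': 7, 'H': 7, 'N': 1, 'O': 2} -> C7H7NO2
```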
35 Elemental Analyses Compounds containing other elements are analyzed using methods analogous to those used for C, H and O
36 Stoichiometric Calculations The coefficients in the balanced equation give the ratio of moles of reactants and products
37 Stoichiometric Calculations From the mass of Substance A you can use the ratio of the coefficients of A and B to calculate the mass of Substance B formed (if it's a product) or used (if it's a reactant)
38 Stoichiometric Calculations Example: 10. grams of glucose (C6H12O6) react in a combustion reaction. How many grams of each product are produced? C6H12O6(s) + 6 O2(g) → 6 CO2(g) + 6 H2O(l); 10. g → ? + ? Starting with 10. g of C6H12O6 we calculate the moles of C6H12O6, use the coefficients to find the moles of H2O and CO2, and then turn the moles into grams.
39 Stoichiometric calculations C6H12O6 + 6 O2 → 6 CO2 + 6 H2O. MW: 180 g/mol (glucose), 44 g/mol (CO2), 18 g/mol (H2O). Moles of glucose: 10. g x (1 mol / 180 g) = 0.055 mol; moles of CO2 = moles of H2O = 6(0.055 mol) = 0.33 mol. Grams: CO2: 0.33 mol x 44 g/mol = 15 g; H2O: 0.33 mol x 18 g/mol = 5.9 g
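The three-step recipe (grams of A → moles of A → moles of B via the coefficients → grams of B) is one line of arithmetic per step; a sketch, not from the deck:

```python
def mass_of_b(mass_a_g, mw_a, coeff_a, coeff_b, mw_b):
    """Grams of product B formed from mass_a_g grams of reactant A."""
    moles_a = mass_a_g / mw_a
    moles_b = moles_a * coeff_b / coeff_a  # mole ratio from the coefficients
    return moles_b * mw_b

# C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O, starting from 10. g of glucose
print(mass_of_b(10.0, 180.0, 1, 6, 44.0))  # ~14.7 g CO2 (15 g to 2 sig figs)
print(mass_of_b(10.0, 180.0, 1, 6, 18.0))  # ~6.0 g H2O (slides get 5.9 g by
                                           # rounding the moles first)
```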
40 Limiting Reactants
41 How Many Cookies Can I Make? You can make cookies until you run out of one of the ingredients Once you run out of sugar, you will stop making cookies
42 How Many Cookies Can I Make? In this example the sugar would be the limiting reactant, because it will limit the amount of cookies you can make
43 Limiting Reactants The limiting reactant is the reactant present in the smallest stoichiometric amount. Example: 2 H2 + O2 → 2 H2O
44 Limiting Reactants In the example below, the O2 would be the excess reagent
45 Limiting reagent, example: Soda fizz comes from sodium bicarbonate and citric acid (H3C6H5O7) reacting to make carbon dioxide, sodium citrate (Na3C6H5O7) and water. If 1.0 g of sodium bicarbonate and 1.0 g of citric acid are reacted, which is limiting? How much carbon dioxide is produced? 3 NaHCO3(aq) + H3C6H5O7(aq) → 3 CO2(g) + 3 H2O(l) + Na3C6H5O7(aq). NaHCO3: 1.0 g x (1 mol / 84 g) = 0.012 mol; citric acid: 1.0 g x (1 mol / 192 g) = 0.0052 mol. If citric acid were limiting, 0.0052(3) = 0.016 mol of NaHCO3 would be needed, more than the 0.012 mol available, so NaHCO3 is limiting. CO2 produced: 0.012 mol x 44 g/mol = 0.53 g. Citric acid consumed: 0.012(1/3) = 0.0040 mol, leaving 0.0052 - 0.0040 = 0.0012 mol x 192 g/mol = 0.23 g of citric acid left.
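The test for the limiting reactant generalizes: the species with the smallest moles-available divided by its coefficient runs out first. A sketch (not from the slides), using the soda-fizz numbers:

```python
def limiting_reactant(grams, molar_mass, coeff):
    """All three arguments are dicts keyed by species name."""
    return min(grams, key=lambda s: grams[s] / molar_mass[s] / coeff[s])

grams = {"NaHCO3": 1.0, "H3C6H5O7": 1.0}
molar_mass = {"NaHCO3": 84.0, "H3C6H5O7": 192.0}
coeff = {"NaHCO3": 3, "H3C6H5O7": 1}   # 3 NaHCO3 : 1 citric acid

print(limiting_reactant(grams, molar_mass, coeff))  # NaHCO3
```

Here 0.012 mol / 3 = 0.0040 "reaction units" of bicarbonate versus 0.0052 / 1 for citric acid, so the bicarbonate limits.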
46 Theoretical Yield The theoretical yield is the maximum amount of product that can be made. In other words, it's the amount of product possible from stoichiometry: the perfect reaction. This is different from the actual yield, the amount one actually produces and measures.
47 Percent Yield A comparison of the amount actually obtained to the amount it was possible to make: Percent Yield = (Actual Yield / Theoretical Yield) x 100
48 Example Benzene (C6H6) reacts with bromine to produce bromobenzene (C6H5Br) and hydrobromic acid. If 30. g of benzene reacts with 65 g of bromine and produces 56.7 g of bromobenzene, what is the percent yield of the reaction? C6H6 + Br2 → C6H5Br + HBr. C6H6: 30. g x (1 mol / 78 g) = 0.38 mol; Br2: 65 g x (1 mol / 160. g) = 0.41 mol. The mole ratio is 1:1, so C6H6 is limiting. Theoretical yield: 0.38 mol x (157 g / 1 mol) = 60. g. Percent yield: 56.7 g / 60. g x 100 = 95%
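The percent-yield bookkeeping above is two lines once the limiting reactant is known; a sketch (not from the deck), with the bromobenzene numbers:

```python
def percent_yield(actual_g, theoretical_g):
    return 100.0 * actual_g / theoretical_g

moles_benzene = 30.0 / 78.0            # benzene is limiting (1:1 mole ratio)
theoretical_g = moles_benzene * 157.0  # g of C6H5Br possible, ~60 g
print(round(percent_yield(56.7, theoretical_g)))  # 94 without intermediate
# rounding (the slides round the moles to 2 sig figs and report 95%)
```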
49 Example, one more React 1.5 g of NH3 with 2.75 g of O2. How much NO and H2O is produced? What is left? 4 NH3 + 5 O2 → 4 NO + 6 H2O. NH3: 1.5 g x (1 mol / 17 g) = 0.088 mol; O2: 2.75 g x (1 mol / 32 g) = 0.086 mol. If NH3 were limiting, 0.088(5/4) = 0.11 mol of O2 would be needed, more than is available, so O2 is limiting. NO: 0.086(4/5) = 0.069 mol x 30. g/mol = 2.1 g; H2O: 0.086(6/5) = 0.10 mol x 18 g/mol = 1.8 g. NH3 consumed: 0.069 mol x 17 g/mol = 1.2 g, so 1.5 - 1.2 = 0.3 g of NH3 is left; all 2.75 g of O2 is consumed.
51 Gunpowder reaction 10 KNO3(s) + 3 S(s) + 8 C(s) → 2 K2CO3(s) + 3 K2SO4(s) + 6 CO2(g) + 5 N2(g) Saltpeter, sulfur, charcoal. And heat. What is interesting about this reaction? What kind of reaction is it? What do you think makes it so powerful?
52 Gunpowder reaction KNO3 is the oxidizing agent; S and C are the reducing agents. 10 KNO3(s) + 3 S(s) + 8 C(s) → 2 K2CO3(s) + 3 K2SO4(s) + 6 CO2(g) + 5 N2(g) Saltpeter, sulfur, charcoal. And heat. What is interesting about this reaction? Lots of energy, no oxygen gas needed. What kind of reaction is it? Oxidation-reduction. What do you think makes it so powerful and explosive? It makes a lot of gas!!!!
Known as the earliest surviving set of laws of human civilization, dating back to the end of the 3rd millennium BCE, the Code of Ur-Nammu is generally credited to the Sumerian king Ur-Nammu of Ur (2112–2095 BCE). Unlike the code of Urukagina, some 40 laws of the code of Ur-Nammu remain intact. There is, however, lingering doubt among archaeologists, because the stone stele on which the original laws were inscribed has not been found (yet!), and only clay "copies" from later centuries serve as the source for the code (see sample below). The cuneiform tablets are nonetheless rich in detail and contain the first legal code. Kramer (1971), in his summary, points out that the laws are arguably the first occurrence of eye-for-eye and tooth-for-tooth litigation. There is more than just resemblance to the later social orders. The notable Assyriologist A. R. George (2003), in a critically rich analysis, finds no discrepancies in the historical evidence that Sumerian practices both shaped and are at the root of Judaic, Mandaean Gnostic, Christian and Islamic belief systems. Valek summarizes it as follows,
The Code of Ur-Nammu divided society into two classes: free people and slaves. Slaves usually worked as servants but also as craftsmen. They were owned by their masters, but their legal status was relatively free. They could give evidence in court, get married and own possessions. The code also dealt with the punishment of perpetrators of bodily harm and sexual offences, and regulated soldiers’ relationships with first and second wives. The years in which Ur-Nammu created his code are therefore called “Year in which Ur-Nammu the king put in order the ways from below to above”, and “Year Ur-Nammu made justice in the land”.
The structure of the provisions is simple and causal: if this, then that. This conditional style came to dominate the succeeding Babylonian as well as Abrahamic codes. The code of Ur-Nammu addresses legal matters concerning murder, sexual offences, assault, criminal accusation, marriage, and estates. Some examples include,
If a man commits a murder, that man must be killed.
If a man divorces his first-time wife, he shall pay her one mina of silver.
If a man appears as a witness, but withdraws his oath, he must make payment, to the extent of the value in litigation of the case.
The moral aspect of the code, albeit interesting, requires a discussion of its own; it is not the point of interest here. What is significant in this otherwise little-known code is the emergence, mutation and persistence of practices that stem directly from preceding practices. That is, instead of seeing it as the first known code, we must understand it as an emergent temporal phenomenon. Every thread of practice in the social fabric, in other words, is interwoven with a much older thread. Underneath every practice, there lies another one. To pull on a thread in search of its origin is to rip apart the whole tapestry. In order to make sense of it all, the question is therefore not what comes first, nor who makes whom (the fabric or the thread?); it is to interpret and understand a given thread where it belongs in the larger whole.
Measuring Cloud Heights: The Micropulse Lidar (MPL)
Students use software to collect and sort data from the previous month reflecting cloud heights. They answer questions on the data and discuss with the class. They examine how meteorologists use this information.
6th - 8th Science
Get the Picture - Severe Weather Graphs and Other Visual Representations
Whereas the lesson is an analysis of weather-related data, it can be used in any science class to teach how to review data, graphs, and visual models for pertinent information, and how sometimes these representations help to clarify...
5th - 9th Science
Using Radiosonde Data From a Weather Balloon Launch
Students interpret radiosonde data from a weather balloon launch to distinguish the characteristics of the lower atmosphere. After a brief discussion of the various layers of the atmosphere, students analyze the way pressure and...
7th - 12th Science
Interpreting Data, Facts and Ideas from Informational Texts - A Different Kind of Fuel
Why do we need renewable energy resources? Discuss the energy crisis with your middle and high school classes through interpreting informational texts. An engaging video and a worksheet accompany this plan.
8th - 10th Science CCSS: Adaptable
Fun with Air-Powered Pneumatics
How high did the ball go? Engineering teams build a working pneumatic system that launches a ball into the air. The teams vary the amount of pressure and determine the accompanying height of the ball. An extension of building a device to...
7th - 12th Science CCSS: Designed |
Authored by: James W. Brown
RELATIVE HUMIDITY is a measure of the amount of water in the air. Unlike temperature or distance, relative humidity is measured on a relative scale rather than a linear one. Although this may make relative humidity a little harder to understand, its role in plant health is extremely important, and equipment for greenhouse humidity control makes it possible to keep the relative humidity a little more friendly to the plants.
UNDERSTANDING RELATIVE HUMIDITY
The potential water-holding capacity of air greatly increases as the temperature in the air increases. For example, air at 60ºF (~16ºC) can hold over five times as much water as the same air at 20ºF (~ –7ºC). So, as the warm days of spring or fall cool at night, the air cools to where it reaches the point of saturation—what is called the dew point—and water or frost settles out on automobiles, grass, and the rooftops of our houses. In parts of the world where there is higher relative humidity, this is a common occurrence when there are marked temperature differences between day and night. Many less humid parts of the world have infrequent dew formation, even though there are considerable temperature differences between day and night. The air holds so little water that it does not reach saturation even at the lower nighttime temperatures.
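The temperature dependence described above can be made quantitative. The sketch below is not from the article; it uses the Magnus approximation, a standard meteorological formula for saturation vapor pressure, to compare the air's water-holding capacity at roughly 60°F (15.6°C) and 20°F (−6.7°C), and to show why relative humidity climbs when a greenhouse cools at night while the water content of the air stays the same.

```python
import math

def saturation_vapor_pressure_hpa(temp_c):
    """Magnus approximation (hPa); adequate for ordinary weather temperatures."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(vapor_pressure_hpa, temp_c):
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure_hpa(temp_c)

warm = saturation_vapor_pressure_hpa(15.6)   # ~60 F
cold = saturation_vapor_pressure_hpa(-6.7)   # ~20 F
print(warm / cold)   # roughly 5x more capacity at 60 F than at 20 F

# Same water vapor, falling temperature: RH climbs toward (and past) saturation.
vapor = 0.60 * warm                          # air at 60% RH and 15.6 C
print(relative_humidity(vapor, 5.0))         # well above 100% at 5 C: dew forms
```

Once the computed relative humidity exceeds 100 per cent, the excess water condenses out, which is exactly the dew formation the article describes.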
HOW DOES RELATIVE HUMIDITY AFFECT PLANTS?
Plants not only contain a large proportion of water, they move large volumes of water through their tissues. Although water is used in photosynthesis, most of the water taken in by a plant is used in transpiration. That is, the water is taken in by the roots and evaporated through the leaves into the air. This process cools the plant. The relative humidity in the air can affect the flow of water through the plant: the higher the relative humidity, the more slowly transpiration occurs. If environmental changes that affect the transpiration rate are rapid enough, plant tissue damage can occur.
MEASURING RELATIVE HUMIDITY
Before anything can be done to alter the relative humidity in the greenhouse, it needs to be measured, and the measurement must be entered into the environmental control system of the greenhouse. Many hobby greenhouses don’t have sophisticated enough environmental control systems to measure relative humidity and operate equipment to change it on a schedule that provides adequate control. Although you may be in that situation now, an understanding of what can be done will enable you to do some things now and incorporate additional equipment capabilities in the future. You may also gain an understanding of the reason for some of the plant damage you see in your greenhouse.
NORMAL PLANT OPERATION IN THE GREENHOUSE
As plants grow, they take up water and fertilizer ingredients through the roots and send them up to the rest of the plant. The water is either used in photosynthesis or given off in transpiration. This process occurs over a fairly wide range of temperatures and relative humidity conditions. Tomato plants will operate without damage at relative humidities ranging from 45 per cent up to about 100 per cent. Lower relative humidities can stress the plants by forcing them to spend excessive energy pumping water through their tissue into the air.
Rapid changes in the relative humidity can severely stress a plant. A relative humidity increase or decrease of as little as 20 per cent in a few minutes can cause tissue damage because the plant cannot adapt quickly enough. Rapid decreases in relative humidity can be brought about by suddenly bringing in large volumes of dry outside air for greenhouse cooling purposes.
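A grower with even a basic sensor log could watch for the dangerous swings just described. The sketch below flags any change of 20 per cent or more within a short window; the threshold comes from the paragraph above, while the sample log and function names are hypothetical:

```python
def flag_rapid_rh_change(readings, window_minutes=10, threshold_pct=20.0):
    """Given (minute, rh_percent) samples, return windows in which relative
    humidity rose or fell by at least threshold_pct within window_minutes.

    `readings` is a hypothetical list of (minute, rh) tuples from a sensor log.
    """
    alarms = []
    for i, (t0, rh0) in enumerate(readings):
        for t1, rh1 in readings[i + 1:]:
            if t1 - t0 > window_minutes:
                break  # sample is outside the window; later samples are too
            if abs(rh1 - rh0) >= threshold_pct:
                alarms.append((t0, t1, rh1 - rh0))
                break  # one alarm per starting sample is enough
    return alarms

# A 25-point drop in 5 minutes, e.g. cooling fans pulling in dry outside air:
log = [(0, 85), (5, 60), (10, 58)]
print(flag_rapid_rh_change(log))  # [(0, 5, -25)]
```

A slow 25-point drift over half an hour would not trigger the alarm, matching the article's point that it is the *rate* of change, not the change itself, that damages tissue.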
A drop in greenhouse temperature due to nightfall or sudden cloud cover can quickly bring about an increase in relative humidity. If the plant has been rapidly taking up water, it will continue doing so because any adjustment in plant water uptake occurs slowly. The water taken up after the rise in relative humidity cannot be given off as freely into the air through the leaves and instead may be stuffed into fruit or foliage to an extent that causes cell damage. Some examples of such cell damage will be described later and are shown in some of the accompanying pictures.
Tomato plants will wilt at the top when the sun comes out brightly after three or more days of cloudy weather. During the cloudy days, the plants have slowed the rate at which they take up the water through the roots. On the sunny day, the water needs of the plant are suddenly greater because the temperatures are likely higher and the relative humidity in the surrounding air is often lower. It is easier for the plant to give off water into the air because of the lower relative humidity; due to the higher temperature, the plant needs to give off more water to keep itself cool. The grower needs to help the plant through this adjustment period by both increasing the relative humidity in the greenhouse and making it easier for the plant to take up water by lowering the fertilizer content in the solution being fed. If sufficient adjustments are not made soon enough, plant tissue damage may occur.
Lettuce crops can also experience rapid increases and decreases in relative humidity because of temperature changes and cooling system air exchanges in the greenhouse. Either the rapid increase or the rapid decrease in relative humidity can cause leaf tissue damage in lettuce plants.
TISSUE DAMAGE EXAMPLES
Blossom end rot of tomato fruit occurs when the young, still-expanding cells in the blossom end of the fruit are either overstuffed with water because of excess water in the plant or are collapsed because too much water has been taken out of the fruit by the plant as it slowly adapts to changing environmental conditions. The developing fruit acts like a shock absorber for water conditions in the plant. When the fruit’s limited tolerance capabilities are exceeded, cell damage occurs.
The only cell division that takes place in the developing tomato fruit is in the seeds. All the other cells of the fruit are already present; they simply enlarge and mature. The cell growth process takes place from the stem out toward the blossom end of the fruit. The final stage of cell maturation is the building of a calcium-based cell wall. Mature plant cells have the fortification of that cell wall. Young, still-expanding cells in the blossom end of the fruit may not yet have been fortified with the calcium-containing cell wall.
When environmental conditions cause extra water to be sent to the developing fruit, the mature cells do not accept it because of the fortified cell wall. Less mature cells at the blossom end can accept enough water to over-expand and burst. Conversely, if environmental conditions cause water to be drawn out of the fruit, the mature, fortified cells do not give up their water while the young, expanding cells give up water to the point of cell collapse. The burst or collapsed dead cells show up as a brown or black patch on the blossom end of the fruit. It usually takes 10 to 14 days after the damage occurs for the blossom end rot symptoms to be visible on the fruit.
Tomato leaf cells are all formed by the time the leaf is visible. Small tomato leaves grow by cell expansion, not by cell division. Each tomato leaflet generally expands starting from the base, then out to the mid rib, and then to the end of the leaflet. Secondary expansion also occurs through secondary veins toward the perimeter of the leaflet. The leaf cells closest to the outside edge of the leaflet are the last ones to fully expand and be fortified with a cell wall. They, therefore, are the cells that remain subject to bursting or collapsing for the longest period as the leaf goes through its growth process.
If a water shortage occurs in the leaf because of a sudden drop in relative humidity (or another cause), the outer cells of the leaf and the cells toward the tip of the leaflet are the most likely to run short of water. If the water shortage lasts long enough, water will be extracted from some of the immature cells in areas of the leaf. This can be severe enough to collapse and kill the cells.
When the water supply is reinstated, the remaining living cells continue their growth and expansion process. Because the dead cells do not expand and fill in the area they were to have occupied, there is often tissue distortion around the patch of dead tissue. The living cells expand but the overall leaf expansion does not occur because of the death of the collapsed cells. When this occurs, the damage usually takes place at the outer edge of the tomato leaflets farthest from the plant stem.
CUCUMBER LEAF EXPANSION AND POSSIBLE CELL DAMAGE
All the cells of a cucumber leaf are formed while it is still very small. Because the cucumber leaf forms almost a half circle, leaf expansion progresses a little differently than in the tomato leaf. It progresses out along several veins that radiate from where the leaf blade connects to the leaf petiole. There is a slight secondary expansion out from each main leaf vein toward the adjacent leaf vein.
The more common cell death pattern in cucumber leaves is the death of the cells around the perimeter of the leaf due to the lack of water getting to them and the resulting collapse of the outer few cells of the leaf. This usually involves only a few layers of cells around the outside of the cucumber leaf. In severe situations, a quarter of an inch or more layer of tissue around the leaf margin can be involved.
Severe leaf water shortages that occur in a relatively short period of time in leaves that are still expanding can cause the death of rather large, inverted V-shaped patches of cells between the major veins of the cucumber leaf. This pattern is most likely to occur because there are large numbers of cells in the areas toward the leaf margin that have not yet fully expanded and built cell walls. The uninjured tissue behind the dead cells will continue to grow and have a puckered appearance next to the dead tissue. This can be seen in leaves of the accompanying picture of the top part of a cucumber plant.
In the photo, fruit toward the top of the cucumber plant has aborted. They are drying up from the blossom end toward the stem end of the fruit. While water shortage played a role in the abortion of these fruit, not all aborted cucumber fruit can be explained by a water shortage in the plant. Other factors such as the fruit load on the plant are also possible causes.
LETTUCE TISSUE DAMAGE
Since the leaves of the lettuce plant are the part of the plant that we eat, we don’t want them to be too tough to chew. Lettuce leaves are fairly delicate and therefore can be damaged relatively easily by sudden changes in water availability and the relative humidity in the air.
When something happens to interrupt or slow the flow of water through the lettuce leaves, the cells at the outer margin of the leaf are the ones that get shorted first and most severely. As noted earlier in this article, a sudden drop in the relative humidity in the air in the greenhouse is one of the things that can lead to a shortage of water within the plant. If they collapse and die, there will be a layer of dead cells around the margin of the leaf. The cells toward the middle of the leaf survive and continue to grow. The resulting tissue is puckered and bubbled because its expansion is restricted by the surrounding dead cells.
The Bibb lettuce picture with the puckered leaves shows the type of damage that can occur from a sudden lack of water in the plant. Notice that the outer leaves of the plant are not affected. The cells in those leaves were fully expanded and mature when the stress on the plant took place. The cells in the margins of the younger leaves are the ones that collapsed and died.
When excess water builds up in the lettuce plant, the cells in the leaves at the growing point of the stem can be over-supplied, possibly causing them to burst and die. This is what has happened at the growing point of the Bibb lettuce plant with the open centre. This is called latex tip burn in lettuce. When lettuce plants start to close in over the middle of the plant, excess water is more difficult to dissipate than when the plant is more open. Latex tip burn is, therefore, more likely to occur during the later stages of lettuce growth than during the earlier stages of growth.
Latex tip burn will often take out the whole growing point of the plant. All the cells in the growing point burst, resulting in plant death. The crop is terminated if this happens because normal plant growth will no longer be possible. If the plant is left in the greenhouse to continue to grow, secondary growth will start at the buds at the lower leaves of the plant. The small secondary heads on the plants in the accompanying picture resulted from growth after the terminal bud of the plant was killed by latex tip burn.
Even if the growing point is not completely inactivated by the latex tip burn, the lettuce plant will not grow normally and should be harvested. If there are enough leaves on the plant to be useful, they can be eaten. The rest of the plant should be discarded.
HOW RELATIVE HUMIDITY CAN BE CONTROLLED
The leaf and fruit damage described above can be reduced or eliminated if the rapid changes in the relative humidity in the greenhouse can be slowed enough so the plants can adjust to the changes without tissue damage. An environmental control system capable of tracking and changing the relative humidity in the greenhouse is the ideal solution. Because such a system is relatively expensive for a small greenhouse, many hobby greenhouses have a very limited capability of tracking and modifying the relative humidity in the greenhouse.
Air exchange and heating are generally used to lower the greenhouse relative humidity. An evaporative cooler, a mist system, or a few water sprinklers can be used to increase the moisture in the greenhouse air. The control of the greenhouse relative humidity takes a secondary position to the control of temperature within the greenhouse. If the temperature in the greenhouse is high enough or even too high, heat should not be used to lower the relative humidity in the greenhouse. If the greenhouse is being heated, not much water should be evaporated to increase the greenhouse relative humidity because evaporating water takes additional heat energy. The relative humidity control system must be properly integrated with the heating and cooling systems in the greenhouse to provide the optimum environment for plant growth and development.
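The priority rules just described—temperature first, humidity second—can be sketched as a simple decision function. The setpoints and action names below are illustrative assumptions, not values from the article:

```python
def control_action(temp_c, rh_pct,
                   temp_max=27.0, temp_min=16.0,
                   rh_max=90.0, rh_min=50.0):
    """Pick one action per control cycle, letting temperature control take
    priority over relative-humidity control, as the article recommends."""
    if temp_c > temp_max:
        return "vent"   # cool first, even though venting also dries the air
    if temp_c < temp_min:
        return "heat"   # while heating, avoid evaporating water: it costs heat energy
    if rh_pct > rh_max:
        return "vent"   # air exchange (or heat, if cool) lowers RH
    if rh_pct < rh_min:
        return "mist"   # evaporative cooler, mist, or sprinklers add moisture
    return "hold"

print(control_action(30, 40))  # "vent": too hot, even though the air is dry
print(control_action(20, 45))  # "mist": temperature is fine, air too dry
```

Because each branch returns immediately, humidity equipment never runs when a temperature correction is pending, which is the integration the article calls for.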
If you are not ready to buy a larger, more highly equipped greenhouse at this time, there are a few things you can do to minimize plant damage in the greenhouse due to rapid relative humidity changes. Generally speaking, make efforts to prevent situations that can create rapid changes. For example, on warm, sunny days, do not leave the greenhouse closed up until ten o'clock in the morning, which allows excessive heat to build up. Avoid leaving the greenhouse cooling fans on after the temperature has started to drop in the greenhouse, due either to cloud cover or the onset of evening. Also, avoid leaving the wet wall on all night, which could generate excessive moisture in the form of high relative humidity, potentially causing condensation on the plants.
Last, make a point to stay aware of the environmental changes in your greenhouse. The more closely in tune you are with the temperature and relative humidity changes in it, the better able you will be to make the adjustments needed to minimize plant damage.
Earth's tilt worksheet
for use with the simulation Four seasons - solstices and equinoxes; single view
from the materialworlds Solar System simulations
© materialworlds.com 2002
The Four seasons - solstices and equinoxes; single view simulation shows the Earth orbiting the Sun.
Compared to the size of the orbit, the Sun is magnified 36 times and the Earth 6000 times - not just to make them visible, but also to show which areas of the Earth the Sun illuminates.
The tilt control also helps you see where there's daylight - and adds to the 3D appreciation of the Earth's orbit and spin axis.
The "View every: ¼ hour / day / week" control adjusts the time that elapses between each snapshot of the simulation.
"¼ hour" shows the actual rotation of the Earth - something that is missed with the "day" interval, which shows the same part of the Earth facing the Sun at each snapshot.
The "step" checkbox briefly pauses the simulation between each snapshot - removing any illusion of continuous movement.
You can estimate the length of day or night for a part of the world at a particular time of year by counting the number of ¼ hour intervals it is in light or darkness (with "step" active). If the region you're interested in lies on or near one of the circles of equal latitude displayed (equator, tropic or arctic/antarctic circles) you could instead estimate the proportion of the circle in darkness or light.
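The counting method above is simple arithmetic: each lit snapshot represents a quarter hour of daylight. A minimal sketch (the snapshot flags are hypothetical observations read off the simulation, not part of it):

```python
def daylight_hours(in_light_flags):
    """Each flag is one quarter-hour snapshot of a location:
    True if it is in light, False if in darkness.
    48 lit snapshots out of 96 in a day would mean a 12-hour day."""
    return sum(in_light_flags) * 0.25

# e.g. a mid-latitude summer day: 64 of the 96 quarter-hour snapshots in light
flags = [True] * 64 + [False] * 32
print(daylight_hours(flags))  # 16.0 hours of daylight
```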
1. As the Earth goes around the Sun, what do you notice about the direction of tilt of the Earth's axis of rotation? Is it fixed or does it move - to point the same way towards the Sun (or in some other way)?
2. Set the "view every" time interval to "day". Pause the simulation at the June solstice, switch the viewing interval to ¼ hour and set the simulation to Play. At the June solstice:
a) which part of the world is in constant daylight?
b) which part of the world is in constant darkness?
c) where is the Sun directly overhead at midday?
d) at the North Pole, what angle is the Sun from the horizon?
e) does anywhere in the world have equal days and nights?
3. As the Earth moves to the September equinox, what happens to:
a) day length North of the equator?
b) day length South of the equator?
c) day length at the equator?
4. At the September equinox:
a) what is true about day length anywhere in the world?
b) At the North and South Poles, where is the Sun in the sky?
5. What happens as the Earth moves from the September equinox to the December solstice?
6. What happens from the December solstice through to the June solstice? |
The third of the four principles of multicellular systems is that much of the communication between cooperating entities (cells, social insects or computers) is indirect and distributed. The entities deposit long-lived cues in external structures -- connective tissue, termite mounds, or databases as the case may be. For example, chess pieces on a chess board structure the actions of chess players, who interact with each other by changing the locations of the pieces. The external persistent information helps to organize the collective behavior of the cells, insects, computers or people. The term stigmergy was coined in the 1950s to name these sorts of reciprocal relationships between social insects and the stigmergy structures that they build, e.g., termite mounds (see photo below), ant hills, beehives and even the pheromone-marked ant trails of nomadic social ant species such as army ants. More recently, the idea has been adopted by other disciplines, including computer science.
The term stigmergy is relatively new. But the phenomenon itself is ancient. The cytoskeletons within individual cells are stigmergy structures that help to organize cellular functions. The very bodies of all multicellular organisms are also stigmergy structures. Cells in a multicellular organism create a growing body whose shape and boundaries are defined by a non-living extracellular matrix created by the cells. The cells deposit all sorts of cues, in the form of messenger molecules, in or on this matrix. The shape of the extracellular matrix and the signaling molecules attached to it direct the movement, differentiation, and specialized function of the cells. So it is fair to think of the bodies of animals and plants as stigmergy structures akin to very complex termite mounds.
The extracellular matrix of the earliest multicellular organisms was nothing more than a “slime” excreted by the cells. That slime formed a clump or thin film in which the cells lived and through which messenger molecules diffused from one cell to its neighbors. The more complex extracellular matrix structures of higher order animals and plants support more subtle and complex communication. Plants create rigid stigmergy structures made largely of cellulose. Animals create various sorts of connective tissue that gives structure to their organs and generally holds their bodies together. Mollusks (snails, clams, etc.) create shells, insects create chitinous exoskeletons, vertebrates create bone that is akin to coral (itself a stigmergy structure). Unlike coral, bone is constantly reshaped by the cells that create and maintain it to adapt to the changing stresses it encounters.
Brains, the neuron-based information processing systems of higher creatures, also depend upon stigmergy in the form of memories laid down as physical changes to neurons and synapses. Memory modifies behavior and behavior modifies memory...that's the essence of stigmergy.
Computing relies on stigmergy structures in various sorts of memory as well -- whether in the form of RAM, ROM, FLASH, disk file-systems or huge databases.
Stigmergy is intimately related to the somewhat slippery notion of "self." Whatever the philosophical niceties, self is clearly about the organism as a whole rather than just a collection of cells that share the same DNA. That is, self refers to a multicellular organism's body which includes both the cells of the organisms and the nonliving extracellular matrix that gives shape and structure to the organism. The extracellular matrix, which is the organism's stigmergy structure, persists for the life of the organism whereas most kinds of cells die and are replaced many times over during the lifespan of the body. In a very real sense, cells are part of the self only insofar as they participate in the self-organizational dance of stigmergy.
Social insects, cooperating cells, cooperating neurons, and cooperating computers communicate both with signals and cues. The distinction is that signals are active communication events in real-time whereas cues are information embedded in the stigmergy structure to be read and reread many times. Both are specialized messages in the sense that they mean different things to different specialized receiving cells (or insects or computers). However cues are further specialized by their location -- that is, in addition to the information intrinsic to their molecular form or digital content, there is also information inherent in their location in the stigmergy structure. In chess terms, a pawn is just a pawn; the important information is which square on the board it occupies. Because cues have both a message content and a location, cues support more complex kinds of communication than do signals and hence tend to support more complex social organizations. For example, complex ant societies rely more on cues whereas simple ant societies rely more on signals.
As is the case with social insects, cells in multicellular organisms communicate both by signals (polymorphic messenger molecules moving indiscriminately through blood, lymph or other liquids) and by cues (polymorphic messenger molecules attached to the extracellular matrix). For example, bone, when stressed, provides cues to osteocytes and other bone cells for its own reshaping to better handle the forces placed upon it. And smooth muscle cells in the walls of blood vessels modulate their contractility according to cues from the extracellular matrix. Not surprisingly, as with social insects, simple multicellular organisms communicate primarily by signals whereas complex multicellular organisms communicate more by cues.
Analogously, computing systems in complex human organizations such as businesses rely on records (cues) deposited in databases (stigmergy structures), whereas loose organizations, e.g., file-sharing networks, can work with real-time peer-to-peer messaging. Here again, multicellular computing recapitulates biology: stigmergy is ever present in complex computing systems, and the Internet hosts many novel stigmergy structures.
Bonabeau, E., Theraulaz, G., Deneubourg, J.L., Aron, S. & Camazine, S., "Self-organization in social insects," Trends in Ecology and Evolution, vol. 12, pp. 188-193, 1997.
Anderson, C. & McShea, D. W., "Individual versus social complexity, with particular reference to ant colonies," Biol. Rev., vol. 76, pp. 211-237, 2001, p. 228.
Polte, T.R., Eichler, G.S., Wang, N. & Ingber, D.E., "Extracellular matrix controls myosin light chain phosphorylation and cell contractility through modulation of cell shape and cytoskeletal prestress," Am J Physiol Cell Physiol, vol. 286, pp. C518-C528, 2004.
Contact: sburbeck at mindspring.com
Last revised 6/14/2012 |
William Smith, called "Strata Smith" by his contemporaries, is known as the Father of English Geology. He was responsible for initiating the production of a geological map of England and Wales.
Life and Education
Born on March 23, 1769 in Churchill, Oxfordshire, England, William Smith was the son of a mechanic. His father was out of the picture before he turned eight, and William was left to be raised by his father's eldest brother, who was a farmer. Because of this, he did not have the privilege of a steady formal education. This did not hinder his curiosity, though, as he continued to explore and collect fossils. His uncle was not pleased with how he went around town carving sundials but later came to appreciate him when he also started taking an interest in draining land.
He found ways to learn more about geometry, mapping and surveying. His raw knowledge allowed him to train under Edward Webb, a master surveyor. He traveled all over the country as he studied the formation of fossils and rocks and was able to purchase a small estate in the town of Tucking Mill in Midford.
He met several people along the way who helped him in his journey towards becoming one of the greatest figures in geology. He became acquaintances with Rev. Benjamin Richardson who taught him the different names of fossils and shared his knowledge in natural history.
As Edward Webb’s assistant, William Smith traveled all over the country and gained more knowledge on his chosen field. His continuous growth as a surveyor led him to supervise and oversee the digging of the Somerset Canal in 1794. This job was where he first observed the way rocks were formed. He noticed how fossils always seem to be in a specific order from top to bottom not only on sedimentary rocks, but on other sections of rocks as well. This was how the “Principle of Faunal Succession” or “Law of Faunal Succession” came to be. The principle states that there is a constant definite sequence in layers of sedimentary rocks and in other rock formations that contain fossils causing a correlation between these locations.
By 1796, Smith's knowledge led him to be elected as part of Bath's agricultural society, where he discussed his findings and theories with those who shared his interest in fossils and rocks. He was the first person to draw local geologic maps using fossils, ordered by their stratigraphic position, as a mapping tool; earlier geologic mapmakers had relied merely on the composition of rocks. When his contract ended in 1799, he continued his attempt to create a complete geologic map of England and Wales, along with some parts of Scotland. Although progress was very slow due to a lack of moral and financial support, the completed map finally went into production in 1812 and was eventually published in 1815. The map comprised fifteen sheets in all, on a five-miles-to-one-inch scale. A smaller version was later published in 1819. This paved the way for the creation of the Geological Atlas of England and Wales, which was made up of 21 different county geological maps. There was also published information from Rev. Joseph Townsend, rector of Pewsey, who acknowledged Smith as the person who dictated the first-ever table of the British Strata to him.
In 1817, he produced an exceptional geological map of the area around Snowdon to London. Sadly, a lot of his works were plagiarized which caused him to go bankrupt and fall into serious debt. He was imprisoned in London’s King’s Bench Prison which was a debtor’s prison. The home and other properties he made investments in were seized as well. He was in and out of jobs until he regained his luck when Sir John Johnstone, an employer of his, helped him take back the credit for a lot of his work and paved the way for him to take back the respect the he truly deserved.
Although production of the map was a remarkable feat, the period’s scientific community did not give their full support right away mostly because they believed that he did not have a good background. They noticed his economic standing and his limited education more than his achievement.
It was not until 1831 that William Smith was finally formally acknowledged as a vital part in the advancement of geology. He was given the first ever Wollaston Medal, an honor presented by the Geological Society of London to those who have shown great contributions to geology. He was also granted an annual life pension of £100. He received an LLD degree during a British Association meeting in Dublin in 1835. He was also among the group of commissioners who were given the privilege of choosing the building stones for the Houses of Parliament in 1838.
William Smith also lived in Scarborough from 1824 to 1826 where he built a geological museum called the Rotunda. The museum focused mainly on the Yorkshire Coast. Lord Oxburgh had it renamed The William Smith Museum of Geology in May of 2008.
William Smith died on August 28, 1839 in Northampton, Northamptonshire, England due to poor health. His remains were buried in St. Peter's Church, where a bust created by Chantrey was placed. The Earl of Ducie commissioned a monument in Smith's hometown of Churchill in 1891. John Phillips, his nephew, who also trained under him, edited his memoirs, which were made public in 1844. Phillips later became one of the most notable figures in geology and paleontology during the 19th century because of the stringent training and the wide knowledge that his uncle shared with him.
Today, his achievements continue to be highlighted in many different ways. The Geological Society of London presents an annual lecture in his honor. His work has also been acknowledged as an important factor in the discoveries and works of Charles Darwin. |
In creating their own maps, as well as analyzing a historical map of China, students will identify key elements of a map (scale, kinds of features, symbols, orientation), functions that influence its creation, and how it serves as a resource.
What can a map tell us about how its maker perceives his or her place in the world?
One class period for drawing maps; one class period for discussion, comparisons, and written analysis
Materials and Handouts
11" x 14" sheets of white drawing paper for maps
Paper and pencil for peer analysis
Ming Dynasty map
Neighborhood maps, discussion response, written analysis of classmate’s map
1. Distribute the Map of Imperial Territories and ask the students what a map can tell us about how its maker perceives his or her place in the world.
2. Have students use colored markers to draw a map of their neighborhood.
3. Post neighborhood maps around the room and ask students the following questions in relation to several maps:
- What is at the center of the map?
- Are some things depicted larger than others?
- Which part of the map is depicted in detail?
- Was everything in your neighborhood included in the map?
- How did you decide what should be included?
4. Return to the Map of Imperial Territories and discuss the following:
- What is at the center of this map?
- Are countries other than China shown?
- In looking at the map, would one be able to gather much information about countries outside of China?
- What might this say about how the people that made and used this map felt about countries outside of China?
- The Chinese word for China is Zhongguo, meaning “central states” or “middle kingdom.” Does this map convey these meanings? How?
5. Have students write an analysis of one of their classmates’ maps, identifying the kind of information that seems to be valuable to the student who made it. Have students describe how the mapmaker depicted his or her home in relation to the neighborhood. |
Traditional Internet Searches
Traditional search strategies are used every day by many students and professionals. These strategies yield results that, on the surface, tend to deal with the idea or concept that was searched. However, many of the resources produced provide only basic explanatory information and neglect to present the reader with information that could promote a deeper understanding of the topic, including multiple perspectives and interrelated ideas.
Typical Steps in a Traditional Search:
LICRA Internet Searches
Some of the best ideas for research happen when readers stumble on information and allow themselves to consider the idea, make connections between the new knowledge and their own knowledge, and pursue new avenues of information. This seemingly coincidental process can be mimicked, and the results tend to reveal new connections between concepts and a deeper understanding of them.
Characteristics of Open Mindset:
Before we get into the specific aspects of LICRA searching, it is important to keep a couple of things in mind as you embark on a new and unpredictable journey.
LICRA stands for learner-initiated, complex, and reciprocally adaptive research. A mouthful! Let's break it down:
Learner-initiated -- meaning the researcher has a topic in mind but decides on the desired end product and on the means of information retrieval it will take to get there.
***It is important to note that the researcher might not know what their research will look like, or exactly what they are looking for. This is more of a discovery experience.***
This includes composing multiple keyword phrases that could yield the desired information. A good way to do this is to contemplate different views or perspectives on the topic: think of both positive and negative perspectives, and of possible reactions to the topic of choice. Specific keyword techniques, such as using quotation marks to find exact phrases and adding the prefix 'link:' to find backlinks (links directed toward a site), can also yield better search results. Another tactic to try is restricting a search to specific websites.
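Composing several phrasings for one topic can be automated with a small helper. The function and the sample topic below are illustrative assumptions, not part of the LICRA method itself; the exact-phrase quoting convention is the one the guide mentions:

```python
def build_queries(topic, exact_phrases=(), perspectives=()):
    """Compose several keyword phrasings for one topic, as the LICRA approach
    suggests: the base topic, exact-phrase variants (in quotation marks),
    and perspective-flavored variants."""
    queries = [topic]
    queries += [f'{topic} "{phrase}"' for phrase in exact_phrases]
    queries += [f"{topic} {view}" for view in perspectives]
    return queries

for q in build_queries("urban beekeeping",
                       exact_phrases=("rooftop hives",),
                       perspectives=("benefits", "criticism")):
    print(q)
```

Running each generated query separately, rather than one catch-all query, is what surfaces the contrasting perspectives the guide asks researchers to seek out.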
Complex -- meaning there are multiple strategies and levels of research that should be employed once keyword phrases are decided and the researcher begins. This includes following all leads and questions that pop into your mind, as well as examining the results from all perspectives on the topic.
Possible Strategies to Employ During Search Process:
--View the multiple presentations of content on the site, including images and videos.
--Take note of quoted information and where it comes from, including any organization or person cited as holding an opinion. References on the site could lead to another avenue of information regarding the topic.
--Review the research and resources used and consulted by the site. This information can provide insight into the development of the content, and the validity of the content. The research used could be directed more to your topic of interest or could lead you in a new direction.
Reciprocally Adaptive -- this is the idea that two or more topics or ideas can be connected or related in some fashion. You as a researcher should be flexible and willing to modify your research topic and desired result, because a new connection or relationship could provide fresh insight into a topic. Flexibility with your research and its direction is key to the development of new ideas. |
• The European Starling was introduced into North America when the "American Acclimatization Society" for European settlers released some 80-100 birds in Central Park (New York City) in 1890-91. The head of this organization, Eugene Schieffelin, desired to introduce all birds ever mentioned in the works of William Shakespeare.
• Since its introduction into North America in 1891, European Starling populations have grown to over 200 million birds and they can now be found coast to coast and in Alaska.
• The European Starling, introduced to North America in 1891, has had a significant impact on our native birds. In particular, its intense competition for nesting cavities has had a negative impact on many cavity-nesting species such as Bluebirds, woodpeckers and Purple Martins.
• Rather than clamping their bill shut, starlings’ jaw muscles work to force it open giving them a great advantage when digging for grubs, worms, and bugs in the yard.
• Starlings, as members of the Sturnidae family, are cousins to the Mynah bird and are outstanding mimics. Individuals have been known to mimic the calls of up to 20 different bird species.
• Starlings have an impressive array of songs and may have a repertoire of over 60 different types.
• Starlings were at one time considered a game bird in Europe, and were hunted for food.
• Starlings often return to the same nest cavity to raise their young each year.
• Bird banding records show the longest known life-span for a Starling in North America to be over 15 years old.
• European Starlings have a highly adaptable diet and eat a wide variety of foods, such as snails, worms, millipedes, and spiders, in addition to fruits, berries, grains, and seeds.
• Starlings can play an important role in reducing the numbers of some of the major insect pests that damage farm crops.
• Starlings in the Midwestern United States migrate south in the winter, but starlings in the East tend to be year-round residents. Young birds migrate farther than older birds.
• Migrating flocks of Starlings can reach enormous numbers; flocks of 100,000 birds are not uncommon.
• The European Starling is one of only three birds not protected by the United States government. The House Sparrow and the pigeon are the other two. |
Structural Biochemistry/Nucleic Acid/Phosphate
In organic chemistry, a phosphate, or organophosphate, is an ester of phosphoric acid. Organic phosphates are important in biochemistry and biogeochemistry.
The backbone of the DNA strand is made from alternating phosphate and sugar residues. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings.
As noted above, the deoxyribose sugar does not contain a hydroxyl group on the 2' carbon. This absence gives DNA greater stability, because without the 2' hydroxyl group the backbone resists hydrolysis. This is one of the reasons why hereditary material is stored in DNA and not RNA. However, the net negative charge of the phosphate group must be stabilized by metal ions, such as magnesium or manganese.
In deoxyribonucleotides (DNA) and ribonucleotides (RNA), the phosphodiester bond is a strong covalent linkage between a phosphate group and two five-carbon sugar rings. The phosphate group carries a negative charge as it bonds to the 3' carbon of one ring and the 5' carbon of another.
The phosphodiester bond is formed when two phosphates (pyrophosphate) break away from an incoming nucleoside triphosphate, in a reaction catalyzed by DNA polymerase. For example, dATP releases pyrophosphate in order to form a phosphodiester bond with the deoxyribose sugar of a nucleotide during DNA elongation.
(DNA)n + dATP ⇌ (DNA)n+1 + PPi
Phosphodiesterase is an enzyme that hydrolyzes phosphodiester bonds, for example breaking open cyclic nucleotide phosphates. Phosphodiesterases are of clinical significance, including in the repair of DNA sequences. |
These days, it seems like there are more electromagnetic signals in the air than oxygen, what with television, radio, 4G, Wi-Fi, and satellites all streaming wirelessly around us. Researchers from Georgia Tech have found a way to harvest enough energy from these wireless transmissions to power small electronics.
Manos Tentzeris, a professor in the Georgia Tech School of Electrical and Computer Engineering, led the development of an ultra-wideband rectifying antenna used to convert microwave energy to DC power. This energy-scavenging antenna is made using an inkjet printer to combine sensors, antennas and super-capacitors onto paper or flexible polymers. The process uses silver and other nanoparticles, and is similar to the current manufacture of sensors and antennas.
The researchers demonstrated that the antenna can generate hundreds of microwatts from TV bands alone. So far, scientists have been able to power a temperature sensor using the transmission energy from a television station half a kilometer away. A more powerful system culling energy from a wider spectrum of signal frequencies--anywhere from 100 MHz to 15 GHz or higher--could generate one milliwatt of electricity or more, which is enough to power microprocessors or sensors.
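As a rough sanity check on those figures, the free-space Friis equation estimates how much power an antenna can intercept from a distant transmitter. The transmitter power, frequency, distance, and antenna gains below are illustrative assumptions, not numbers from the Georgia Tech work; this is a sketch of the physics, not of their device.

```python
import math

def friis_received_power(p_tx_w, gain_tx, gain_rx, freq_hz, dist_m):
    """Free-space received power in watts, from the Friis equation."""
    c = 3.0e8                              # speed of light, m/s
    wavelength = c / freq_hz
    return p_tx_w * gain_tx * gain_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

# Assumed numbers: a 10 kW effective radiated power UHF TV signal near
# 500 MHz, received half a kilometer away with unity-gain antennas.
p_rx = friis_received_power(10_000, 1.0, 1.0, 500e6, 500)
print(f"{p_rx * 1e6:.0f} microwatts")      # on the order of tens to hundreds of µW
```

With these assumed values the estimate lands in the tens-to-hundreds-of-microwatts range, consistent with the article's claim for TV-band harvesting; real yields depend on antenna bandwidth, rectifier efficiency, and obstructions.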
Researchers say their system could be used by itself as an energy source, or in tandem with solar cells to supply power at night. Another possibility is combining the system with RFID technology, or using it to send distress signals when generators and refrigeration systems fail.
From the Spring 2014 Conservationist for Kids
Let's Go Exploring!
By Gina Jack & Jeremy Taylor
You can explore the outdoors every day when going to and from school. As a class, pick a date to start recording the natural things everyone sees along their school routes. Whether you walk, travel by car or bus, or take the subway, keep your eyes open for plants and animals like birds, mammals, and even insects. After a week, make two maps of your community on large sheets of paper. On one map, record what everyone saw on their way to school, such as insects on flowers, birds on wires or squirrels in trees. On the other map, do the same thing for the way home. Compare the two maps and list which things are the same and which are different. You can also do this with your friends, your family, or even by yourself when going other places.
Observing wildlife is fascinating.
Use natural materials found outside to build a shelter you can hide in. While hiding, watch the animals around you without them seeing you. Keep a journal describing what you see and hear. Invite friends or family to join you and share with them what you've observed.
Put together a backpack or tote bag of items to use while exploring outdoors. Include things to help you observe and record your findings and other items to keep you comfortable and safe. Always bring a buddy, and tell an adult where you're going.
- Notebook and pencil (including colored pencils)
- Sunscreen and hat
- Water and snacks
- Compass (know how to use it)
- Whistle (only for an emergency)
- Map (know how to read one)
- First aid kit
- Insect repellent
What else could you include?
Take an over/under hike.
Look up at tree tops. Flip rocks and logs over to see what lives under them. (Be sure to return rocks and logs to the way you found them.) Record the findings in your journal.
Watch out for ticks and poison ivy so you can avoid them. Some ticks carry diseases, and poison ivy can leave you with an itchy rash. |
Japanese relocation during World War II affected some 117,000 to 120,000 people of Japanese ancestry, who were forcibly evacuated from the West Coast and incarcerated in camps in the western United States. The mass evacuation, one of the army's largest undertakings carried out in the name of defense, swept up citizens and residents alike on the basis of ancestry; German and Italian detainees, along with 2,264 additional people of Japanese ancestry, were also interned. Critics have long argued that there was no genuine security need for mass evacuation, yet officials pushed for the wholesale removal of all people of Japanese ancestry. The decision and its consequences are examined in Roger Daniels's study "The Decision for Mass Evacuation," and the constitutional questions it raised were tested in the Korematsu case, a landmark confrontation between wartime policy and the Bill of Rights. |
American Civil War
- Prelude to war
- The military background of the war
- The land war
- The war in 1861
- The war in 1862
- The war in 1863
- The war in 1864–65
- The naval war
- The cost and significance of the Civil War
The cost and significance of the Civil War
The triumph of the North, above and beyond its superior naval forces, numbers, and industrial and financial resources, was partly due to the statesmanship of Lincoln, who by 1864 had become a masterful political and war leader, to the pervading valour of Federal soldiers, and to the increasing skill of their officers. The victory can also be attributed in part to failures of Confederate transportation, matériel, and political leadership. Only praise can be extended to the continuing bravery of Confederate soldiers and to the strategic and tactical dexterity of such generals as Robert E. Lee, Stonewall Jackson, and Joseph E. Johnston.
While desertions plagued both sides, the personal valour and the enormous casualties—both in absolute numbers and in percentage of numbers engaged—have not yet ceased to astound scholars and military historians. On the basis of the three-year standard of enlistment, about 1,556,000 soldiers served in the Federal armies, and about 800,000 men probably served in the Confederate forces, though spotty records make it impossible to know for sure. Traditionally, historians have put war deaths at about 360,000 for the Union and 260,000 for the Confederates. In the second decade of the 21st century, however, a demographer used better data and more sophisticated tools to convincingly revise the total death toll upward to 752,000 and indicated that it could be as high as 851,000.
The enormous death rate—roughly 2 percent of the 1860 population of the U.S. died in the war—had an enormous impact on American society. Americans were deeply religious, and they struggled to understand how a benevolent God could allow such destruction to go on for so long. Understanding of the nature of the afterlife shifted as Americans, North and South, comforted themselves with the notion that heaven looked like their front parlors. A new mode of dealing with corpses emerged with the advent of embalming, an expensive method of preservation that helped wealthier families to bring their dead sons, brothers, or fathers home. Finally, a network of federal military cemeteries (and private Confederate cemeteries) grew out of the need to bury the men in uniform who had succumbed to wounds or disease.
Some have called the American Civil War the last of the old-fashioned wars; others have termed it the first modern war. Actually, it was a transitional war, and it had a profound impact, technologically, on the development of modern weapons and techniques. There were many innovations. It was the first war in history in which ironclad warships clashed; the first in which the telegraph and railroad played significant roles; the first to use, extensively, rifled ordnance and shell guns and to introduce a machine gun (the Gatling gun); the first to have widespread newspaper coverage, voting by servicemen in the field in national elections, and photographic recordings; the first to organize medical care of troops systematically; and the first to use land and water mines and to employ a submarine that could sink a warship. It was also the first war in which armies widely employed aerial reconnaissance (by means of balloons).
The Civil War has been written about as few other wars in history have. More than 60,000 books and countless articles give eloquent testimony to the accuracy of poet Walt Whitman’s prediction that “a great literature will…arise out of the era of those four years.” The events of the war left a rich heritage for future generations, and that legacy was summed up by the martyred Lincoln as showing that the reunited sections of the United States constituted “the last best hope of earth.”
A major transformation from the mid 1800s into the 1890s involved not only industrialization and urbanization but also the consumption of goods and material things by people of all social classes, some more than others. The photo above shows Rollins College's Hamilton Holt at one of his homes during Christmas. This picture reflects the consumptive ethos in the United States because of how grandiose the objects and material things in his home are and how Holt is dressed.
During the 1890s there was a great shift in the number of people living in cities compared with the number living out in the country. The social classes were also growing very far apart and different from one another. The working poor could barely get by, laboring at factory jobs for very little money. These were people who had no leisure time and could only work to try to support their families. The members of most of these families were not well educated, and many had to stop schooling so they could go to work and provide for the family. The middle class still worked, but had a little more room for leisure activities as well as money to spend. Some middle-class families pretended to be, and felt as if they were, upper class when really they were just living on credit; they looked upper class but were living in debt. Upper-class families lived in the big cities, with multiple houses in a few other towns and cities elsewhere. They had plenty of time for all sorts of activities, like seeing plays and operas and riding in fancy cars. They had huge houses, and while a middle-class family might have had one or two servants, an upper-class family could have had ten or more.
Hamilton Holt was probably of middle to upper class, given that his house is filled with all sorts of material things and looks well kept. He is dressed fairly nicely, and because the caption says this is one of his homes, one can infer that he has a few others. He is a graduate of Yale, and most educated people of the era became educated because they had money. The objects in his house are decorative rather than necessities, which also leads one to believe he is of the upper class, since he can afford to buy and show off these things. These fancier material things include the fireplace, the framed pictures on the wall, the furniture with fabric coverings rather than bare wood, and the small statues, which could be made of marble or similar materials. There are also decorative rugs and a Christmas tree.
This picture is an example of the consumptive ethos in the United States. The character of the country at this time was one of social classes diverging farther apart, yet industrialization and faster methods of production lowered the cost of goods such as food and clothing. Department stores also became important businesses in cities and towns, places where people could find all their needs and wants under one roof. With the cost of goods lower than before and department stores opening, people of different social classes were able to shop in the same area, buying almost the same sorts of goods. The rich bought more expensive goods, but the poor were now able to buy a little more than before because costs were lower. All in all, the movement of masses of people to cities and the industrialization of those cities created more jobs and faster ways of producing and manufacturing goods, which led to lower prices and to more people consuming such goods. This picture is representative of that shift because of all the goods in Holt's house and the inferences that can be made about his life and place in society. |
ROUTING PROTOCOLS are the software that allow routers to dynamically advertise and learn routes, determine which routes are available and which are the most efficient routes to a destination. Routing protocols used by the Internet Protocol suite include:
- Routing Information Protocol (RIP and RIP II)
- Open Shortest Path First (OSPF)
- Intermediate System to Intermediate System (IS-IS)
- Interior Gateway Routing Protocol (IGRP)
- Cisco's Enhanced Interior Gateway Routing Protocol (EIGRP)
- Border Gateway Protocol (BGP)
Routing is the process of moving data from one network to another. Within a single network, all hosts are directly connected and accessible to each other, so they can communicate without passing data through a default gateway.
ROUTED PROTOCOLS are nothing more than data being transported across the networks. Routed protocols include:
- Internet Protocol
- Novell IPX
- Open Systems Interconnection (OSI) networking protocols
- Banyan Vines
- Xerox Network System (XNS)
Outside a network, specialized devices called ROUTERS are used to perform the routing process of forwarding packets between networks. Routers are connected to the edges of two or more networks to provide connectivity between them. These devices are usually dedicated machines with specialized hardware and software to speed up the routing process. They send and receive routing information to each other about networks that they can and cannot reach. Routers examine all routes to a destination, determine which routes have the best metric, and insert one or more routes into the IP routing table on the router. By maintaining a current list of known routes, routers can quickly and efficiently send your information on its way when it is received.
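The route-selection step just described, matching a destination against the table and preferring the most specific route, can be sketched in a few lines. This is a minimal illustration, not production router code: the table entries and next-hop addresses below are invented for the example, and real routers also weigh metrics and administrative distance before installing routes.

```python
# Minimal sketch of IP route lookup: choose the matching route with the
# longest prefix. Table entries are invented for illustration.
import ipaddress

ROUTING_TABLE = [
    ("0.0.0.0/0",   "192.0.2.1"),   # default route: matches everything
    ("10.0.0.0/8",  "192.0.2.2"),
    ("10.1.0.0/16", "192.0.2.3"),
]

def next_hop(dest_ip):
    """Return the next hop for the most specific matching prefix."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for prefix, hop in ROUTING_TABLE:
        net = ipaddress.ip_network(prefix)
        # Keep the matching route with the longest prefix length.
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None

print(next_hop("10.1.2.3"))   # matched by 10.1.0.0/16, the most specific route
print(next_hop("8.8.8.8"))    # only the default route matches
```

Note how 10.1.2.3 matches three entries but is forwarded via the /16 route; longest-prefix match is what lets a router keep a short default route alongside more specific ones.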
There are many companies that produce routers: Cisco, Juniper, Bay, Nortel, 3Com, Cabletron, etc. Each company's product differs in how it is configured, but most will interoperate so long as they share common physical and data link layer protocols (Cisco HDLC or PPP over serial, Ethernet, etc.). Before purchasing a router for your business, always check with your Internet provider to see what equipment they use, and choose a router that will interoperate with your Internet provider's equipment. Note that some older protocols are non-routable: they assume that all the computers they will ever communicate with are on the same network (to get them working in a routed environment, you must bridge the networks). Today's networks are not very tolerant of protocols that do not understand the concept of a multi-segment network, and most of these protocols are dying out or falling out of use. |
- Women in the DCB/DBC
- Winning the Right to Vote
- Women Voting before 1851
- Coming Together to Demand the Vote
- The Right to Vote and Women’s Demands
- Relations with Militants and Organizations in Other Countries
- The Right to Vote in Municipal Elections
- The Right to Vote in Provincial and Territorial Elections
- The Right to Vote in Federal Elections
- Aboriginal Women’s Right to Vote
- Opposition, Indifference, and Doubt
- Suggested Reading
Winning the Right to Vote
The year 1916 was marked by an achievement: women won the right to vote in Manitoba, Saskatchewan, and Alberta. To highlight the centenary of these victories of Canadian women in their struggle for enfranchisement, this thematic ensemble of the “Women in the DCB/DBC” project brings together the biographies of women and men that deal with the history of women’s suffrage in Canada.
In the colonial era the right to vote was granted essentially on the basis of criteria related to age, origin, and property and, for a long time, no law formally prohibited women’s suffrage. During the earliest elections held in New Brunswick in 1785, the Executive Council did indeed limit the right to vote to men, but the restriction was not incorporated into the colony’s first election act, passed in 1791 and sanctioned in 1795 by the British crown. Since there was no statutory restriction, women who wanted to vote and met the criteria did so in New Brunswick and Nova Scotia despite conventions in common law that would otherwise have prohibited the enfranchisement of women, as it did in Great Britain. Some women, however, were refused access to polling stations or had their votes nullified.
The Constitutional Act of 1791 conferred the right to vote in elections for the legislatures of Lower Canada (present-day Quebec) and Upper Canada (present-day Ontario) on British subjects who were at least 21 years of age, had never been sentenced for a serious criminal act or treason, and owned or rented one or more properties of a certain value. There was no specification of gender. Thus, there was nothing to prevent women who met these criteria from voting. Since the Coutume de Paris – civil law – that prevailed in Lower Canada did not check this practice, a few thousand women took advantage of this situation and voted between 1791 and 1849. For their part, the women of Upper Canada, who were subject to the conventions of common law and cultural restrictions similar to those that hindered the exercise of women’s right to vote in the Maritimes, with the exception of a few cases, stayed away from the ballot boxes during this period.
Between 1834 and 1851 legislatures in each colony enacted laws that excluded women from voting. In March 1834 the assembly of Lower Canada adopted a law on contested elections, to which was added a provision prohibiting women’s suffrage. This provision and its adoption were the achievements of members of the legislatures and of the elites, such as Louis-Joseph Papineau and Joseph-Rémi Vallières de Saint-Réal. For several years they had been demanding that the right of the women in Lower Canada to vote be withdrawn. To make their case, they invoked the need to protect women from violence at the polls, and maintained that the act of voting was incompatible with the nature of women. The adoption of such a ban had probably been influenced by the Representation of the People Act, passed in the United Kingdom in 1832, which, for the first time, formally confirmed the restriction of the right to vote to men. In 1836 Prince Edward Island also banned women’s suffrage. However, in February 1837 the Colonial Office disallowed the law on contested elections in Lower Canada – not because of opposition to the provision suppressing women's suffrage, but because of another section of the statute that allowed the legislature’s committees to pursue their work after prorogation. In 1843 the New Brunswick assembly in its turn limited the right to vote to men.
The situation remained unchanged in the Province of Canada at the beginning of the 1840s. Growing ultramontanism in Canada East (Lower Canada; present-day Quebec) and controversy over a closely contested ballot in Canada West (Upper Canada; present-day Ontario) in 1844 seem, however, to have led to new developments. A reformist candidate who lost by four votes in the riding of Halton West challenged seven votes cast by women in favour of his conservative opponent. Not surprisingly, a committee of the legislature controlled by conservatives confirmed the legitimacy of the votes, putting a seal on the reformist candidate’s defeat. In May 1849 the reformist government of Robert Baldwin and Louis-Hippolyte La Fontaine, with almost no opposition, changed the law to prohibit women from voting in elections for the legislature and the municipal councils of the United Province of Canada. Nova Scotia followed suit in 1851. Thus, at the time of confederation in 1867, the country’s four founding provinces disallowed women’s suffrage, whereas, before 1851, women had voted in these provinces. Women were also deprived of the right to vote in federal elections, which were subject at the time to provincial laws.
A suffragist movement took shape from the 1870s, first in Ontario, particularly under the leadership of Emily Howard Jennings (Stowe), and then in the other provinces. Local, provincial, and national associations, such as the Dominion Women’s Enfranchisement Association (1889), were established. Various actions (for example, drawing up petitions, writing newspaper articles, and giving talks) by members were added to many individual initiatives. As was the case elsewhere, Canada’s suffragists had to confront degrees of indifference or opposition from men and women who were unconcerned about the question or who mocked their demands or were openly hostile.
The suffragist movement benefited from the involvement of people fighting for other social reforms, such as the control of the sale of alcohol or Prohibition, the improvement of working conditions and professional opportunities for women, their access to better and higher education, and equality with men. These individuals joined in the struggle with the conviction that gaining the vote for women would help bring about concrete action in response to their own demands. Furthermore, many suffragists in Canada such as Emma Sophia Skinner (Fiske) exchanged ideas, arguments, and strategies with their counterparts abroad through correspondence, participation in international events, communication about speaking engagements in Canada, or involvement in the struggle to gain women’s suffrage in other parts of the world.
Women gradually acquired the right to vote in municipal elections during and after the 1870s. Each provincial legislature determined the eligibility criteria for casting ballots in municipal elections and modified them over the course of the following decades. A sign of divided opinion on the subject was the revocation of this right in some provinces and its subsequent reinstatement. For example, the right was granted in Manitoba in 1887, abolished in 1906, and then granted again in 1907. Authorities in some municipalities also adopted – or tried to adopt – rules to change the criteria. A case in point: the Montreal municipal council considered narrowing them in 1902 by excluding women who were tenants, but gave up the idea when suffragists mobilized against the measure. Many different scenarios based on this issue played out across the country.
The right to vote in provincial ballots was granted in three western provinces (Manitoba, Saskatchewan, and Alberta) in 1916, and in Ontario and British Columbia the following year. These victories, the result of persistent and peaceful efforts of significant figures in the history of women’s suffrage in Canada, such as Helen Letitia Mooney (McClung) and Maria Heathfield Pollard (Grant), encouraged suffragists to increase pressure on Ottawa. Anxious to facilitate its re-election at a time when the debate on conscription was raging across the nation, the government of Sir Robert Laird Borden in 1917 granted the right to vote in federal elections to female British subjects on active military service and to certain female relatives of members of the armed forces (they had to be British subjects and qualified by their age, race, and residence). The measure, decried for its political opportunism both by the opposition in the House of Commons and by many suffragists, prepared the way in May 1918 (less than a month after women’s suffrage had been won in Nova Scotia) for the extension of the right to vote in federal elections to all female British subjects aged 21 or over who met the property criteria in their province of residence – the same conditions as those that applied to men. In 1920 only age and citizenship were retained as eligibility criteria for the majority of both men and women. Some restrictions remained – for example, those specified by legislation that applied to members of certain religions or races – and they would gradually be lifted over the course of the following decades.
New Brunswick, the Yukon, and Prince Edward Island, as well as the Dominion of Newfoundland, which was to join Canada in 1949, followed the federal lead and granted suffrage to women between 1919 and 1925. Suffragists in Quebec, up against the stubborn resistance of the influential Roman Catholic clergy, many members of the political class, and virulent opponents such as Henri Bourassa, had to wait until 1940. Women who lived in the Northwest Territories were finally enfranchised in 1951. Winning the right to vote in the provinces and territories was accompanied by the possibility of running for elective office. However, the right to stand for election, or the right of eligibility, was granted later than the right to vote in Ontario (4 April 1919), and New Brunswick (9 March 1934), and at the federal level (7 July 1919).
Until 1950, renouncing Indian status [see Frederick Ogilvie Loft] was mandatory for aboriginal men and women who wanted the right to vote in federal elections; this policy stemmed from the process of planned assimilation set out in the Indian Act of 1876. There had been some exceptions: for example, in 1944 men who were status Indians and had served during the Second World War gained the unconditional right to vote, and the same right was extended to their wives. In 1951 aboriginal women who were status Indians acquired the right to elect members of band councils and to run for seats on the councils. Then, between 1950 and 1960, the right to vote in federal contests was awarded to aboriginal men and women in exchange for renouncing the right to tax exemptions attached to Indian status. Finally, in 1960, the federal government of John George Diefenbaker, backed by almost all the members of the House of Commons, granted men and women who were status Indians the unconditional right to vote in federal elections, which meant they would not have to renounce their status or certain related rights. Between 1949 and 1969 most provinces also granted voting rights to aboriginal men and women with Indian status.
However, some aboriginal women may have gained the right to vote in federal and provincial elections before the adoption of these laws relative to their status. An aboriginal woman who married a non-aboriginal man, or who married an aboriginal who renounced his Indian status at the time of their marriage, lost her Indian status, and this loss of status, in some cases, made her eligible to vote. Yet a non-aboriginal woman who married a status Indian automatically became a status Indian, and this change in her status meant that her right to vote might be lost.
The biographies dealing with women’s suffrage at the federal, provincial, or municipal level have been placed in the corresponding section. Biographies that do not specify a level of government are to be found in both the section on federal elections and the section on provincial elections (listed by the province or territory of residence).
The most recent complete volume of the DCB/DBC, volume XV, brings together the biographies of persons who died between 1921 and 1930 or whose last known activity dates from that period. As of 30 June 2016, more than 160 biographies in volume XVI (1931–40) had been published online. Even though each week a biography belonging to a volume in preparation is added to the DCB/DBC, many biographies of women at the heart of the fight for women’s suffrage, such as Louise Crummy (McKinney) (volume XVI), Carrie Matilda Derick (volume XVII), and Thérèse Forget (Casgrain) (volume XXI), have yet to be published. A list of suggested readings is provided to help fill in some gaps and guide readers who are interested in learning more. Each relevant biography published online will be incorporated into this thematic ensemble.
Understanding your child’s assessments at Key Stage 3
Firstly, there are a few points you may need to know about levels before interpreting your child’s assessment. Please click here in order to read my key points about levels, and then come back and let’s go over a sample assessment.
Next, there are a few points you may need to know about how we create targets before interpreting your child’s assessment. Please click here in order to read my key points about how we set targets, and then come back and let’s go over a sample assessment.
Progress towards target
Now that we are sure about levels, and we know where the targets have come from, we can look at the column ‘progress against target’. These grades are recorded by the teacher and indicate whether the teacher feels that the child is on course to meet their target. A progress grade of 1 means the teacher feels they might even exceed their target; many students do, and we like to celebrate this. A progress grade of 2 means the teacher feels they are on course to meet their target. A progress grade of 3 means the teacher feels they might fall slightly short of their target, by up to two sub-levels. And a progress grade of 4 means the teacher feels the child may miss the target level by a whole level or more.
Effort/Organisation/Attitude and Behaviour
Teachers record a 1 against these areas if the child is doing exceptionally well. A grade of 2 means good. A grade of 3 means the child is inconsistent in these areas, so in some lessons he/she puts in good effort and behaves very well, but in others he/she does not. A grade of 4 is very serious and means the child is consistently poor in those areas.
Reading a sample assessment
Now we are ready to look at a sample Year 7 assessment and see what it means:-
- This child’s targets are mostly 6s. Given that this child is in year 7, and level 5-6 is the expected average at the end of year 9, this is an academically able child.
- The fact that the targets vary across different subjects does NOT mean the child is better or worse in those subjects compared to other students. The different levels reflect the different national average levels.
- In most subjects this child has a progress grade of 2, meaning the teacher feels they are on course to meet these high targets – i.e. progressing as well as similar children from the top 25% of schools.
- In Spanish and French the teacher feels they might not quite meet this level. If the child has not studied these languages much before secondary school, then it is understandable that the teacher feels targets that have come from the average calculation might not be reached. It is not a great cause for concern at this stage, although as the child moves up the school we would expect prior experience to count for less and targets to be met. Note that the child has still been given a 2 (good) for effort, organisation and attitude & behaviour, so their slightly disappointing progress mark has not come about through not trying or being silly.
- In music the teacher also feels that the child might not quite meet his/her target. Music (as well as art and drama, though not noted in this child’s assessment) is a subject where a particular gift, or the lack of one, can create a discrepancy between a child’s performance and targets that are generated from average year 6 levels in the core academic subjects. Note that the child has been given a 1 (excellent) grade for effort, organisation and attitude & behaviour in music, so the slightly disappointing progress is certainly nothing to do with not trying hard. Nevertheless, it is true that this child is not performing as well in music as other children who had the same high average year 6 levels in the core academic subjects. We would not revise the child’s target down, because this is the benchmark against which we are all measured.
- In PE we set targets from our own system. These have resulted in a target for this child which the teacher reports the child is likely to exceed. We normally like to celebrate all our students who exceed targets. Many do.
- Across all subjects the child has been given 1s (excellent) and 2s (good) for effort, organisation and attitude & behaviour. So this is a child trying very hard in all areas, against aspirationally high targets that are not quite all being met yet.
Among the earth's oldest plants, ferns comprise more than 10,500 species throughout the world. Ferns are mostly perennial plants that are either evergreen or deciduous. They are vascular plants that thrive in areas with low light and moderate temperatures. These environmental requirements make the fern an excellent choice for household growing.
Plant the ferns according to the type of fern you've selected. Epiphytic ferns grow naturally in trees and should be planted in coarse, nutrient-rich soil. Mix the coarse soil with equal amounts of organic matter such as organic leaf mold, sphagnum moss or peat moss. Terrestrial ferns grow in soil and should be planted in regular, nutrient-rich potting soil. Select a potting soil that contains peat moss, sand and perlite for increased drainage.
Select a potting container that has a good drainage system. Choose plastic containers over clay containers. Plastic containers maintain better moisture levels. If a wood container is selected, select an untreated, rot-resistant container to reduce the potential of fungal diseases.
Place the fern in a partially shaded to fully shaded area that provides at least four hours of indirect lighting each day. Avoid direct sunlight as this may cause foliage burn. Maintain a moderately tempered, humid environment for the fern. Keep the fern in a room that maintains an average temperature between 65 and 75 degrees F. Avoid temperatures below 60 degrees F.
Water the fern plant regularly but do not overwater. Maintain an evenly moist soil. Check the soil's moisture levels before each watering to reduce the potential for over-watering. Stick your finger in the soil and water when the soil begins to feel dry. Never allow the soil to dry out. Mist the plant daily during the winter months to maintain good humidity levels.
Feed the ferns lightly using a liquid houseplant fertilizer. Select a fertilizer that includes equal amounts of nitrogen, phosphorous and potassium. Apply the fertilizer at half-strength monthly during the fern's growing season from early spring through late fall.
Dust and inspect the plant regularly. Look for signs of insect infestation such as webs and small foliage spots. Look for signs of disease such as yellowing or browning of the plant's foliage, wilting, dieback and drooping. Treat the diseases immediately. Before applying any fungicidal spray, allow the soil's moisture levels to dry to ensure that the plant is not suffering from over-watering. Treat fungal diseases with houseplant fungicidal spray.
Re-pot the fern plant every two to three years. Divide the growing fern by cutting the rhizomes of the plant. Keep as much foliage on each cut as possible. Use a sharp, sterile knife to complete the cut(s). Re-plant the ferns in fresh soil.
What are the risks for children with regards to keratosis? Is it common? Does the treatment differ from those administered to adults? Are there greater and more serious risks to the health of children if they contract this skin disorder? Keratosis results from the build-up of keratin, a key protein in the structural foundation of hair and nails, on the skin. There are four types of keratosis: seborrheic, actinic, hydrocarbon and pilaris.
Seborrheic keratosis presents as unsightly, wart-like lesions. They appear in a variety of colors, from light tan to black, and are round or oval in appearance. In fact, in many cases they resemble melanoma skin cancers, although they are benign. This form of the disorder is acquired by children in two major ways. First, it is entirely possible for children to acquire it through the family genes: parents who have suffered from the condition will in all probability pass it on to their offspring. Exposure to the dangerous ultraviolet rays of the sun is another way of contracting it. Use protective measures (long sleeves, long pants, hats, as well as sunscreen with a high SPF).
Keratosis pilaris (KP), also known as ‘chicken skin’, is characterized by tiny bumps on the skin commonly found on the upper arms, thighs and cheeks. The bumps are rough to the touch, sandpapery in texture, appear flesh-colored to slightly red, and can be itchy. Commonly seen in children and teens (it can, however, present as early as infancy), KP is hereditary. It is a follicular condition which occurs when there is an excess build-up of keratin on the skin which entraps the hair follicles in the pores. Treatment of KP is not strictly necessary, as it is benign, meaning there is no risk of evolution into cancer. However, to combat the itching and to mitigate the unpleasant look and feel of those tiny bumps, a mild peeling agent can be applied to remove the dead skin where the keratin builds up, along with creams and lotions for use as moisturizers.
Actinic keratosis, a condition which produces thick, scaly, crusty patches of skin, does not typically occur in children. Hydrocarbon keratosis is caused by exposure to ‘polycyclic aromatic hydrocarbons’ and is also unlikely to present in children.
Prevention is always best. Since the kind of keratosis most commonly found in children is often due to sun exposure, close monitoring of your child’s activities will significantly decrease the risk of contracting this condition.
Naval History and Heritage Command historian Robert J. Cressman answers questions about the attack on Pearl Harbor and the historical impact in a series of video sound bites. Here, he discusses the impact of the attack on the composition of the fleet. Videos may be downloaded from DVIDS.
What is a common misconception of the attack?
Discuss the damage of battleships during the attack on Pearl Harbor and the rise of the aircraft carrier during World War II.
Why did the Japanese not attack the U.S. Navy pacific submarines and what impact did that have on World War II?
Many ships were significantly damaged at Pearl Harbor. Were some of these ships salvaged?
Which ships were lost or damaged at Pearl Harbor?
Return to main page: Pearl Harbor Attack
Wavelength and resolution explained
Things with long wavelengths are analogous to the basketball in the cave story because neither can provide much detail about what they hit. Things with short wavelengths are like the marbles in that they can provide you with fairly detailed information about what they hit. The shorter the probe's wavelength is, the more information you can get about the target.

A good example of the wavelength vs. resolution issue is a swimming pool. If you have a swimming pool with waves which are 1 meter apart (a 1 meter wavelength) and push a stick into the water, the pool's waves just pass around the stick, because the 1 meter wavelength means that the waves won't be affected by such a tiny target.

All particles have wave properties. So, when using a particle as a probe, we need to use particles with short wavelengths to get detailed information about small things. As a rough rule of thumb, a particle can only probe down to distances equal to the particle's wavelength. To probe down to smaller scales, the probe's wavelength has to be made smaller.

This is all a very hand-wavy explanation of a very subtle subject. To explain it completely would involve more math than we have space to get into.
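The rule of thumb above can be made quantitative with the de Broglie relation, λ = h/p: every particle has a wavelength set by its momentum. The sketch below is an illustration added here, not part of the original lesson; the 100 eV electron example and the non-relativistic approximation are assumptions chosen for simplicity.

```python
import math

# Physical constants (rounded CODATA values)
H = 6.626e-34        # Planck's constant, J*s
M_E = 9.109e-31      # electron rest mass, kg
EV = 1.602e-19       # one electron-volt in joules

def de_broglie_wavelength(kinetic_energy_ev, mass=M_E):
    """Wavelength of a massive particle (non-relativistic): lambda = h / p."""
    energy_j = kinetic_energy_ev * EV
    momentum = math.sqrt(2 * mass * energy_j)  # p = sqrt(2 m E)
    return H / momentum

# A 100 eV electron has a wavelength of roughly 0.12 nm, so it can
# probe structures down to about atomic size.
wavelength = de_broglie_wavelength(100)
```

Raising the kinetic energy shrinks the wavelength, which is why probing ever smaller targets requires ever higher-energy probes.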
VISION AND MYOPIA
What is myopia?
Myopia occurs when light rays focus in front of the retina, rather than directly on its surface.
This can be due to two reasons:
The eyeball is too long (increased axial length) or
The cornea and/or lens is too curved for the length of the eyeball.
It could also be a combination of the two.
Why does it occur?
The exact cause of myopia is unknown but there are several factors that can put a child at an increased risk to develop it.
If a parent is nearsighted, there is a greater risk their child will be as well
Excessive near work
"Near-work" such as reading, computer and phone work/games, watching television, etc. causes eye strain which researchers think might cause some degree of myopia.
Lack of outdoor activities
Research has found that children who spend more time indoors are at greater risk than their peers that spend more time outdoors.
Why does it matter?
Myopia can be corrected with basic glasses, contact lenses, or refractive surgery, but this doesn't prevent it from getting worse.
People are becoming myopic at younger ages, and the amounts of myopia are increasing.
While some people experience only minor inconveniences with myopia, others have what is called progressive myopia, which is severe and degenerative.
Risk of complications increases with higher levels of myopia
Myopia Prescription Severity Ranges:
Mild: -0.25 to -3.00
Moderate: -3.25 to -6.00
Severe: -6.00 or more
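Purely as an illustration of the ranges above (not medical guidance), the classification can be written as a small lookup function. Since the list gives -6.00 as both the end of the moderate range and the start of the severe range, treating exactly -6.00 as moderate is an assumption in this sketch:

```python
def classify_myopia(spherical_diopters):
    """Classify a myopic prescription using the severity ranges above.

    Values are negative diopters; anything above -0.25 is treated
    as not myopic for this sketch.
    """
    d = spherical_diopters
    if d > -0.25:
        return "not myopic"
    if d >= -3.00:
        return "mild"
    if d >= -6.00:
        return "moderate"  # boundary convention: exactly -6.00 counted as moderate
    return "severe"

# Example: a -4.50 prescription falls in the moderate range
severity = classify_myopia(-4.50)
```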
For more info click on link below:
Good news! There are now options to decrease progression!
For more info on these myopia management options click on the tab below
How Brain Mapping Came About
Excerpted from the book: The Brain That Changes Itself
Stories of Personal Triumph from the Frontiers of Brain Science
By Norman Doidge, MD, Penguin Publishing, December, 2007
Brain maps were first made vivid in human beings by the neurosurgeon Dr. Wilder Penfield at the Montreal Neurological Institute in the 1930s. For Penfield, "mapping" a patient's brain meant finding where in the brain different parts of the body were represented and their activities processed — a solid localizationist project. Localizationists had discovered that the frontal lobes were the seat of the brain's motor system, which initiates and coordinates the movement of our muscles.
The three lobes behind the frontal lobe, the temporal, parietal, and occipital lobes, comprise the brain's sensory system, processing the signals sent to the brain from our sense receptors — eyes, ears, touch receptors, and so on.
Penfield spent years mapping the sensory and motor parts of the brain, while performing brain surgery on cancer and epilepsy patients who could be conscious during the operation, because there are no pain receptors in the brain. Both the sensory and motor maps are part of the cerebral cortex, which lies on the brain's surface and so is easily accessible with a probe. Penfield discovered that when he touched a patient's sensory brain map with an electric probe, it triggered sensations that the patient felt in his body. He used the electric probe to help him distinguish the healthy tissue he wanted to preserve from the unhealthy tumors or pathological tissue he needed to remove.
Normally, when one's hand is touched, an electrical signal passes to the spinal cord and up to the brain, where it turns on cells in the map that make the hand feel touched. Penfield found he could also make the patient feel his hand was touched by turning on the hand area of the brain map electrically. When he stimulated another part of the map, the patient might feel his arm being touched; another part, his face. Each time he stimulated an area, he asked his patients what they'd felt, to make sure he didn't cut away healthy tissue. After many such operations he was able to show where on the brain's sensory map all parts of the body's surface were represented.
He did the same for the motor map, the part of the brain that controls movement. By touching different parts of this map, he could trigger movements in a patient's leg, arm, face, and other muscles.
One of the great discoveries Penfield made was that sensory and motor brain maps, like geographical maps, are topographical, meaning that areas adjacent to each other on the body's surface are generally adjacent to each other on the brain maps. He also discovered that when he touched certain parts of the brain, he triggered long-lost childhood memories or dreamlike scenes — which implied that higher mental activities were also mapped in the brain.
The Penfield maps shaped several generations' view of the brain. But because scientists believed that the brain couldn't change, they assumed, and taught, that the maps were fixed, immutable, and universal — the same in each of us — though Penfield himself never made either claim.
If this post strikes a chord with you, we take brain plasticity possibilities a step further in Impossible Dream, the extraordinary story of triumph over disability told from the first-person perspective of a young woman living with autism.
Syndrome Series: Obsessive Compulsive Disorder
What is Obsessive Compulsive Disorder?
Obsessive Compulsive Disorder (OCD) is a condition characterized by the presence of obsessions and/or compulsions. Obsessions are recurrent thoughts, urges, or images that are intrusive and unwanted, while compulsions are repetitive behaviors or mental acts performed in response to the obsessions (or according to rules that must be rigidly followed).
Types of Obsessions and Compulsions
The types of obsessions and compulsions vary broadly, although there are common themes. Stereotypical OCD symptoms showcased in media are fear of contamination accompanied by compulsive cleaning (seen famously on the TV show Monk). Other common themes include symmetry (organizing, ordering, or counting compulsions), morality (sexual, aggressive, or religious based compulsions), or harm (checking compulsions for fear of harming others). These themes are seen globally across cultures with minor variances.
The performance of the compulsion is done in an attempt to mitigate anxiety or distress associated with the obsession. Individuals with OCD typically have an impending sense of doom if they don’t perform the compulsions, or they may believe something horrific will occur if they do not perform the tasks. The individual with this condition finds their compulsions and obsessions distressing, and the obsessions and compulsions themselves can take up a significant amount of time. Because of this, they may avoid people or certain places in order to avoid a trigger for a compulsion.
Realistically, the compulsive action and the obsession are not connected in any significant way. However, the extent to which the individual recognizes this depends on their insight.
Insight refers to how well the individual recognizes the credibility of their beliefs. They may have good or fair insight in which they realize their disordered beliefs are definitely or most likely untrue; poor insight in which they think their obsessive compulsive beliefs are probably true; or absent insight in which they are completely convinced their disordered beliefs are true.
Prevalence and Transmission
The average age of onset is 19.5 years old, with a quarter of cases starting by the age of 14. Females tend to be affected slightly more than males in adulthood, while males are more affected in childhood. The prevalence in the U.S. is 1.2%, with similar prevalence rates seen globally. The condition is about twice as common among people with an affected first-degree relative as among those without one.
First Line Treatments
Treatment options for OCD generally include psychotherapy and pharmaceuticals. These can be used individually or in combination with each other.
Cognitive Behavioral Therapy (CBT) is one of the first line treatment options for OCD. CBT is effective in treating OCD by helping the individual become aware of the cognitive distortions present that are leading to their compulsive behavior. Once identified, the clinician can work with the patient to untangle how the obsession and compulsion are not directly related and ultimately remove the desire to complete the compulsion when faced with a trigger.
In addition to standard CBT, there is another type of CBT called Exposure and Response Prevention (ERP) that can be highly effective in the treatment of OCD. With this type of therapy, the client is systematically exposed to gradually increasing levels of the trigger for their compulsions and assisted in learning how to reject the compulsion.
Mayo clinic reports the following antidepressants approved by the U.S. Food and Drug Administration (FDA) to treat OCD:
- Clomipramine (Anafranil) for adults and children 10 years and older
- Fluoxetine (Prozac) for adults and children 7 years and older
- Fluvoxamine for adults and children 8 years and older
- Paroxetine (Paxil, Pexeva) for adults only
- Sertraline (Zoloft) for adults and children 6 years and older
Additional Treatment Options
Other treatment options may be considered if first-line treatments fail. These include Deep Brain Stimulation and Transcranial Magnetic Stimulation. These options are typically reserved for cases in which first-line treatments have not been found to be effective, and are generally used in patients over the age of 18. In both of these treatment options, different neurological regions of the brain are stimulated (with implanted electrodes in Deep Brain Stimulation, and with magnetic fields in Transcranial Magnetic Stimulation) in order to suppress compulsive thoughts and behaviors.
Ready to learn more?
Give our question banks a try- FREE- using our Free Trial! Or if you’re ready to take the plunge, check out our Question Banks and find the perfect fit for you! Or, contact us with any questions you have so we can get you on the right path today!
American Psychiatric Association. (2022). Diagnostic and statistical manual of mental disorders (5th ed., text rev.). https://doi.org/10.1176/appi.books.9780890425787
UNAM honorary doctor Joanne Chory has been working on a solution for carbon sequestration for more than a decade. It is based on the fact that plants can take carbon dioxide (CO2) from the air through photosynthesis and turn it into biomass, and on the fact that the Earth's soils can hold a great deal of carbon - about 2,300 gigatons (Gt) of carbon at a depth of three meters, roughly three times the current CO2 reserve in the atmosphere.
It is thought that cropland and pasture soils, which cover about five billion hectares of land around the world, have a huge capacity to store carbon. This, along with the existing agricultural infrastructure, makes it possible to use genetics to improve traits related to plant-mediated carbon sequestration, she said in an interview.
Several plant traits are good candidates for helping to store carbon. One of these is root biomass, which determines root inputs and stores about five times more carbon than the same amount of above-ground litter, according to the 2020 winner of the Pearl Meister Greengard Prize.
"We decided that with this initiative we had to tap into some element of global distribution, and what we've done is work with corn and rice seeds in their wild forms, but you can also work with soybeans, sorghum, and canola. These species have a wide global distribution," the researcher explained.
Many plants could be used in the project, but they must have certain qualities. For example, they must have mechanisms that increase carbon sequestration, resist decomposition by soil microorganisms, and live longer in soils. This means that the final plants must be able to withstand a complex interaction between chemical composition, physical occlusion of carbon within soil aggregates, the formation of stable organo-mineral complexes, and the connectivity of the water flora.
"The modified plants are still in the research stage in the laboratory because there is still a long way to go before taking them to the field. But we have tried to avoid GMOs (genetically modified organisms), and what we are trying to do is edit the chain using CRISPR sequencing techniques (a gene editing tool that 'cuts' segments of a cell's DNA)," Chory explained.
Root biochemistry also plays a role in decomposition, and the amount of the natural product suberin in the roots is a major candidate trait. This is a lipophilic complex made up of long-chain fatty acids and polyaromatic compounds. It may be a good source for carbon sequestration because it is biochemically stable, interacts with soil minerals, and gets stuck in topsoil microaggregates.
Chory explains in a paper published in the journal Plant Cell in 2022 that the ideal plant needs to store suberin in the cell walls of its root cells and grow a large root system. To do this, candidate genes that affect how the root system is built and how many roots it has are chosen and combined with specific root promoters and genes that make suberin.
Using both traditional (breeding) and newer (editing the genome, genetic engineering) methods, the ideal plant is made by adding beneficial alleles and genes that increase root biomass and transgenes that increase suberin deposition in the root.
It is expected that, in addition to trapping more carbon, the plants will add hard-to-decompose carbon polymers to carbon-depleted soils. The 2018 Gruber Genetics Award winner stressed that plant development is still in the lab stage for now.
Chory thinks that the need for extensive testing is one of the problems to be solved before crops can be used to store carbon. At the moment, it is estimated that the final plants will be able to take in up to 1.85 gigatons of carbon per year in just 30 cm of cropland. If the roots went deeper and had a different biochemical makeup, they could store more carbon.
Time is running out because every year that goes by without a big drop in carbon will hurt billions of people and reduce the variety of life on our planet. "We know this is not the only solution, but we are inviting creative people to come up with different ideas, and together we can do something different than what we are doing today."
Joanne Chory, who also won the 2019 Princess of Asturias Award for Technical and Scientific Research, said that the lack of progress shown at the last Conference of the Parties is disappointing because no country is meeting its goals.
Scientists and the general public must work together to solve this big problem because countries and governments haven't been able to cut their emissions. So, she wants to be in charge of the Plant Harnessing Initiative, which aims to remove carbon dioxide from the air.
The scientist from the Salk Institute explained: "We only have eight years left until 2030 to make a change, and all nations must work together to do it. The change will cause chaos, but eight years is not a long time, so we must act now. This is a global problem that we all need to work on. Scientists and politicians alike need to think about how they can help. It's time to check the box, so that's what we're doing."
Because crayfish and lobsters live their lives moving backward, they have an unusual internal plumbing system. The kidney is located in front of the mouth, so the gill circulation can carry the wastes away from the body. If the kidney outlet were near the back end, as in most creatures, the wastes would be carried to the gills. This perfect design enables crayfish and lobsters to live efficiently, whether very slowly crawling forward or rapidly swimming backward.
Crayfish are freshwater crustaceans resembling small lobsters (to which they are related). They are also known as crawfish, crawdads, freshwater lobsters, mountain lobsters, mudbugs, or yabbies. … Crayfish feed on animals and plants, either living or decomposing, and detritus.
Capacitors are most commonly used in power supply systems, analog circuits, and tuned circuits. Their primary purpose is to store energy that can be released later, when needed.
Resistors are commonly used to charge capacitors, but what happens when you don’t have one? How do you charge a capacitor without a resistor?
In this article, we will learn how to charge a capacitor without a resistor by using variable voltage sources and variable resistance, so you can understand the basic principle behind charging and discharging a capacitor.
How To Charge Capacitor Without Resistor
To charge a capacitor, you must connect it to a complete circuit that includes a power source, a conducting path, and a load. Without a load, current does not flow through the circuit, and the capacitor does not charge.
You can use a load other than a resistor to charge a capacitor. You can charge a capacitor by connecting it directly to a battery, which will discharge into the capacitor, or you can place a light bulb of an adequate voltage rating in the charging path.
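Even without a deliberate resistor, any real charging path has some stray series resistance (wire resistance, the battery's internal resistance), so the capacitor voltage still follows the usual RC charging curve, V(t) = Vs * (1 - e^(-t/RC)). A minimal sketch; the 9 V source, 0.5 ohm stray resistance, and 1000 uF capacitance are arbitrary illustration values:

```python
import math

def charging_voltage(v_source, r_ohms, c_farads, t_seconds):
    """Voltage across a charging capacitor: V(t) = Vs * (1 - exp(-t / (R * C)))."""
    tau = r_ohms * c_farads  # time constant in seconds
    return v_source * (1.0 - math.exp(-t_seconds / tau))

# With only 0.5 ohm of stray resistance and a 1000 uF capacitor, the
# time constant is 0.5 ms, so charging is ~99% complete after five
# time constants (2.5 ms).
v_after_5_tau = charging_voltage(9.0, 0.5, 1000e-6, 5 * 0.5 * 1000e-6)
```

The smaller the stray resistance, the faster the charge, but the larger the initial inrush current (Vs/R), which is why charging a capacitor directly from a stiff battery can stress both components.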
What Is A Capacitor?
A capacitor is a two-terminal passive electrical component that stores charge in an electric field. This energy can be used later, for example, to drive an oscillator.
Capacitors are used in everyday devices like radios, televisions, and flashlights, and they have been around for many years.
Capacitors are made up of two parallel plates, separated by a small distance by an insulating layer called the dielectric. When no voltage is applied to the plates, they carry no net charge.
When a voltage is applied across the plates, they become charged with opposite polarities. Because electrons carry a negative charge, they accumulate on one side of the capacitor, while the other side is left with an equal positive charge.
When no current flows through the circuit, the capacitor stores electric potential energy or voltage. This property makes it useful in many devices, including radios and televisions.
When a capacitor is placed in series with a voltage source, the voltage across the capacitor will always lag behind the applied voltage.
This phenomenon is called capacitive reactance and can be observed by connecting any capacitor (or combination of capacitors in parallel) to an AC voltmeter.
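Capacitive reactance can be computed directly from the frequency and capacitance. The sketch below is illustrative only; the component values are assumptions, not taken from the article:

```python
import math

def capacitive_reactance(frequency_hz, capacitance_f):
    """Reactance (in ohms) of a capacitor at a given AC frequency: Xc = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * frequency_hz * capacitance_f)

# Assumed example values: a 100 uF capacitor at 50 Hz mains frequency.
xc = capacitive_reactance(50, 100e-6)
print(f"Reactance: {xc:.1f} ohms")  # about 31.8 ohms
```

Note that reactance falls as either frequency or capacitance increases, which is why a capacitor passes high frequencies more easily than low ones.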
What Is Internal Resistance?
Internal resistance causes voltage drops as current flows and reduces the power factor. Internal resistance is also known as “parasitic” or “series” resistance. This type of resistance is due to the following:
- interconnections of the various components in a circuit, or the wire itself;
- interaction of the charge carriers – electrons and holes – with the material through which they travel.
Capacitance measures the amount of electric charge stored for a given potential difference. Capacitors are built from different materials, each giving the component its own characteristics.
The capacitance and voltage rating are fixed physical characteristics of a capacitor. A larger plate has less resistance than a smaller plate, as it provides a more extensive surface area for electrons to flow between the plates.
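The definition of capacitance leads to two standard formulas: the stored charge Q = C·V and the stored energy E = ½·C·V². A minimal sketch, with assumed example values:

```python
def stored_charge_coulombs(capacitance_f, voltage_v):
    """Charge stored on a capacitor: Q = C * V."""
    return capacitance_f * voltage_v

def stored_energy_joules(capacitance_f, voltage_v):
    """Energy stored in a charged capacitor: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Assumed example values: a 100 uF capacitor charged to 9 V.
print(stored_charge_coulombs(100e-6, 9))  # ~0.0009 C
print(stored_energy_joules(100e-6, 9))    # ~0.00405 J
```

Because energy grows with the square of the voltage, doubling the charging voltage quadruples the stored energy.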
The physical size of a capacitor is called its package, and the capacitance (C) is usually specified for each package.
Everyday uses of capacitors include:
- electronic circuits, where they temporarily store electrical energy and release it on demand for smoothing, filtering, timing, power factor correction, and other electrical applications;
- many other applications, including signal processing and acting as an electronic memory.
What Is The Role Of Resistor In Charging Capacitor?
A resistor, also known as an ohmic resistor, is used to reduce the current flow in a circuit and to control the voltage in the circuit.
When a large current flows through a resistor, the resistor heats up, which can change its resistance value.
Ohm’s law describes the concept of resistance. Resistors are popularly used to limit current in a circuit, to regulate voltage, or both.
Resistors are used with capacitors to control how much current flows through the circuit. They have a resistance that limits the flow of electrons and ultimately controls the voltage in the circuit.
As current flows through a resistor and charges up a capacitor, the capacitor’s voltage will rise; when fully charged, the current ceases to flow, and the voltage of the capacitor remains constant.
A resistor added to the circuit limits how much current could flow through it. This controls how much charge can be stored in the capacitor and thus controls how high its voltage will rise. A potentiometer acts as a variable resistor to control how much resistance is present in a circuit.
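The charging behaviour described above follows the standard RC time-constant law, V(t) = Vs·(1 − e^(−t/RC)). A minimal sketch, with assumed example values (not taken from the article):

```python
import math

def rc_charging_voltage(v_source, r_ohms, c_farads, t_seconds):
    """Voltage across a charging capacitor in a series RC circuit:
    V(t) = Vs * (1 - exp(-t / (R*C)))."""
    tau = r_ohms * c_farads  # time constant in seconds
    return v_source * (1 - math.exp(-t_seconds / tau))

# Assumed example values: 9 V source, 1 kOhm resistor, 100 uF capacitor (tau = 0.1 s).
for t in (0.1, 0.3, 0.5):
    print(f"t = {t:.1f} s -> V = {rc_charging_voltage(9, 1000, 100e-6, t):.2f} V")
```

After one time constant the capacitor reaches about 63% of the source voltage (about 5.69 V here), and after five time constants it is effectively fully charged, at which point current stops flowing, as the text describes.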
How To Charge A Capacitor Without Resistor
When you connect a capacitor to a voltage source, an electric field forms and charges the capacitor with energy until it reaches the same potential as the source.
- Using a 100uF capacitor, you’ll connect the capacitor to a 9V source and then place an inductor between the positive terminal of the battery and one terminal of the capacitor.
- You then connect the capacitor to an oscilloscope, and you can observe its voltage while applying different frequencies through the inductor.
The vital thing to know is that a capacitor will store energy when attached to a battery.
- The capacitor will charge and discharge through the resistor, but if you attach an inductor in parallel with the resistor, the inductor will do all of the heavy lifting.
When the voltage across the inductor changes, it induces a current in the opposite direction. If there is no resistor to consume energy while charging, the energy will keep oscillating back and forth until it dissipates as heat.
An inductor will store and release far more energy than a resistor, which means it can release energy in a short, high-current burst.
In this case, the inductor’s abrupt release of energy will cause a significant voltage spike on the output, which may be able to blow out our fuses or even damage the components.
For this reason, it is essential to make sure that there is always a resistor between an inductor and the ground. You may charge the capacitor to a higher voltage than the source by connecting a diode to the inductor.
What Is An Inductor?
An inductor is a passive electrical component, typically a coil of wire, that stores energy in a magnetic field. When you connect an inductor to a source of AC, it tends to resist any changes in current.
A typical example of inductors is transformers (large wire coils) used in power supplies and electronic circuits. Their ability to store energy is what makes them ideal for these applications.
The principle behind the inductor, electromagnetic induction, was discovered by Michael Faraday in 1831. Inductors are used in all kinds of electronics and are even found in everyday devices. You can use an inductor to filter out specific frequencies, regulate a power supply, create oscillations, and more!
The inductor and capacitor create a series resonant circuit. The diode disconnects the capacitor from the source when the inductor current reaches zero, and the capacitor achieves a full charge.
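The series resonant circuit described here has a natural frequency of f = 1/(2π√(LC)), and in the ideal lossless case, resonant charging through a blocking diode lets the capacitor ring up to roughly twice the source voltage before the diode disconnects it. A sketch under those assumptions, with illustrative component values not taken from the article:

```python
import math

def lc_resonant_frequency(l_henries, c_farads):
    """Natural frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henries * c_farads))

# Assumed example values: a 10 mH inductor with a 100 uF capacitor.
f = lc_resonant_frequency(10e-3, 100e-6)
print(f"Resonant frequency: {f:.0f} Hz")  # about 159 Hz

# Ideal lossless resonant charging: the capacitor peaks near twice the
# source voltage, at which point the diode blocks and the charge is trapped.
v_source = 9
print(f"Peak capacitor voltage: about {2 * v_source} V")
```

Real circuits have losses, so the actual peak is somewhat below the ideal 2× figure, but this is the mechanism that allows a capacitor to be charged above the source voltage.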
Process Of Charging A Capacitor Without A Resistor
To charge a capacitor without a resistor, you will have to use the concept of voltage division: connect the components in parallel so that they are affected by the same voltage. If done correctly, you can charge a capacitor without a resistor.
The maximum value of the charging current occurs when there is no resistance between the two terminals of a capacitor.
- Connect your capacitor to a power supply (this could be a battery or a light bulb).
- Connect a cable or another device that can act as an electrical switch between this charge and the ground.
- Pick an electrical probe with a high enough conductance to make contact without disrupting the charge flow too much.
- Connect one end of this device to each side of the capacitor, making sure no metal parts touch each other inside, since this might produce a short circuit.
- Increase the voltage gradually until sparks shoot from both sides at the same time.
- Measure the voltage across the capacitor with your voltmeter.
- Once fully charged, minimal current will flow out of the capacitor until another path becomes available. Remember that the longer you keep it connected, the more charge your capacitor will accumulate.
- Remove the wire or switch; electrons will no longer travel back and forth between the plates, even though the plates remain charged. You may need to replace it with something capable of conducting electricity.
It’s possible to charge a capacitor without the help of a resistor. You can do so with a battery or a light bulb. However, it would be best to use a resistor to keep your capacitors from overcharging, which can damage them.
You also need to carefully monitor the voltage and current; if too much current passes through the capacitor for too long, it will explode and become useless.
The autonomic nervous system (ANS) controls several basic functions, including heart rate, blood pressure, digestion, body temperature, and perspiration.
You don’t have to think consciously about these systems for them to work. The ANS provides the connection between your brain and certain body parts, including internal organs. For instance, it connects to your heart, liver, sweat glands, skin, and even the interior muscles of your eye.
The ANS includes the sympathetic autonomic nervous system (SANS) and the parasympathetic autonomic nervous system (PANS). Most organs have nerves from both the sympathetic and parasympathetic systems.
The SANS usually stimulates organs. For example, it increases heart rate and blood pressure when necessary. The PANS usually slows down bodily processes. For example, it reduces heart rate and blood pressure. However, the PANS stimulates digestion and the urinary system, and the SANS slows them down.
The main responsibility of the SANS is to trigger emergency responses when necessary. These fight-or-flight responses get you ready to respond to stressful situations. The PANS conserves your energy and restores tissues for ordinary functions.
Autonomic dysfunction develops when the nerves of the ANS are damaged. This condition is called autonomic neuropathy or dysautonomia. Autonomic dysfunction can range from mild to life-threatening. It can affect part of the ANS or the entire ANS. Sometimes the conditions that cause problems are temporary and reversible. Others are chronic, or long term, and may continue to worsen over time.
Diabetes and Parkinson’s disease are two examples of chronic conditions that can lead to autonomic dysfunction.
Dysautonomia is a general term used to describe a breakdown or abnormal function of the ANS. The autonomic nervous system controls much of your involuntary functions. Symptoms are wide-ranging and can include problems with the regulation of heart rate, blood pressure, body temperature, perspiration, and bowel and bladder functions. Other symptoms include fatigue, lightheadedness, feeling faint or passing out (syncope), weakness, and cognitive impairment.
Orthostatic intolerance refers to impairment in the body’s ability to handle gravity. When a person stands, blood pools in the abdomen and legs. Normally, the autonomic nervous system will compensate by constricting blood vessels and pushing the blood to the brain. When autonomic pathways are damaged, these reflexes, termed baroreflexes, do not function adequately. As a result, the person becomes dizzy, light-headed, and may faint.
In addition, digestion is controlled by the autonomic nervous system. When the ANS malfunctions, the “victim” commonly develops gastrointestinal problems. Symptoms include nausea, bloating, vomiting, severe constipation, and abdominal pain.
Autonomic dysfunction can occur as a secondary condition of another disease process, like diabetes, or as a primary disorder where the autonomic nervous system is the only system impacted. These conditions are often misdiagnosed.
Over one million Americans are impacted by a primary autonomic system disorder. The more common forms of these conditions include:
- Orthostatic hypotension (OH)
- Orthostatic intolerance (OI)
- Postural orthostatic tachycardia syndrome, also known as postural tachycardia syndrome (POTS)
- Neurogenic bowel (gastroparesis, intestinal dysmotility, constipation)
- Erectile dysfunction and neurogenic bladder
Types of autonomic dysfunction
Autonomic dysfunction can vary in symptoms and severity, and the different forms often stem from different underlying causes. Certain types of autonomic dysfunction can be very sudden and severe, yet also reversible.
Different types of autonomic dysfunction include:
Postural orthostatic tachycardia syndrome (POTS)
POTS affects anywhere from 1 to 3 million people in the United States. Nearly five times as many women have this condition compared to men. It can affect children, teenagers and adults. It can be also associated with other clinical conditions such as Ehlers-Danlos syndrome, an inherited condition of abnormal connective tissue.
POTS symptoms can range from mild to severe. Up to one out of four people with POTS have significant limitations in activity and are unable to work due to their condition.
Neurocardiogenic syncope (NCS)
NCS is also known as vasovagal syncope. It’s a common cause of syncope, or fainting. The fainting is a result of a sudden slowing of blood flow to the brain and can be triggered by dehydration, sitting or standing for a long time, warm surroundings and stressful emotions. Individuals often have nausea, sweating, excessive tiredness, and ill feelings before and after an episode.
Multiple system atrophy (MSA)
MSA is a fatal form of autonomic dysfunction. Early on, it has symptoms similar to Parkinson’s disease. But people with this condition usually have a life expectancy of only about 5 to 10 years from their diagnosis. It’s a rare disorder that usually occurs in adults over the age of 40. The cause of MSA is unknown, and no cure or treatment slows the disease.
Hereditary sensory and autonomic neuropathies (HSAN)
HSAN is a group of related genetic disorders that cause widespread nerve dysfunction in children and adults. The condition can cause an inability to feel pain, temperature changes, and touch. It can also affect a wide variety of body functions. The disorder is classified into four different groups depending on age, inherited patterns, and symptoms.
Holmes-Adie syndrome (HAS)
HAS mostly affects the nerves controlling the muscles of the eye, causing vision problems. One pupil will likely be larger than the other, and it will constrict slowly in bright light. Often it involves both eyes. Deep tendon reflexes, like those in the Achilles tendon, may also be absent.
HAS may occur due to a viral infection that causes inflammation and damages neurons. The loss of deep tendon reflexes is permanent, but HAS isn’t considered life-threatening. Eye drops and glasses can help correct vision difficulties.
Autonomic dysfunction can affect a small part of the ANS or the entire ANS. Some symptoms that may indicate the presence of an autonomic nerve disorder include:
- dizziness and fainting upon standing up, or orthostatic hypotension
- an inability to alter heart rate with exercise, or exercise intolerance
- sweating abnormalities, which could alternate between sweating too much and not sweating enough
- digestive difficulties, such as a loss of appetite, bloating, diarrhea, constipation, or difficulty swallowing
- urinary problems, such as difficulty starting urination, incontinence, and incomplete emptying of the bladder
- sexual problems in men, such as difficulty with ejaculation or maintaining an erection
- sexual problems in women, such as vaginal dryness or difficulty having an orgasm
- vision problems, such as blurry vision or an inability of the pupils to react to light quickly
You can experience any or all of these symptoms depending on the cause, and the effects may be mild to severe. Symptoms such as tremor and muscle weakness may occur due to certain types of autonomic dysfunction.
Orthostatic intolerance is a condition whereby your body is affected by changes in position. An upright position triggers symptoms of dizziness, lightheadedness, nausea, sweating, and fainting. Lying down improves the symptoms. Often this is related to an improper regulation of the ANS.
Orthostatic hypotension is a type of orthostatic intolerance. Orthostatic hypotension occurs when your blood pressure drops significantly as you stand up. This can cause lightheadedness, fainting, and heart palpitations. Injury to nerves from conditions like diabetes and Parkinson’s disease can cause episodes of orthostatic hypotension due to autonomic dysfunction.
Other types of orthostatic intolerance due to autonomic dysfunction include:
- postural orthostatic tachycardia syndrome
- neurocardiogenic syncope or vasovagal syncope
Many health conditions can cause autonomic neuropathy. It can also be a side effect of treatments for other diseases, such as cancer. Some common causes of autonomic neuropathy include:
Abnormal protein buildup in organs (amyloidosis), which affects the organs and the nervous system.
Autoimmune diseases, in which your immune system attacks and damages parts of your body, including your nerves. Examples include Sjogren’s syndrome, systemic lupus erythematosus, rheumatoid arthritis and celiac disease. Guillain-Barre syndrome is an autoimmune disease that happens rapidly and can affect autonomic nerves.
An abnormal attack by the immune system that occurs as a result of some cancers (paraneoplastic syndrome) can also cause autonomic neuropathy.
Diabetes, especially with poor glucose control, is the most common cause of autonomic neuropathy. It can gradually cause nerve damage throughout the body.
Certain medications, including some drugs used in cancer chemotherapy.
Certain infectious diseases. Some viruses and bacteria, such as botulism, Lyme disease and HIV, can cause autonomic neuropathy.
Your doctor will treat autonomic dysfunction by addressing the symptoms. If an underlying disease is causing the problem, it’s important to get it under control as soon as possible.
Often, orthostatic hypotension can be helped by lifestyle changes and prescription medication. The symptoms of orthostatic hypotension may respond to:
- elevating the head of your bed
- drinking enough fluids
- adding salt to your diet
- wearing compression stockings to prevent blood pooling in your legs
- changing positions slowly
- taking medications like midodrine
Nerve damage is difficult to cure. Physical therapy, walking aids, feeding tubes, and other methods may be necessary to help treat more severe nerve involvement.
While certain inherited diseases that put you at risk of developing autonomic neuropathy can’t be prevented, you can slow the onset or progression of symptoms by taking care of your health in general and managing your medical conditions.
Follow your doctor’s advice on healthy living to control diseases and conditions, which might include these recommendations:
Control your blood sugar if you have diabetes.
Avoid alcohol and smoking.
Get appropriate treatment if you have an autoimmune disease.
Take steps to prevent or control high blood pressure.
Achieve and maintain a healthy weight.
Inherited disorders. Certain hereditary disorders can cause autonomic neuropathy.
The Theory: Sound and light sensitivity
Can our hypothesis also explain the sound and light sensitivity of brain fatigue?
The brain has limitations in how much can be processed and reach conscious areas at a time, so proper sorting or filtering systems are needed. When information is repeated, nerve cells in the healthy brain will reduce their signalling intensity after a while and, for example, the sound is no longer registered. We talk about adaptation of the signalling. This does not work for persons suffering from brain fatigue. They describe that everything is registered, important as well as unimportant information, and that handling all the impressions becomes very tiring and difficult. The signalling does not adapt, which explains why those affected are very easily disturbed and cannot maintain focus. If the astrocytes’ handling of glutamate is impaired, incoming information becomes more nonspecific and is perceived as new. It is then not filtered out but passes up to higher brain centres for processing. In this context, it is important to be aware that we still know very little about the way the brain works.
What exactly is Architecture?
- by siteadmin
Architecture is a way of expressing a human's capabilities and needs through the creation of buildings, cities, and infrastructure. It also reflects the culture and heritage of the people.
Every piece of architecture is able to evoke different experiences and feelings in the viewer. This experience is largely dependent on the combination of design elements and principles, the colors used, the materials used, and the composition of the building.
It is a form of art.
Art is a form of human creativity that involves the expression of technical proficiency, beauty, or emotional power. It includes painting, sculpture, printmaking, drawing, decorative arts, photography, and installation.
Architecture is a form of art that uses creativity to design buildings. It is often used for social change and to make people feel more connected to one another.
The art of architecture consists of many different types of structures and can include anything from a building to an entire city. It is important to understand that each type of architecture has its own purpose, meaning, and style.
One of the most common misconceptions about architecture is that it only focuses on purely aesthetic purposes. However, architects design buildings to meet the needs of their patrons, and this includes utilitarian functions.
It is a form of communication.
Architecture is a form of art that can communicate culture and history to people across the world. It is also a useful medium for designers to express aesthetic principles and adhere to societal constraints while still creating spaces that appeal to the senses and evoke feelings.
For an architect, communicating their designs can be a complex process. They will have to explain their ideas to clients, contractors, and suppliers, who might all require different levels of detail.
Some architects claim that architecture is a language, much like music or sculpture. While this may be an appropriate metaphor, it fails to capture the full picture of architectural communication.
It is a form of technology.
Architecture is a form of technology that involves the use of various tools to design buildings and structures. It also includes infrastructure that supports human activities, such as roads and tunnels.
Technologists define technology as the rational process of creating means (tools, devices, systems, methods, and procedures) to order and transform matter, energy, or information to accomplish certain valued ends. Examples include computer technology and medical technology.
However, it's important to note that technology can be used in a negative way, such as through political oppression and war. This is why it's important to understand how technology can impact our culture.
Architectural technology includes the use of computers to create designs and help architects plan their projects. It can also involve eco-friendly building methods and space-saving techniques.
It is a form of design.
Architecture is the art of designing buildings and structures. It is a complex discipline that requires both creativity and technical expertise.
The design process focuses on the relationships between elements and how they work together to create a unified structure. It involves observing optimal proportions, scale, and symmetry.
It also requires skill in the use of contrast and other nuances. Its goals are to create structures that are beautiful, functional, and durable.
It also affects human wellbeing on a personal level, and studies have shown that occupants who are comfortable in their buildings and spaces feel more engaged and productive and take less sick leave. Sterile, concrete landscapes and unimaginative buildings can cause stress, which is why architects seek to create more natural spaces for people to connect with their environment.
A building material is a type of construction material utilized throughout the building process. It might be either natural or man-made.
The materials used in architecture have a significant impact on the overall design and aesthetics of a building. Selecting the correct materials for your project can increase its durability, sustainability, and environmental friendliness.
The materials used in architecture have progressed over time, and a building can now be created out of almost anything. New materials are constantly being produced, while old ones are being re-invented.
Unlike the conventional light-sensing cells in the retina (rods and cones), melanopsin-containing cells are not used for seeing images.
Instead, they monitor light levels to adjust the body’s clock and control constriction of the pupils in the eye, among other functions.
“These melanopsin-containing cells are the only other known photoreceptor besides rods and cones in mammals, and the question is, how do they work?” said Michael Do, a postdoctoral fellow in neuroscience at Johns Hopkins.
“We want to understand some fundamental information, like their sensitivity to light and their communication to the brain,” he said.
They found that these cells are very insensitive to light, in contrast to rods, which are very sensitive and therefore enable us to see in dim light at night, for example.
According to Do, the melanopsin-containing cells are less sensitive than cones, which are responsible for our vision in daylight.
“The next question was, what makes them so insensitive to light? Perhaps each photon they capture elicits a tiny electrical signal. Then there would have to be bright light-giving lots of captured photons for a signal large enough to influence the brain. Another possibility is that these cells capture photons poorly,” said Do.
To figure this out, the team flashed dim light at the cells. The light was so dim that, on average, only a single melanopsin molecule in each cell was activated by capturing a photon.
They found that each activated melanopsin molecule triggered a large electrical signal. Moreover, to their surprise, the cell transmits this single-photon signal all the way to the brain, said a Johns Hopkins release.
Yet the large signal generated by these cells seemed incongruous with their need for such bright light. “We thought maybe they need so much light because each cell might also contain very few melanopsin molecules, decreasing their ability to capture photons,” said King-Wai Yau, a professor of neuroscience at Hopkins.
When they did the calculations, the research team found that melanopsin molecules are 5,000 times sparser than other light-capturing molecules used for image-forming vision.
“It appears that these cells capture very little light. However, once captured, the light is very effective in producing a signal large enough to go straight to the brain,” said Yau.
How making supports integrative and informed thinking
Makerspace learning at Proctor Elementary
In this final post of our series on how maker-centered learning can help students develop transferable skills, we take a look at Integrative and Informed Thinking.
During EMMA’s visit to Proctor Elementary School, in Proctor VT, the potential for maker-centered learning to support students’ integrated and informed thinking really came to life. Once again, the Design Thinking process was used to guide the making, providing a structure within which students could build knowledge and systematize their thinking.
The more students create and make, the more they see the world through a different lens.
They start to see the world as made up of systems that work together to provide a certain function or outcome. Each part of the system interacts with other parts to influence the intended outcome.
The students were given a challenge – to make something that teaches someone something. By following the design thinking protocol they were able to create something that was meaningful to them and helpful to others! Students started by using empathy to carefully consider what they would teach and to whom they would teach it.
Game plan in hand, students arrived at the build session and were greeted by materials and tools provided by EMMA. When we unveiled the fact that EMMA was filled with PinBox 3000 kits, students were ecstatic and quickly volunteered to help unload EMMA.
Eager to start building, we moved to the IDEATE (brainstorming) phase of Design Thinking. The ideas that were flying during the IDEATE session were influenced by both the materials that lay in front of students and the game plan they had produced during the EMPATHY and DEFINE phases.
Soon the tone of the conversations moved from the wild ideas to creating a prototype. It was easy to spot the different types of learners in the room as they took on different roles in designing parts of a game system. With so many ideas and materials at play, students needed to negotiate with each other to justify why their ideas would best meet the outcomes they had planned for.
The prototypes that emerged ranged from games that would make repetitive practice more engaging (math skills games) to models of more complex systems like deer hunting ethics.
Soon, it was time to switch from “making” to explaining and listening. Students took turns sharing their prototypes, asking questions, and offering feedback.
As the school day came to an end, the students were not ready to stop.
The room that had been filled with people, parts and tools now contained prototypes of game systems and well articulated plans for next steps.
Through making, students had a chance to engage with many of the performance indicators for Integrated and Informed Thinking.
The research process forced students to “synthesize information from multiple sources.” The PinBox games represented student efforts to “develop and use models to explain phenomena.” Many students had “applied systems thinking” to represent complex systems, such as the ethics of deer hunting or the progression of geological change, through their models.
EMMA’s day at Proctor Elementary School is a good example of the ways that a rich making experience cuts across the Transferable Skills. Through each of these EMMA visits, maker-centered learning provided so many opportunities for students to grow in a wide variety of areas that include cross-curricular skills, discipline-specific skills and knowledge, and social emotional areas as well.
We believe that once students have had an organic, empowering, and authentic making experience, reflection rooted in the Transferable Skills can help students solidify their learning and identify evidence of growth and proficiency.
How do you tie making to transferable skills?
All posts in the series:
Endorphins are natural chemicals produced in the body to reduce pain and boost happiness. They are most often associated with exercise, since the release of these “feel-good” chemicals causes a state of euphoria usually known as a “runner’s high.” However, almost any exercise will cause this state of happiness, and it is also boosted through laughter and excitement.
In recent years, studies involving endorphins have begun to focus on how this chemical contributes to learning. Physical activity is essential to brain development. Basically, when we feel good, we learn better. Intellectually stimulating the brain when endorphins have been released helps even more. For the last five years, neuroscientists have been encouraging parents and teachers to work on stimulating the good-feeling chemicals in the brain. The mind-body connection is a powerful thing.
When working with children and teens, it’s important to remember this and help develop the whole self. By stimulating the positive neurotransmitters in the body, we will combat the cortisol and, therefore, have more happy children and teens. Physical activity leads to happiness, happiness leads to better learning, better learning leads to increased knowledge, increased knowledge leads to more confidence and so on.
Now that we understand the neuroscience surrounding endorphins, how can we, as parents, teachers, coaches, and anyone who works with children, use this information? We must create a learning environment that releases endorphins so that students apply more effort and are better able to focus. Our program does this by teaching with the brain in mind and utilizing game-based learning. Along with this, two of the Teaching skills that are used in class are specifically designed to increase the students’ endorphin levels.
1) Up The Rep: The use of “up the rep” as a teaching skill in class helps the students have more energy throughout an exercise, which ultimately leads to them exhibiting more effort. For example, if students are practicing side kicks on a bag and they are told to do 50, the goal is for the 50th kick to be the best one. However, students often start out full speed and their energy depletes as they get closer to the 50th rep. The best thing to do is have them start out their reps easier and increase their power as they get to 50. That way they end with their best one yet! This gives the students a rush of endorphins and they finish the exercise feeling stronger.
2) Neurobics: The use of "neurobics" in class helps the students by increasing their neural stimulation and, therefore, they become more focused. For example, if the students are doing pushups, instead of counting to 10, count in colors or characters, or even count backwards. This increases the neural firing in their brains and keeps their minds from wandering.
By utilizing these techniques, the endorphins in the students' brains increase, and then they feel better and, therefore, learn better. The combination of having more energy and being cognitively stimulated leads to more effort and focus in class!
What is atomic number set equal to?
The atomic number of an atom is equal to the number of protons in the nucleus of an atom or the number of electrons in an electrically neutral atom. For example, in a sodium atom, there are 11 electrons and 11 protons. Thus the atomic number of Na atom = number of electrons = number of protons = 11.
What is atomic mass number equal to?
Together, the number of protons and the number of neutrons determine an element’s mass number: mass number = protons + neutrons.
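To make the arithmetic concrete, here is a minimal Python sketch; the isotopes used are standard textbook examples, not values taken from this page:

```python
# Mass number = number of protons + number of neutrons
def mass_number(protons, neutrons):
    return protons + neutrons

# Sodium-23 has 11 protons and 12 neutrons
print(mass_number(11, 12))  # → 23
# Carbon-14 has 6 protons and 8 neutrons
print(mass_number(6, 8))    # → 14
```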
What is the number of atoms equal to?
The number of atoms of ANY substance in a volume is: # of atoms = N × (density) × (volume) / (molecular weight). N is a constant called Avogadro's number, and it's equal to 6.022 × 10²³ atoms per mole (it can also be expressed as molecules per mole).
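That formula can be checked numerically. In this illustrative sketch, the aluminium figures (density ≈ 2.70 g/cm³, molar mass ≈ 26.98 g/mol) are common reference values supplied only as an example:

```python
AVOGADRO = 6.022e23  # atoms per mole

# atoms = N * density * volume / molecular weight
def number_of_atoms(density_g_cm3, volume_cm3, molar_mass_g_mol):
    return AVOGADRO * density_g_cm3 * volume_cm3 / molar_mass_g_mol

# Atoms in 1 cm^3 of aluminium: about 6.03e22
print(f"{number_of_atoms(2.70, 1.0, 26.98):.2e}")
```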
What is the atomic number equal to Class 9?
Atomic number of an element tells about the number of protons present. The number of protons present will be equal to the number of electrons. Therefore, the atomic number of an atom is equal to both the number of electrons and protons present in an atom.
What is the atomic number Class 10?
The atomic number is the number of protons in the nucleus of an atom. The number of protons define the identity of an element (i.e., an element with 6 protons is a carbon atom, no matter how many neutrons may be present).
Are protons and electrons equal?
The number of electrons in a neutral atom is equal to the number of protons.
What is the unit of atomic mass?
The atomic mass unit (AMU or amu) of an element is a measure of its atomic mass. Also known as the dalton (Da) or unified atomic mass unit (u), the AMU expresses both atomic masses and molecular masses. AMU is defined as one-twelfth the mass of an atom of carbon-12 (¹²C).
What is carbon 14 atomic number?
Two different forms, or isotopes, of carbon are shown below: Carbon-12: with 6 protons and 6 neutrons and an atomic mass of 12. Carbon-14: with 6 protons and 8 neutrons, and an atomic mass of 14.
What is the formula for a mole?
1 mole is an amount equal to 6.022 × 10²³ particles, a value known as Avogadro's constant. To calculate the number of moles of any substance in a sample, we simply divide the given mass of the substance by its molar mass.
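As a quick worked example (water's molar mass of roughly 18 g/mol is a standard value, not taken from this page):

```python
def moles(mass_g, molar_mass_g_mol):
    # moles = given mass / molar mass
    return mass_g / molar_mass_g_mol

# 36 g of water at ~18 g/mol is 2 moles
print(moles(36.0, 18.0))  # → 2.0
```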
What is the formula for atomic weight?
Atomic Weight = (% abundance of isotope 1 / 100) × (mass of isotope 1) + (% abundance of isotope 2 / 100) × (mass of isotope 2) + … Similar terms would be added for all the isotopes.
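The weighted average is easy to verify in code. This sketch uses chlorine's two stable isotopes with commonly cited abundances and masses (illustrative values, not from this page):

```python
def atomic_weight(isotopes):
    # isotopes: list of (percent_abundance, isotope_mass) pairs;
    # each term is (% abundance / 100) * mass, summed over all isotopes
    return sum(pct / 100.0 * mass for pct, mass in isotopes)

# Chlorine: ~75.77% Cl-35 (34.97 u) and ~24.23% Cl-37 (36.97 u)
print(round(atomic_weight([(75.77, 34.97), (24.23, 36.97)]), 2))  # → 35.45
```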
Why are atoms equal?
When an atom has an equal number of electrons and protons, it has an equal number of negative electric charges (the electrons) and positive electric charges (the protons). The total electric charge of the atom is therefore zero and the atom is said to be neutral.
Does Z stand for atomic number?
The atomic number (represented by the letter Z) of an element is the number of protons in the nucleus of each atom of that element. An atom can be classified as a particular element based solely on its atomic number.
What is difference between atomic number and atomic mass?
The atomic number is the number of protons in an element, while the mass number is the number of protons plus the number of neutrons.
What is a Valency Class 9?
Valency is simply equal to the number of electrons gained, lost or shared by an atom of an element to achieve the nearest noble gas configuration. For example, the valency of sodium (Na) is 1, magnesium (Mg) is 2, chlorine (Cl) is 1, etc.
What is the atomic number equal to 6?
The element carbon (C) has an atomic number of 6, which means that all neutral carbon atoms contain 6 protons and 6 electrons.
Are protons and neutrons equal?
Inside an atomic nucleus, the number of neutrons can be greater than, equal to, or lower than the number of protons. In lighter nuclei, the number of neutrons is almost equal to the number of protons, while in heavier nuclei, the number of neutrons is always greater than the number of protons.
"Phylogenies," or evolutionary trees, are diagrams that illustrate how certain plants or animals supposedly evolved and branched out from common ancestors. Charles Darwin drew one, usually referred to as his "tree of life," in one of his notebooks. Scientists since then have compiled thousands of phylogenies, but they continue to conflict with one another, presenting a confused and contradictory picture of evolutionary history.
Authors of a recent study published in the Proceedings of the National Academy of Sciences noted that most evolutionary trees do not show extinctions, but instead depict an ever-increasing diversification of species over time. However, the fossil record does show extinctions, and the study authors wrote that this inconsistency "is puzzling, and it casts serious doubt on phylogenetic techniques [using evolutionary trees] for inferring the history of species diversity."1
This admission should signal the fundamentally flawed nature of Darwinian evolution's premise that complex life evolved from simpler forms. Are the countless published phylogenies all to be distrusted? Other evolutionists have thought so, since "an onslaught of negative evidence" consistently plagues the whole tree-building enterprise.2
Since patterns drawn from evolutionary trees contradict patterns drawn from the fossil record, the scientists of this particular PNAS report proposed a new method of building evolutionary trees that they thought might fix this problem. They factored into their phylogeny-building equations rapid evolution, slow evolution, no evolution (called "stasis"), and even reverse evolution (extinctions). This should supposedly help build more historically accurate phylogenies in cases where groups of animals or plants "lack a reliable fossil record."1
The researchers attempted to demonstrate their new technique by applying it to cetaceans, an order of swimming mammals that includes whales and dolphins. They formed phylogenies for five "primary cetacean groups" and then averaged the results to depict the total number of species over evolutionary time.1 But why couldn't they just have analyzed all cetaceans at once? In the end, their analysis appeared to manipulate the data until they very loosely fit the cetacean fossil record.
Both the cetacean fossil "history" and phylogeny used in the PNAS study were built on evolutionary assumptions. That circular reasoning was far removed from the actual data and hardly represents an objective approach. Despite its effort to rescue the use of evolutionary trees in tracing evolutionary histories, this report merely succeeded in emphasizing their consistent failure to match even evolutionary interpretations of the fossil record.
Since the fossil record does not contain any hints of molecules-to-man evolution, it makes sense that evolutionary trees continually conflict with it. Fossils instead show that creatures were created as distinct life forms from the beginning.3
- Morlon, H., T. L. Parsons, and J. B. Plotkin. Reconciling molecular phylogenies with the fossil record. Proceedings of the National Academy of Sciences. Published online before print September 19, 2011.
- Lawton, G. 2009. Why Darwin Was Wrong About the Tree of Life. New Scientist. 2692: 34-39.
- Gish, D. 2006. Evolution: The Fossil Record Still Says, No! El Cajon, CA: Institute for Creation Research.
* Mr. Thomas is Science Writer at the Institute for Creation Research.
Article posted on September 30, 2011.
ROCCO SCANZA: Let's begin with the common definitions of conflict. Scholars who study conflict have developed many different definitions for this phenomenon. It is possible to distinguish between three types of definitions-- broad, narrow, and mid-range.
These definitions stem, among other things, from a more deep-rooted perspective on how common conflict is. Keep in mind that the scope of our definition for conflict will play a crucial role in the way we approach its resolution.
Those who view conflict as a broad and omnipresent phenomenon define it as a dynamic process underlying organizational behavior. In other words, conflict is a function of virtually every interaction and behavior.
Those who view conflict as a more narrow and bracketed phenomenon define it as a breakdown in the decision-making process. According to this definition, conflict is the result of a dysfunctional process and is therefore an exception and not the rule.
Finally, the middle ground approach to conflict views it as a state in which the behavior or goals of one actor or actors are to some degree incompatible with the behavior or goals of some other actor or actors. According to this approach, conflict is neither an ever-present fact of life, nor a mere sign of process dysfunction. Rather, conflict is the product of goal and behavior misalignment between two or more parties.
Each of these definitions may appeal to you differently, depending on your personal experience or intellectual understanding of conflict. However, as we proceed into our discussion of conflict resolution, we will see that it is the last definition that provides us with the clearest prescription of how to manage and resolve conflict.
Building on the mid-range definition, some scholars have attempted to pinpoint the key elements on which a conflict episode is founded. First, conflict is based on opposing interests. Second, the parties to a conflict recognize the existence of their opposing interests.
Third, each party believes that the opposing party intends to block the fulfillment of their interests. Fourth, a conflict episode is, in essence, a process that is influenced by the past interaction between the parties. And finally, conflict entails actions taken by each party in an effort to block the other party's interest.
Now that we have a better handle on what conflict is, let's turn to a discussion of a number of different conflict dimensions. One of the key dimensions of conflict is the level at which it occurs. Conflict can take place at four possible levels.
First, conflict can be intrapersonal. This is the most micro level of conflict and takes place within the individual. For example, when you struggle between two potential plans for the evening, you are experiencing what can be called intrapersonal conflict.
Second, conflict can take place at the interpersonal level. Any conflict between two or more individuals takes place at this level. Arguing with a friend about plans for the evening is an interpersonal conflict.
Third, conflict can take place at the intra-group level. This type of conflict takes place within a given group, be it a sports team, a labor union, a for-profit organization, or a nation.
Finally, conflict can take place at the inter-group level. This is conflict that arises between two or more groups.
It is important to note that when conflict takes place at one of these levels, it does not negate the existence of simultaneous conflict at one or all of the other levels. For example, when conflict occurs between a union and the company, this does not mean that there is no conflict within the union and/or company management.
Identifying the level at which conflict takes place is extremely central in the process of diagnosing and treating conflict. Now we will turn to the different sources of conflict.
There are many different issues that can give rise to conflict. It is impossible to list and discuss these issues, since they are practically endless. However, researchers have attempted to provide a categorization of dominant sources of conflict.
For example, conflict can stem from value differences between two or more parties. We witness conflicts that stem from such differences every day in the political realm. Conflict between Democrats and Republicans over the use of human embryos for the purpose of medical advances is a perfect example.
Conflict can also stem from status differences between the parties. Workplace conflict between managers and their employees is a good example of how status can serve as a source for such conflict.
Conflict is often the result of scarce resources. As we have discussed, conflict can be defined as a misalignment between different parties' goals. Where there is a lack of resources, such misalignment is exacerbated, since the actions of one party have a higher chance of coming at the expense of other parties, thereby accentuating the misalignment.
Conflict can also be the result of external or environmental pressures. For example, in the context of a workplace setting, pressure placed on employees from upper management may lead to horizontal conflict between peers.
In addition to these sources, conflict may also result from role obligations, diversity insensitivity, perceptual differences, and competition. As with the different levels of conflict, identifying the underlying sources of conflict is a key first step in the process of understanding and resolving a given conflict.
A final conflict dimension important to note, especially in the context of organizations, is the distinction between different types of conflict. Organizational scholars have developed a typology of three primary types of conflict in organizations-- relationship conflict, task conflict, and process conflict.
Relationship conflict is conflict that centers around how two or more parties get along. For example, in an organizational setting, when two colleagues experience conflict over how they talk or act toward one another, this is referred to as relationship conflict.
Task conflict is conflict that centers around how to perform or conduct a given project or assignment. In other words, this is not conflict over a personal issue, but rather over the substantive task at hand.
Finally, process conflict is conflict that centers on the rules and procedures that govern the way work is conducted. For example, conflict regarding the way in which decisions are made is a process conflict. As with the previous conflict dimensions, it is important to properly diagnose the type of conflict before attempting to address it.
Furthermore, research has shown that these three different types of conflict have different consequences in terms of organizational outcomes. For example, studies have demonstrated that in certain circumstances, both task and process conflict can be beneficial to a team or organization. Relationship conflict, on the other hand, is consistently associated with negative outcomes.
It is important to note that while the distinctions between the different types of conflict may be easy to do in theory, they are extremely difficult to delineate in practice. Relationship conflict can often spill over into both tasks and process conflict, and vice versa.
We now have a clearer idea of what conflict is and where and why it takes place. But what are the implications of conflict? Is conflict by definition a negative phenomenon? In fact, historically the literature on conflict viewed it as a sign of dysfunction.
However, in the 1950s, sociologists began challenging this dominant view of conflict. According to these scholars, conflict was in fact associated with some negative repercussions, but there are also positive implications to conflict. For example, conflict could highlight a certain dysfunction in an organization that demands attention but that would have otherwise gone unnoticed.
More recently, organizational behavior scholars have also been emphasizing these two faces of conflict. Such scholars have demonstrated how task conflict, for example, increases the evaluation of different options and can therefore lead teams that experience such conflict to outperform teams with no task conflict.
The existence of both positive and negative implications for conflict underscores the complexity associated with managing and resolving conflict. How can conflict be addressed so as to enhance its benefits and minimize its costs? We will return to this question in our discussion of dispute resolution options.
Let's now turn to some of the specific benefits and costs of conflict. We have already mentioned some of the possible benefits of conflict, but there are additional ones.
For example, conflict can stimulate innovation. By introducing a number of conflicting opinions and approaches, conflict can force a team or organization to think outside its traditional box. In other words, settings with little or no conflict may remain stagnant and lack the variation in options and alternatives that can induce creativity, improve the decision-making process, and, finally, lead to improved performance.
In addition, when challenged by conflict, parties are forced to clarify and articulate their positions. This can often be of great benefit, even if the original position prevails. Notice that the majority of the benefits listed stem from conflict that can be categorized as task or process conflict. Here again we see that differentiating between categories of conflict is crucial.
Our discussion of the possible benefits of conflict does not imply that there are no downsides to conflict. In fact, conflict can come at a very high personal and organizational cost.
First and foremost, conflict may entail a high monetary cost. The time and energy spent by individuals and organizations in order to resolve and respond to conflict has a real cost. The energy spent confronting and dealing with conflict can create high levels of stress and burnout and low levels of satisfaction, which have a cost not only to the individuals involved, but also to the group organization they are affiliated with.
Conflict can also obstruct communications and therefore impede the natural and vital interactions between the parties. Furthermore, left untreated, conflict can create an environment with high levels of distrust and suspicion. In an organizational setting, all of these can lead to a low level of organizational commitment. Finally, as discussed earlier, conflict can interfere with relationships.
Now that we have discussed the different definitions, dimensions, and consequences of conflict, we need to address the question of how to analyze and assess a given conflict episode. What are the central criteria that need to be examined?
The following seven criteria are a partial list of the considerations that need to be examined in the assessment of conflict. Notice that these considerations begin with the individual parties involved in conflict and gradually broaden the scope to examine the environment and third-party actors.
First, it is important to analyze the nature of the parties involved in the conflict episode. What are the individual or group characteristics of the parties in the conflict? This can help in identifying underlying values or perceptions that could be fueling the conflict at hand.
Second, it is important to assess what the past relationship between the conflicting parties is. What is the nature of the relationship between the parties? Are there any status differences between the parties? Understanding the nature of the relationship can also help to clarify the underlying sources of conflict.
Third, we must examine the specific issue giving rise to the conflict at hand. For example, is the conflict episode centered on a task related issue, or is it centered on a relationship related issue?
Fourth, the environment in which the conflict episode is occurring is also important to assess. In the workplace setting, we might seek to examine the general organizational culture in which the conflict resides. We should ask questions such as, what are the external pressures driving the parties to the conflict and their surroundings?
Fifth, and related to the environmental consideration, we must examine the nature of third parties who have an interest in the conflict. Who is the audience to whom the conflict at hand relates?
Sixth, we must examine the tactics and strategies employed by each of the parties to the conflict. As we will discuss later, there are a number of strategies that parties can use in order to resolve conflict. Before deciding which ones are appropriate, we must assess how the parties have handled the conflict prior to intervention.
Finally, we must assess the consequences of the specific conflict episode. As we discussed, it is crucial to evaluate the negative and positive consequences of conflict.
Before concluding this section of our discussion, it is necessary to note that there are different perspectives regarding conflict in terms of its inevitability and possibility for treating or managing conflict. Understanding these perspectives is central to the effort to resolve and manage conflict. How one views the nature of conflict dictates the strategies and tactics employed in the process of managing and resolving it.
Some scholars argue that conflict is an inevitable and natural fact of life. According to this perspective, there is little organizations can do to eliminate conflict. This is not to say that conflict cannot be treated or managed.
Other scholars maintain that conflict is not an inherent, integral part of all activity. According to this perspective, certain actions and measures can eliminate and repress conflict.
Second, regardless of how one views the inevitability of conflict, the question remains whether it can be treated. In other words, are there steps that individuals and organizations can take in an effort to treat and resolve conflict once it arises?
Here too there is considerable debate about whether and to what degree conflict can be treated. Understanding one's underlying perspective regarding the capacity to treat conflict shapes the way conflict is dealt with when it occurs.
Finally, assuming that conflict can be treated, there is great debate as to the appropriate manner in which to treat conflict so as to harvest its benefits and minimize the costs. For example, some claim that the court system is where the majority of conflicts should be resolved. Others maintain that there are alternative and more appropriate forms in which conflict should be addressed.
It is at this point that we will turn our discussion to some of these alternative mechanisms for dealing with conflict.
All interactions, all relationships, include the possibility of conflict. In the business world, great strides have been made in recent years in understanding conflict and mitigating its negative effects. This study room presents various definitions and characterizations of conflict, and then discusses how conflicts can be resolved through negotiation, mediation, and arbitration.
This video is part 2 of 4 in the Conflict Resolution series.
There is no doubt that remote sensing technology has created a dramatic shift in the past few years concerning how scientists and researchers gather and analyze information about the Earth. Remote sensing, the use of satellites or aircraft to gather data about objects from a distance, has an almost infinite number of applications. This kind of technology has been used to monitor the environment, map the oceans, explore the Polar Regions, and much more. Now, a form of remote sensing technology called LiDAR is being used to lead a revolution in archaeology, transforming how scientists understand human activity of the past.
Changes in how archaeologists study the past are being brought about by advances in LiDAR technology. LiDAR, which stands for Light Detection and Ranging, is a method of remote sensing that uses light to measure varying distances to the Earth. This light is in the form of a pulsated laser, and these pulses can be used to produce exact data about the characteristics of Earth’s surface. LiDAR instruments are made up mainly of a laser, a special GPS receiver, and a scanner typically attached to an airplane or helicopter for use over a wide area.
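The ranging principle described above comes down to simple time-of-flight arithmetic. The following sketch is not from the article; the 6.67-microsecond return time is just an example value chosen to illustrate the math:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_range_m(round_trip_time_s):
    # The laser pulse travels to the surface and back, so the one-way
    # distance is (speed of light * round-trip time) / 2.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A return after ~6.67 microseconds corresponds to roughly 1 km
print(round(lidar_range_m(6.67e-6)))  # → 1000
```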
One of the places that LiDAR is having a significant impact in is the archaeological study of New England. Today, New England is heavily forested, which makes it extremely difficult for archaeologists to get a better understanding of how the region looked in colonial times. During the 1700s, New England was covered with roads, farm walls, and homesteads, but after they were largely abandoned in the 1950s, the forests grew back. Through the use of LiDAR, however, archaeologists are now able to uncover more of this ‘lost’ New England of subsistence farming, something many people have no idea existed.
One of the principal researchers in this archaeological revolution in New England is Katharine Johnson from the University of Connecticut. Her research using LiDAR revealed a large number of archaeological finds in both Connecticut and Massachusetts, areas that were critical for the earliest European settlers of North America. Johnson discovered sites that weren't in any historical records, and with GPS coordinates from LiDAR, she says that she can walk into the woods and find building foundations or stone walls that no one imagined would be there. This shift in archaeology has benefited from improvements in LiDAR technology, with 1-meter (3.2 foot) resolutions now available.
Beyond New England, the application of LiDAR in archaeology has extended to other areas of the world. LiDAR has been used to help researchers uncover ancient Maya buildings, roads, and other features of that civilization, and even to create a three-dimensional map of a Maya settlement in Belize. LiDAR has also been employed to produce high-resolution models of Renaissance palaces, like the Salone dei Cinquecento in Florence, Italy. In England, LiDAR is being used to discover new sites in the plains of Stonehenge.
Overall, the future of LiDAR in archaeology is bright. Scientists and researchers are just now discovering what this technology can do to better our understanding of past civilizations once thought lost. On the other hand, there are limits to its application. For example, LiDAR cannot tell archaeologists much about Native Americans in the U.S. because they didn't leave behind permanent structures. For those civilizations that did, though, LiDAR is becoming a significant research tool for archaeologists around the globe.
Johnson, Katharine, and William B. Ouimet. "Rediscovering the Lost Archaeological Landscape of Southern New England Using Airborne Light Detection and Ranging (LiDAR)." Journal of Archaeological Science 43 (2014): 9-20. Retrieved 10 Jan. 2014: <http://www.sciencedirect.com/science/article/pii/S0305440313004342>.
Vergano, Dan. "'Lost' New England Revealed by High-Tech Archaeology." National Geographic. National Geographic Society, 03 Jan. 2014. Retrieved 14 Jan. 2014: <http://news.nationalgeographic.com/news/2014/01/140103-new-england-archaeology-lidar-science>.
The image above depicts a clever trick played on battlefields during World War II: Bobbing next to a sturdy metal tank is a rubber inflatable copy meant to fool enemies. An army could look twice as large as it was thanks to elite divisions of the military that specialized in the art of decoys and deception.
Military units within both the Allied and Axis forces practiced and deployed an assortment of peculiar, yet effective tactics, from building inflated dummy tanks to constructing wooden artillery and straw airplanes. A fleet of dummy tanks could lead an enemy to overestimate a force’s actual strength or draw an attack away from a vulnerable area, explained Gordon Rottman in World War II Tactical Camouflage Techniques.
“Decoys are extremely important in deception planning,” stated a U.S. Army field manual published in 1978. Something as simple as “a log sticking out of a pile of brush can draw a lot of attention and artillery fire.”
New photos uncovered by the National Archives reveal the elaborate artistry behind building a “fake army.” The featured photos taken between 1942 and 1945 depict the variety of creative deception tactics developed by the Japanese, German, and British military.
During both World Wars, artists, filmmakers, scientists, and sculptors were handpicked by the military and called upon to use their visual and creative skills to design camouflage and decoys. Beginning in World War I, artists used “dazzle camouflage” and painted battleships with odd, multicolored patterns to distract far-off enemies, while female art students designed camouflage “rock” suits that they tested in Van Cortlandt Park in New York.
The United States recruited over a thousand men from art schools and ad agencies for the 23rd Headquarters Special Troops, or “Ghost Army,” which staged more than 20 battlefield deceptions between 1944 and 1945. In England, a group of surrealist artists started the Industrial Camouflage Research Unit just after the war began in September 1939, wrote Peter Forbes in Dazzled and Deceived: Mimicry and Camouflage.
The dummies took on many forms, including stationary structures that supplied the outline of machinery and a simulation that was mounted on a truck. Inventions could be simple and crude, such as stacking up old tires and propping up a log to simulate an artillery piece, explained Kenneth Blanks in his thesis on tactical decoys.
On the other hand, some deceptions were large-scale, such as fake roads and bridges made out of canvas and burlap. At a distance, the elaborate dummy tanks could easily be confused for the real thing. They were made of an assortment of canvas and plywood, inflated rubber, and drain pipes to form the gun. A Japanese fake tank constructed out of rubble and volcanic ash was commended for its attention to detail and artistry. Inflated tanks were not only used to trick the enemy, but also served to practice formations.
Many of the dummies were also easy to transport and assemble. An inflatable tank could be unfurled from a duffle bag, pumped with air from a generator, and completed in just 20 minutes.
Watch soldiers set up inflatable decoys in the video below:
Entire decoy airfields were made by Britain’s Royal Air Ministry. Instead of hiding the easily spotted structures, they designed dummy airfields filled with dummy planes that were imitations of satellite stations. The unit also lit oil fires, called “starfishes,” in harmless locations after the first wave of a bombing raid, making subsequent waves believe those areas were targets, explained Forbes. While preserving the real fleet, the tactic wasted the enemy’s bombs and ammunition.
And these efforts proved to be extremely effective. For example, in the summer of 1940, Colonel J.F. Turner of the Royal Air Ministry organized 100 dummy airfields and built about 400 dummy aircraft to confuse German aerial bombers. In one raid on August 4, 1940, three waves of bombs struck the decoy structures, leaving the real factory almost unscathed. Turner’s sophisticated dummy aircraft “saved hundreds of lives and vital war production facilities,” wrote Blanks. Similarly, the U.S. military’s Ghost Army saved tens of thousands of soldiers’ lives, estimated The Atlantic.
This ingenious craft has since faded with time. Sophisticated surveillance technologies, such as satellites and drones, have since made ballooned tanks, straw airplanes, and other visual ruses less effective. But the decoy armies of World War II remain a captivating example of the intricate art of military deception and trickery in action.
Explore more dummy installations from World War II below.
*Correction: An earlier version of this story misspelled a location in Italy. It is Forte dei Marmi, not Forte dei Maimi.
What determines the function of the word in a sentence in Greek?
- word order or case ending?
what issues affect which case ending is used in specific instances?
Carries the meaning of the word. When you take the case ending off a noun you are left with the stem.
either masculine, feminine, or neuter. a noun has only one gender and it never varies.
the difference between the singular and plural is indicated by the case endings sigma and iota
in Greek, there are basically three inflectional patterns used to create the different case endings. Each pattern is called a "declension".
what do the different declensions affect?
only the form of the case ending - no bearing on the meaning of the word.
what declension are nouns that have a stem ending in alpha or eta?
what declension are nouns with a stem ending in omicron?
what declension are nouns with stems that end in a consonant?
how many declensions can a noun belong to?
only 1, since the final letter of the noun stem determines its declension.
What does it mean if a word is indeclinable?
some words in Greek are indeclinable, such as personal names and words borrowed from other languages. their form does not change regardless of their function in the sentence.
what is the primary function of the Nominative Case?
indicates the subject of the sentence.
What is the primary function of the Accusative Case?
indicates the direct object of the verb.
the masculine and feminine case endings are often ____.
Neuter nominative and accusative singular are always ________.
this is also true of the plural.
context will usually show you whether the word is the subject or direct object.
all first declension nouns that have eta in the singular
Nominative and Accusative; Definite Article
Here are two different topics; for each topic you must create three discussion posts. Answer the questions given in the instructions, labeled A, B, C, and D.
1) The Well-Tempered Critic (graded) In grammar school, you were asked to summarize what a story, poem, play, or essay was about. The emphasis was merely on understanding plot. In high school, you began to deepen your understanding of literature by looking beyond the plot to theme, character development, symbolism, and applications and connections to life.
In college, we take this to the next level and begin to understand a variety of critical approaches that deepen our understanding of any text on many levels: Freudian, historical, feminist, deconstructive, and hermeneutical approaches. We will begin by defining some of these approaches and then by applying them to a specific literary text.
a) Define Freudian, feminist, and historical textual analyses. Apply one of these approaches listed below to one of this week’s texts. How does having a critical approach help deepen your understanding of the possible meaning a text may have?
b) Can feminist readings occur in the relative absence of overtly stated feminist causes? How is this possible?
c) What is the value of deconstructing language to get at meaning? How is meaning advanced by this kind of reading? Please give an example of what you mean.
d) Consider a character with which you can deeply identify. How does this character help you to act in your modern circumstances?
2) The Value of Literary Studies (graded) Consider the value of literature and discuss whether literature does or does not help you as a human being who must act in the world. Respond to one or more of the following questions: A. Does an understanding of literature make us better as people? Is that the point of literature? B. Should literature be taught differently in our grammar schools, high schools, and colleges? C. Should we be reading less and creating more? Does understanding what has c
Nature of Vernalization Stimulus – Vernalin or Gibberellin
Many attempts were made to isolate vernalin, but all failed. However, Anton Lang in 1957 showed that application of gibberellins to certain vernalization-requiring biennials like henbane induces flowering in them without cold-temperature treatment. Purvis in 1961 induced winter annuals to flower by treating their seeds with gibberellins. It was also found that natural gibberellins are formed in greater amounts in vernalization-requiring species when these are exposed to low temperatures, e.g., in Chrysanthemum and Rudbeckia species. These results indicate that the properties of gibberellins may be similar to those expected for vernalin.
But when gibberellins are applied to vernalization-requiring rosette plants, elongation of the stem takes place first, followed by initiation of floral buds on that shoot. However, when plants are provided with cold temperature in order to induce flowering, the floral buds appear as soon as the shoot begins to elongate (bolt). These results indicate that the flowering response to applied gibberellins is not equal to the flowering response induced naturally by cold-temperature treatment.
Mikhail Chailakhyan (1968) in Russia suggested that there are two substances involved in flower formation: one a gibberellin or gibberellin-like substance, and the other anthesin, a hypothetical substance like vernalin. According to him, plants that require low temperatures or long days or both might lack sufficient gibberellins until they have been exposed to the inducing environment, i.e., to low temperatures, long day lengths, or both; whereas short-day plants might contain sufficient gibberellins but lack anthesin.
Chailakhyan suggested that vernalization requiring plants produce vernalin when subjected to low temperatures and the vernalin is then converted to gibberellins in response to long days, at least in those plants that require exposure to long days after low temperature treatment.
Melchers' grafting experiment, joining a photo-inductive Maryland Mammoth tobacco plant to a non-inductive henbane plant, suggests the presence of two substances as proposed by Chailakhyan. Apparently, each plant contained one of the substances necessary for inducing flowering, and each obtained the other substance from its partner when the two were grafted. Henbane was successful in obtaining the required substance, whereas the tobacco was not.
However, in many vernalization-requiring plants gibberellins failed to replace the cold requirement for flowering. Later on, experimental evidence was provided suggesting that gibberellins are less important in flowering.
Factors or Conditions Necessary for Vernalization
Certain conditions or factors are necessary for vernalization. Since vernalization depends on a sequence of steps leading to the production of an active substance, the presence of these factors is necessary for the vernalization process. These factors also influence the efficiency of the vernalization process. Some of the important conditions are as follows:
Temperature and its Duration
Lang’s experiment with henbane showed the relationship between temperature and time of exposure, and the influence of this relationship on the efficiency of vernalization. He exposed vernalization-requiring henbane to different temperatures from 3°C to 17°C for varying periods of time. The efficiency of low temperature in inducing flowering was determined by the number of days the plant took to flower after the treatment. Lang found that the whole temperature range (3 to 17°C) is effective if the period of vernalization is 105 days; flower initiation then occurred in 8 days. But when the vernalization period was shortened to 15 days, different temperatures differed in effectiveness: a temperature of 10°C, initiating flowers in 23 days, was the most effective. When the cold-treatment period was extended to 42 days, the most effective temperature range was from 3 to 6°C, requiring 10 days for flower initiation.
Hansel studied the effect of a wide range of temperatures on Petkus rye. He found that vernalization fails below −4°C; from 1 to 7°C there is a shortening of the number of days to flowering; and there is a rapid fall in the rate of vernalization when temperatures are increased from 7 to 15°C. Exposure to high temperatures results in devernalization.
The age of a plant plays a pivotal role in its response to vernalization. The age at which a plant is sensitive to vernalization differs from species to species. For example, in cereals germinating seeds, and even embryos, can receive the stimulus of vernalization, whereas certain biennials require a certain period of growth before responding to the stimulus. For instance, Hyoscyamus niger must have completed at least 10 days of growth in the rosette stage to be vernalized; the most effective time is when the plant is 30 days old. Other plants such as Oenothera can be vernalized only when at least six to eight leaves are present. The time when a plant is sensitive to vernalization is sometimes referred to as ripeness-to-flower, a term originally used for photoperiodism.
The need for a certain amount of vegetative growth to take place in plant so it becomes sensitive to low temperature suggests that some factor accumulates that perhaps receives vernalization stimulus. The substance may be synthesized during photosynthesis and in cases where seeds are vernalized during seedling stage this substance may already be present, either donated by the mother plant or synthesized during the development of the embryo.
An indication that this substance is produced during photosynthesis is provided by experiments with Arabidopsis thaliana. The seeds of this plant are most sensitive to vernalization, and sensitivity decreases with the development of the seedling, reaching a minimum when the seedling is two weeks old. But as new leaves develop, the sensitivity to low temperature increases once again. The loss of sensitivity may be due to depletion of the stored food of the seed, and the increase in sensitivity may be due to synthesis of carbohydrates as a result of photosynthesis. Evidence for the involvement of carbohydrates in vernalization has also been provided by Petkus winter rye embryos. When isolated embryos were grown in a medium containing sucrose and minerals, they proved to be sensitive to vernalization, but as the sucrose supplies were decreased the vernalization was retarded.
Vernalization of dry seeds is impossible unless the seeds have imbibed some moisture.
Purvis pointed out that enough moisture must be present to initiate a small but visible degree of germination. She found that the winter strain of Petkus rye must imbibe water equal to 60% to 80% of its absolute dry weight for effective vernalization.
Experiments with grains showed that the absence of oxygen makes plants unresponsive to low-temperature treatment, even if they are provided with an adequate supply of water. It was also found that oxygen is necessary for the vernalization of whole plants, such as henbane.
Both sugars and oxygen are required for the effect of low temperature. This requirement indicates the activation of some metabolic reaction essential for flowering. In winter wheat (Triticum aestivum), the formation of new proteins was found after vernalization; the protein pattern after vernalization resembles that of spring wheat, which does not require vernalization to flower. This suggests that protein synthesis is another condition necessary for vernalization.
We are the fabulous fourth grade! Here is just a snapshot of what they’ll learn this year. In math, students will learn about fractions and how to use the four operations to solve two-step word problems. In reading and writing, students will be required to summarize a text, find evidence-based answers to questions, determine the main idea and the theme, and describe in depth a character, setting, or event in a story. They will also be researching a topic using non-fiction texts. Fourth Grade Social Studies and Science offer many imaginative adventures and experiments that students will discover throughout the year!
How to convert megawatt hours to megawatts
A megawatt hour (MWh) is a unit of energy. It is a measure of the actual amount of power consumed or produced by one megawatt expended for a period of one hour. A megawatt (MW) is a unit of power. It describes the rate at which power is being consumed or produced by a circuit at any given moment in time.
A megawatt is equivalent to one million watts.
The formula used to calculate megawatt-hours is Megawatt hours (MWh) = Megawatts (MW) x Hours (h).
To convert megawatt hours to megawatts, you are going to need to divide the number of megawatt hours by the number of hours. In other words: Megawatts (MW) = Megawatt hours (MWh) / Hours (h).
- A megawatt hour (MWh) is a unit of energy.
Determine the number of megawatt hours.
Determine the number of hours. (To convert megawatt hours to megawatts, the number of hours must also be known.)
Divide the number of megawatt hours by the number of hours. The resulting number is the number of megawatts.
- "Wiring Simplified;" 40th Edition; H. P. Richter, W. C. Schwan, F. P. Hartwell; 2002
- For example: You have a high-powered lamp that has used 3 megawatt hours over the course of 6 hours. To figure out how many megawatts your lamp is, you would divide the number of megawatt hours (3) by the number of hours (6) to get 0.5 megawatts.
- Another example: You receive a power bill indicating that your household has used 1,000 megawatt hours during the previous month. The previous month has 31 days, which means that there were 744 hours in the month (31 days x 24 hours/day). To calculate the average number of megawatts present in all circuits in your house at any one given moment of time, you would divide the number of megawatt hours (1,000) by the number of hours (744) to get approximately 1.34 megawatts. This number is an average because the amount of power being consumed by your household presumably fluctuates.
- Note that the number of megawatts used in the examples were selected solely for demonstration purposes, and not because they reflect the typical (or even possible) wattage of lamps.
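The steps and examples above reduce to a single division; here is a minimal Python sketch (the function name is our own, not from the article):

```python
def mwh_to_mw(megawatt_hours: float, hours: float) -> float:
    """Convert energy (MWh) to average power (MW): MW = MWh / h."""
    if hours <= 0:
        raise ValueError("hours must be positive")
    return megawatt_hours / hours

# Example from the text: a lamp that used 3 MWh over 6 hours
print(mwh_to_mw(3, 6))                  # 0.5 (MW)

# Example from the text: 1,000 MWh over a 31-day month (744 hours)
print(round(mwh_to_mw(1000, 744), 2))   # 1.34 (MW, average)
```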
The basic principle behind every high-thrust interplanetary space probe is to accelerate briefly and then coast, following an elliptical, parabolic, or mildly hyperbolic solar trajectory to your destination, using gravity assists whenever possible. But this is very slow.
Imagine, for a moment, that we have a spacecraft that is capable of a constant 1g (“one gee” = 9.8 m/s²) acceleration. Your spacecraft accelerates for the first half of the journey, and then decelerates for the second half of the journey to allow an extended visit at your destination. A constant 1g acceleration would afford human occupants the comfort of an earthlike gravitational environment where you would not be weightless except during very brief periods during the mission. Granted, such a rocket ship would require a tremendous source of power, far beyond what today’s chemical rockets can deliver, but the day will come—perhaps even in our lifetimes—when probes and people will routinely travel the solar system in just a few days. Journeys to the stars, however, will be much more difficult.
The key to tomorrow’s space propulsion systems will be fusion and, later, matter-antimatter annihilation. The fusion of hydrogen into helium provides energy E = 0.008 mc². This may not seem like much energy, but when today’s technological hurdles are overcome, fusion reactors will produce far more energy in a manner far safer than today’s fission reactors. Matter-antimatter annihilation, on the other hand, completely converts mass into energy in the amount given by Einstein’s famous equation E = mc². You cannot get any more energy than this out of any conceivable on-board power or propulsion system. Of course, no system is perfect, so there will be some losses that will reduce the efficiency of even the best fusion or matter-antimatter propulsion system by a few percent.
How long would it take to travel from Earth to the Moon or any of the planets in our solar system under constant 1g acceleration for the first half of the journey and constant 1g deceleration during the second half of the journey? Using the equations below, you can calculate this easily.
Keep in mind that under a constant 1g acceleration, your velocity quickly becomes so great that you can assume a straight-line trajectory from point a to point b anywhere in our solar system.
Maximum velocity is reached at the halfway point (when you stop accelerating and begin decelerating). Ignoring relativistic effects, a one-way trip of distance d under constant acceleration g takes a total time t = 2√(d/g), and the maximum velocity is given by v_max = g·t/2 = √(g·d).
The energy per unit mass needed for the trip (one way) is then given by E/m = v_max², since the kinetic energy ½v_max² must be supplied once to accelerate and expended again to decelerate.
How much fuel will you need for the journey?
hydrogen fusion into helium gives: E_fusion = 0.008 m_fuel c²
matter-antimatter annihilation gives: E_anti = m_fuel c²
This assumes 100% of the fuel goes into propelling the spacecraft, but of course there will be energy losses and operational energy requirements which will require a greater amount of fuel than this. Moreover, we are here calculating the amount of fuel you’ll need for each kg of payload. We would need to use calculus to determine how much additional energy will be needed to accelerate the ever changing amount of fuel as well. The journey may well be analogous to the traveler not being able to carry enough water to survive crossing the desert on foot.
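The non-relativistic formulas above are easy to evaluate. The sketch below computes trip time, peak velocity, and fuel per kilogram of payload; the 7.8e10 m Earth–Mars figure is an assumed round number for a trip near opposition, and, as the text notes, the calculation ignores relativity and the mass of the fuel itself:

```python
import math

G = 9.8          # m/s^2, constant acceleration ("one gee")
C = 2.998e8      # m/s, speed of light

def trip(distance_m):
    """Accelerate-half / decelerate-half trip at constant 1g.

    Returns (travel time in days, peak velocity as a fraction of c,
    fusion fuel in kg per kg payload, antimatter fuel in kg per kg payload).
    """
    t = 2 * math.sqrt(distance_m / G)        # total travel time, s
    v_max = G * t / 2                        # peak speed at the midpoint, m/s
    e_per_kg = v_max ** 2                    # accelerate + decelerate: 2 * (v^2 / 2)
    fusion_fuel = e_per_kg / (0.008 * C**2)  # E_fusion = 0.008 m_fuel c^2
    anti_fuel = e_per_kg / C**2              # E_anti = m_fuel c^2
    return t / 86400, v_max / C, fusion_fuel, anti_fuel

# Earth to Mars near opposition, roughly 7.8e10 m (assumed figure)
days, frac_c, fus, anti = trip(7.8e10)
print(f"{days:.1f} days, peak {frac_c:.4f} c")
```

Running this gives a trip of about two days at a peak speed well under 1% of c, which is why the non-relativistic approximation holds for interplanetary distances.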
Now, let’s use the equations above for a journey to the nearest stars. There are currently 58 known stars within 15 light years. The nearest is the triple star system Alpha Centauri A & B and Proxima Centauri (4.3 ly), and the farthest is LHS 292 (14.9 ly).
I predict that interstellar travel will remain impractical until we figure out a way to harness the vacuum energy of spacetime itself. If we could extract energy from the medium through which we travel, we wouldn’t need to carry fuel onboard the spacecraft.
We already do something analogous to this when we perform a gravity assist maneuver. As the illustration below shows, the spacecraft “borrows” energy by infinitesimally slowing down the much more massive Jupiter in its orbit around the Sun and transferring that energy to the tiny spacecraft so that it speeds up and changes direction. When the spacecraft leaves the gravitational sphere of influence of Jupiter, it is traveling just as fast as it did when it entered it, but now the spacecraft is farther from the Sun and moving faster than it would have otherwise.
Of course, our spacecraft will be “in the middle of nowhere” traveling through interstellar space, but what if space itself has energy we can borrow?
Velella is the common name of a specific genus of sea creature. There is only one known species within the genus, a free-floating hydrozoan. They live on the surface of the ocean.
This animal is endemic to virtually all the seas and oceans of the world. Just as with the jellyfish, each Velella is, in fact, a hydroid colony. Adult individuals average roughly 2.8 in (7 cm) across.
They are carnivorous in nature and their primary diet consists of plankton.
Their movements are controlled by the ocean winds. These catch the sail-shaped fin on the surface and move them across the surface.
Velella Life Cycle
The Velella has a bipartite life cycle. During their polyp stage, they possess the sail-shaped fin for which they are best known. During this stage, the individual polyps comprising their bodies feed on plankton. The polyps are connected by tiny canals which permit the colony of creatures to share all food consumed.
All polyps of each Velella are also either all male or all female. These polyps serve highly specific functions within the whole – for example, protection, feeding, and reproduction.
The animal reproduces through asexual means. During the reproductive stage, each Velella will produce several thousand tiny medusae.
Velella Habitat and Distribution
This animal also lives in most oceans and seas but the majority prefer warmer climates.
They also dwell in the area of the interface between the ocean and the air. The polyps will dangle about 0.4 in (1 cm) below the surface. They are at the mercy of the winds for locomotion. On occasion, this will result in strandings upon beaches. Mass strandings are common along the northern coast of North America but below we add a photo we took of a stranding in the area of St. Tropez, France.
Since the Velella breed in such vast numbers, often these strandings will literally turn the coastline blue for more than 100 mi (160 km). |
Plato’s strategy is to first explain the concept of political justice and then that of individual justice. According to Socrates, The State is divided into three specific classes: Artisans, Soldiers, and Rulers. Artisans include members of the community whose function is to satisfy the material needs of the masses. They include businessmen, workers, and farmers. Soldiers consist of those whose function is to defend the community and therefore must possess the virtue of courage. Lastly, rulers are individuals whose function is to govern (Narveson, L4). They are said to possess the virtue of wisdom and they must not seek the fame of being a ruler. Artisans and Soldiers are required to be obedient to the Rulers. Socrates emphasizes that justice consists of having each class perform its own business and not interfere in the functions of its neighbours. Essentially, justice is a principle of specialization: a principle that requires each person to fulfill the societal role to which nature fitted him. In other words, a harmony amongst the classes.
Closer to the end of Book IV, Plato attempts to show that individual justice mirrors political justice. Socrates argues that humans often find themselves internally divided, being both impelled in a certain direction and simultaneously resisting this impulse. Socrates forms his theory of the soul on the principle that the same thing cannot go in two opposite directions at once and therefore, there must be different parts of the soul. Fundamentally, Socrates argues that the soul is also divided into three separate parts or classes. The first, is the “lowest” and includes the appetite, whose function is to satisfy bodily needs. It is the part of the soul that can be hungry for immoral gratification and has no rational consciousness in its desires. The second, or the “middle” encompasses the spirited element of the soul whose function is to control and tame the appetites. This part enables the soul to differentiate between good and bad. The “highest” is that of reason, whose function is to rule over the soul as a whole (Narveson, L4). The spirit naturally, if it has not been corrupted by bad upbringing, allies with the rational part.
Socrates sketches a psychological portrait of the tyrant in order to prove that injustice tortures a man's psyche, whereas a just soul is happy, untroubled, and calm (Plato, p.12). On this view, according to Plato, psychological health is the mark of justice. Socrates' inferences suggest that internal peace is required if one is to get anything done. The psychologically unstable individual suffers from internal turmoil and conflict, thus reducing his efficiency. Personally, I disagree with this explanation of justice. I do not believe that an individual whose soul is “orderly” and “harmonious,” in the sense that different parts of it are not going off in different directions, is always someone who is just in the sense of paying his debts, refraining from murder, and so on. It could be said that the unjust man's soul will not be in harmony because his conscience will bother him. However, this is to say that he will recognize
Navigation: Sortie 102-302
Navigation is defined as a field of study that focuses on the process of monitoring and controlling the movement of a craft or vehicle from one place to another. The field of navigation includes four general categories: land navigation, marine navigation, aeronautic navigation, and space navigation.
In this modern time of the advanced Global Positioning System (GPS), almost everyone has a handy and easy-to-use instrument that can tell where you are – and, more importantly, where you want to go. Personal navigation has never been easier. But what came before GPS? And what would we do if GPS were lost – or worse – no longer available? Long before GPS, early explorers relied on the stars, moon, planets, and sun for navigation (celestial navigation). Using the stars and sun to measure longitude and latitude, they were able to determine their approximate position and chart a course to their destination. As technology improved, radio beacons replaced stars as a primary navigation method. And then satellites eventually replaced radio beacons. But there has always remained a need for back-up navigation methods.
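Two of the simplest celestial fixes mentioned above can be sketched in a few lines of Python (the function names are our own, and the Polaris rule is an approximation — the pole star actually sits a bit under one degree from the true celestial pole):

```python
def latitude_from_polaris(polaris_altitude_deg: float) -> float:
    """Northern-hemisphere rule of thumb: the altitude of Polaris
    above the horizon approximately equals your latitude in degrees."""
    if not 0 <= polaris_altitude_deg <= 90:
        raise ValueError("altitude must be between 0 and 90 degrees")
    return polaris_altitude_deg

def longitude_from_time(local_noon_utc_hours: float) -> float:
    """Longitude from a chronometer: every hour that local solar noon
    lags noon at Greenwich corresponds to 15 degrees of longitude
    (positive result = degrees west)."""
    return 15.0 * (local_noon_utc_hours - 12.0)

# Polaris measured 40 degrees above the horizon -> roughly 40 deg N
print(latitude_from_polaris(40.0))   # 40.0
# Local solar noon observed at 17:00 UTC -> 75 degrees west
print(longitude_from_time(17.0))     # 75.0
```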
Algorithms sometimes get a bad rap. Of course we don’t want our children to end up as “algorithmic thinkers” – only able to apply an algorithm without understanding. However, students should become used to algorithms… it is how much of the world works. Download a pdf here.
This algorithm is simple enough for kindergarten students. They should definitely play with it!
The patterns generated are complex enough for junior high students to tackle. They should play with it!
With K-8 students I always use the backdrop of King Kong rearranging a city skyline – That is a better theme than “Bulgarian Solitaire.”
Konstantin Oskolkov of the Steklov Mathematical Institute in Moscow was told about this puzzle by a stranger c.1980. It has become known as Bulgarian Solitaire.
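The rule of Bulgarian Solitaire, which the text alludes to but does not state, is simple: take one token from every pile, and stack all the removed tokens into a new pile; repeat. A short sketch (function names are our own):

```python
def bulgarian_step(piles):
    """One move: remove a token from every pile (piles that empty out
    disappear) and add a new pile made of the removed tokens."""
    new = [p - 1 for p in piles if p > 1]
    new.append(len(piles))          # the new pile: one token per old pile
    return sorted(new, reverse=True)

def play(piles, steps):
    for _ in range(steps):
        piles = bulgarian_step(piles)
    return piles

# With 15 tokens, every starting position eventually reaches the
# fixed configuration 5, 4, 3, 2, 1 -- a nice pattern for students to find.
print(play([15], 20))   # [5, 4, 3, 2, 1]
```

The triangular-number fixed points (n, n−1, …, 2, 1) are exactly the patterns junior high students can hunt for by hand.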
Standards for Mathematical Practice
MathPickle puzzle and game designs engage a wide spectrum of student abilities while targeting the following Standards for Mathematical Practice:
MP1 Toughen up!
Students develop grit and resiliency in the face of nasty, thorny problems. It is the most sought after skill for our students.
MP2 Think abstractly!
Students take problems and reformat them mathematically. This is helpful because mathematics lets them use powerful operations like addition.
MP3 Work together!
Students discuss their strategies to collaboratively solve a problem and identify missteps in a failed solution. MathPickle recommends pairing up students for all its puzzles.
MP4 Model reality!
Students create a model that mimics the real world. Discoveries made by manipulating the model often hint at something in the real world.
MP5 Know the tools.
Students master the tools at their fingertips - whether it's a pencil or an online app.
MP6 Be precise!
Students learn to communicate using precise terminology. MathPickle encourages students not only to use the precise terms of others, but to invent and rigorously define their own terms.
MP7 Be observant!
Students learn to identify patterns. This is one of the things that the human brain does very well. We sometimes even identify patterns that don't really exist 😉
MP8 Be lazy!?!
Students learn to seek shortcuts. Why would you want to add the numbers one through a hundred if you can find an easier way to do it?
Code generation is a mechanism where a compiler takes the source code as an input and converts it into machine code. This machine code is actually executed by the system. Code generation is generally considered the last phase of compilation, although there are multiple intermediate steps performed before the final executable is produced. These intermediate steps are used to perform optimization and other relevant processes.
The code generation process is performed by a component known as a code generator, part of the compiler program. The original source code of any program passes through multiple phases before the final executable is generated. This final executable code is actually the machine code, which computer systems can execute readily.
In the intermediate phases of compilation, code optimization rules are applied one at a time. Sometimes these optimization processes are dependent on each other, so they are applied one after another based on the dependency hierarchy. After passing multiple phases, a parse tree or an abstract syntax tree is generated and that is the input to the code generator. At this point, the code generator converts it into linear sequential instructions. After this stage, there may be some more steps depending upon the compiler. The final optimized code is the machine code for execution and output generation. |
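To make the final phase concrete, here is a toy code generator: it walks a tiny abstract syntax tree for arithmetic and emits linear, sequential instructions for a hypothetical stack machine. The AST shape and instruction names are our own invention for illustration, not from any particular compiler:

```python
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class BinOp:
    op: str       # '+', '-', or '*'
    left: object
    right: object

def codegen(node, out):
    """Post-order walk: emit code for children first, then the operator,
    producing the linear instruction sequence a code generator outputs."""
    if isinstance(node, Num):
        out.append(f"PUSH {node.value}")
    else:
        codegen(node.left, out)
        codegen(node.right, out)
        out.append({"+": "ADD", "-": "SUB", "*": "MUL"}[node.op])
    return out

# The expression (2 + 3) * 4 as an abstract syntax tree
ast = BinOp("*", BinOp("+", Num(2), Num(3)), Num(4))
print(codegen(ast, []))   # ['PUSH 2', 'PUSH 3', 'ADD', 'PUSH 4', 'MUL']
```

A real compiler would run optimization passes over an intermediate representation before this step and would emit actual machine instructions rather than symbolic ones, but the post-order tree walk is the essential idea.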
Telling the time:
Telling the time - Year 2 children are aiming to tell the time when it is o'clock, half past, quarter to and quarter past on an analogue (not digital) clock. If your child can do this, the next level is telling the time at 5 minute intervals (5 past, 5 to etc). Use the second video clip in resources to help. If it is too tricky for your child, stick to o'clock, then half past to begin with.
Try to encourage neat handwriting and capital letters, finger spaces and full stops! They might enjoy putting the date in their work book (if they have theirs from school) before they start.
Timeline of Ohio History
==Prehistoric Period: 13000 BC-1650 AD==
*13000 BC-7000 BC: Paleoindian Period - Ohio's first human inhabitants
*8000 BC-500 BC: Ohio's Archaic Period - Most native people lived as hunters and gatherers.
*800 BC-1200 AD: Ohio's Woodland Period - The region's native populace increasingly relied on agriculture to sustain themselves.
*800 BC-100 AD: The Adena Culture thrived in Ohio.
*100 BC-400 AD: The Hopewell Culture built large mound complexes in Ohio, including the Newark Earthworks.
*1000 AD-1650 AD: Ohio's Fort Ancient Culture flourished.
*1000 AD-1650 AD: Ohio's Late Prehistoric Period.
==Exploration to Statehood: 1650-1803==
*1492: Christopher Columbus rediscovered the Americas for Europeans, beginning the Colonial Period in the New World.
*1607: The Virginia Company established Jamestown, the first British community to survive in North America.
*1670: Rene Robert Cavelier, Sieur de La Salle, a French explorer and the first European in the Ohio Country, discovered the Ohio River.
*1748: The Ohio Company formed in Virginia to settle the Ohio River Valley.
*1754-1763: French and Indian War
**France ceded all rights in the Treaty of Paris of 1763.
*1768: Treaty of Fort Stanwix - Iroquois ceded all lands south and east of the Ohio River to the British.
*1774: Lord Dunmore's War
*1775-1783: The American Revolution
**1775: Hostilities began with the battles of Lexington and Concord in Massachusetts.
**1776: Declaration of Independence. American colonies declared their independence from England.
**1781: Final major battle fought at Yorktown, Virginia.
*1782: Gnadenhutten Massacre. Pennsylvania militia killed Christian Indians near the Tuscarawas River.
*1782: Colonel William Crawford burned at the stake in retaliation for the Gnadenhutten Massacre.
*1783: Treaty of Paris officially ended the American Revolution. England recognized American independence and ceded all lands in the Ohio Country.
*1785: Land Ordinance of 1785 established methods for surveying and dividing land in the Ohio Country.
*1786: Ohio Company of Associates formed in Massachusetts to sell land in what is now southeast Ohio.
*1787: Confederation Congress appointed Arthur St. Clair as first governor of the Northwest Territory.
*1787: United States Constitution drafted.
*1787: Congress enacted the Northwest Ordinance, establishing the Northwest Territory, which included modern-day Ohio.
*1788: Marietta, the Northwest Territory's and Ohio's first permanent white settlement, founded.
*1790-1794: Ohio Indian Wars
**1790 - Harmar's Defeat
**1791 - St. Clair's Defeat
**1794 - Battle of Fallen Timbers
*1795: Treaty of Greeneville signed, essentially ending the Ohio Indian Wars.
*1800: Chillicothe became the capital of the Northwest Territory.
*1802: Enabling Act set the stage for Congress to admit Ohio to the Union.
*1802: Constitutional Convention met at Chillicothe to draft Ohio's first constitution.
*1802: Thomas Worthington presented Ohio constitution to Congress for approval.

==Early Statehood: 1803-1846==
*1803: President Jefferson signed legislation making Ohio the seventeenth state of the Union.
*1804: Ohio General Assembly chartered Ohio University.
*1812: City of Columbus founded and named as the new state capital.
*1812-1814: The War of 1812
**1813: British failure at both Sieges of Fort Meigs prevented them from driving their forces deeper into Ohio.
*1813: Under the command of Oliver Hazard Perry, the American fleet defeated the British fleet at the Battle of Lake Erie, cutting off British supply lines and forcing them to abandon Fort Detroit.
*1815: Ohio University held its first commencement.
*1816: State government relocated to Columbus.
*1825: Work on the National Road and the canal system in Ohio began.
*1833: Oberlin College, the first coeducational college in the United States, founded.
*1833: The Ohio and Erie Canal completed.
*1835: Boundary dispute between Michigan and Ohio led to the Toledo War.
*1840: Ohioan William Henry Harrison elected the 9th President of the United States.
*1842: Treaty with the Wyandots. The Wyandots, Ohio's last Indian tribe, agreed to relinquish all claims to land within the state.
*1845: The Miami and Erie Canal completed.
*1846-1848: Mexican War
*1850: The first Ohio State Fair held in Cincinnati.
*1850: Ohio's second Constitutional Convention held in Chillicothe.
*1851: The Ohio Constitution of 1851 adopted.
*1852: Publication of Uncle Tom's Cabin, written in Ohio by Harriet Beecher Stowe, increased tensions between the North and the South.

==American Civil War: 1860-1865==
*1860: Abraham Lincoln elected president.
*1861: The Ohio Statehouse completed.
*1862: President Lincoln issued the Emancipation Proclamation.
*1863: Following a six-week siege, Confederate forces surrendered Vicksburg to Federal troops led by Ohioan Ulysses S. Grant.
*1863: Morgan's Raid. Confederate Brigadier General John Hunt Morgan led troops on a daring raid across southern Ohio.
*1863: Battle of Buffington Island, the only Civil War battle fought on Ohio soil.
*1864: President Lincoln promoted Ulysses S. Grant to supreme commander of all Union forces.
*1864: Union forces under the leadership of Ohioan William T. Sherman captured Atlanta.
*1864: Sherman led his troops on his "March to the Sea" from Atlanta to Savannah, Georgia.
*1864-1865: Ulysses S. Grant led the Army of the Potomac in pursuit of Robert E. Lee's forces during the Wilderness Campaign.
*1865: Robert E. Lee surrendered the Army of Northern Virginia to Ulysses S. Grant at Appomattox Court House, Virginia.
*1865: John Wilkes Booth assassinated President Lincoln at Ford's Theatre in Washington, D.C.

==Industrialization and Urbanization: 1866-1900==
*1868: Ohioan Ulysses S. Grant elected to his first of two terms as the 18th President of the United States.
*1869: The Cincinnati Red Stockings, the first professional baseball team, founded.
*1870: Originally known as the Ohio Agricultural and Mechanical College, The Ohio State University chartered by the Ohio General Assembly.
*1876: Ohioan Rutherford B. Hayes elected the 19th President of the United States.
*1876: The Ashtabula Train Disaster resulted in 83 deaths.
*1879: Ohioan Thomas Edison invented the electric light bulb.
*1880: Ohioan James A. Garfield elected the 20th President of the United States.
*1881: Charles Guiteau shot President Garfield in Washington, D.C.; Garfield died three months later.
*1884: Cincinnati Courthouse Riot brought destruction and death to Cincinnati.
*1888: Ohioan Benjamin Harrison elected the 23rd President of the United States.
*1894: The first gasoline-powered automobile in the U.S. invented.
*1896: Ohioan William McKinley elected to his first of two terms as the 25th President of the United States.

==The Progressive Era: 1901-1928==
*1901: Leon Czolgosz assassinated President McKinley in Buffalo, New York.
*1902: The Ohio General Assembly officially adopted Ohio's state flag.
*1903: Ohioans Orville and Wilbur Wright completed the first successful flight of a powered airplane.
*1908: Ohioan William H. Taft elected the 27th President of the United States.
*1908: The Collinwood School Fire, near Cleveland, killed 173 pupils, two teachers, and one firefighter.
*1913: The Flood of 1913 caused the death of at least 428 Ohioans and brought destruction across the state.
*1914-1918: World War I
**1917: Camp Sherman, near Chillicothe, constructed to train World War I army troops.
**1918: Approximately 1,200 troops died at Camp Sherman during the Influenza Epidemic of 1918.
*1920: Ohioan Warren G. Harding elected the 29th President of the United States.
*1925: The dirigible Shenandoah crashed near Ava, Ohio, killing 14 people.
==The Great Depression and World War II: 1929-1945==
*1929-1941: The Great Depression
*1930: The Ohio Penitentiary Fire killed 322 inmates.
*1935: Ohio's first sales tax instituted.
*1937: The Ohio River Flood of 1937 brought devastation to southern Ohio.
*1941-1945: World War II entangled the United States and Ohio.
*1944: The East Ohio Gas Company Explosion in Cleveland killed 131 people.
*1949: The General Assembly created the Ohio Department of Natural Resources.

==The Cold War and the Civil Rights Movement: 1946-1975==
*1950-1953: The Korean War
*1953: Correcting an oversight, Congress passed a resolution officially recognizing Ohio statehood and declaring Ohio's date of entry into the Union as March 1, 1803.
*1955: The Ohio Turnpike completed.
*1958: Completion of the St. Lawrence Seaway connected Ohio cities on Lake Erie to international trade opportunities.
*1959: The General Assembly created the Ohio Civil Rights Commission.
*1962: Ohioan John Glenn became the first American to orbit the earth.
*1963: William O. Walker became Ohio's first African-American cabinet member.
*1963: The Professional Football Hall of Fame opened in Canton.
*1964-1974: Vietnam War
*1967: Carl Stokes became the first African-American mayor of a major city (Cleveland).
*1969: Ohioan Neil Armstrong became the first person to walk on the moon.
*1970: Kent State shootings - Four students killed by the Ohio National Guard during anti-war protests.
*1972: The General Assembly enacted the Ohio Income Tax.
*1973: Ohio voters approved the Ohio Lottery.
*1974: The deadly Xenia Tornado killed 33 people.

==Modern Ohio==
*1978: The Blizzard of 1978, the worst winter storm in Ohio's history.
*1979: Public schools in Columbus, Dayton and Cleveland began busing pupils to eliminate segregation.
*1986: Astronaut Judith Resnik, of Akron, died in the Challenger space shuttle explosion.
*1991: In their ruling in DeRolph v. State of Ohio, the Ohio Supreme Court declared Ohio's school funding program unconstitutional.
*1993: The Lucasville Prison Riot resulted in the deaths of nine prisoners and one guard.
*1995: The Rock and Roll Hall of Fame opened in Cleveland.
*1995: International negotiations at Wright-Patterson Air Force Base near Dayton led to the Bosnian Peace Agreement.
*1998: Astronaut John Glenn became the oldest man to travel into space.
*2001: Terrorist attacks on September 11 in New York City and Washington, D.C., led to a flurry of anti-terrorist activities in Ohio and throughout the nation.
*2006: Ohio voters passed a statewide smoking ban in public places.
What is Anemia in Dogs | Anemia Definition
The medical term anemia refers to a reduced number of healthy red blood cells or reduced levels of hemoglobin circulating in the dog’s bloodstream. Many pet owners assume that anemia is a specific disease with a definite diagnosis and treatment plan. However, this isn’t necessarily the case. In fact, anemia is most often a symptom of an underlying condition.
If a dog suffers from a lack of healthy red blood cells (or anemia), doctors refer to them as being anemic.
There is a wide array of underlying conditions and diseases that can cause anemia to develop.
The most common causes include:
- Blood loss from trauma which causes internal or external bleeding
- Autoimmune diseases, particularly immune-mediated hemolytic anemia (IMHA or AIHA)
- Tick-borne diseases
- Infectious diseases
- Gastrointestinal bleeding
- Kidney disease
- Bone marrow disease
- Cushing’s Disease
- Blood loss due to a parasitic infestation
- Chronic diseases that affect the production of red blood cells
- Medications that interfere with red blood cell production
- Poisons or toxins (such as ingesting rat poison or lead poisoning)
- Nutritional imbalances or poor nutrition
Additionally, there are certain breeds of dogs that may be at a higher risk of developing anemia due to being at a predisposition for developing specific diseases. Therefore, dog owners should speak to their veterinarian regarding conditions and warning signs that they should look out for based on their dog’s individual needs.
Anemia Symptoms | Anemic Symptoms
Anemia is typically a symptom of an underlying condition. In fact, it may be the only symptom or one of
many depending on the specific ailment. Anemic dogs will often display clinical signs such as lethargy and fatigue. Your dog may tire more quickly than usual during playtime. This weakness is due to the fact that red blood cells carry oxygen which is necessary for all basic bodily functions. Furthermore, when there is a reduction in the number of healthy blood cells, even the simplest of tasks can prove to be exhausting for your pup.
Additionally, black, tarry stool, known as melena, is another clinical sign of anemia. The black, tarry stool (which is actually digested blood) is often paired with blood in the vomit. Both of these are serious warning signs of anemia.
While exhaustion is a clinical sign of anemia, it is non-specific, meaning that it is also a symptom of many other diseases. Perhaps the most common
symptom of anemia is pale or white gums.
If a dog owner believes that their beloved pup may be anemic, the first thing they can do is check their gums. Healthy gums are pink in color. An anemic dog will have gums that are either very pale pink or even white in color.
Severe Anemia Symptoms
Typically, dog owners won’t have to second-guess whether their pup has severe anemia. This type of anemia often results from serious trauma, and the blood loss is very apparent. Oftentimes, severe anemia will require a blood transfusion. It is imperative for a dog with severe anemia (or a dog with any amount of blood loss) to see a veterinarian immediately.
What is a Normal RBC Count
Red blood cells should account for approximately 35%-55% of the dog’s blood.
What is a Low Red Blood Cell Count | Low RBC
If the dog’s red blood cells account for less than 35% of the total blood volume, then the dog is considered to be anemic.
There are several tests that a veterinarian will perform in order to accurately diagnose anemia. The most common test is the packed cell volume (PCV) or hematocrit (HCT) test.
HCT Blood Test (Hematocrit Blood Test)
Your veterinarian will administer the PCV or HCT test as part of the complete blood count (CBC). The hematocrit test involves processing the blood sample using what is known as a centrifuge. In the centrifuge, the red blood cells are separated from the plasma.
Once the red blood cells are separated, the technicians will be able to determine what percentage of the packed cell volume (PCV) is made up of red blood cells. As previously stated, if the percentage of healthy red blood cells is less than 35%, the dog is considered to be anemic.
A hematocrit is an instrument that measures the ratio of the volume of red blood cells to the total volume of blood.
Furthermore, the word itself dates from the late 19th century: hemato- ‘of blood’ + Greek kritēs, meaning ‘judge.’
RBC Blood Test and Hemoglobin Count
Your veterinarian may also administer an RBC blood test and a hemoglobin count during the CBC test in order to make a definite diagnosis.
Additional Tests for Diagnosing Anemia
In addition to the diagnostics performed during the complete blood count, there are several other important tests that veterinarians will often recommend.
A Blood Smear
A blood smear test will allow your veterinarian to determine whether a parasitic infection is causing the dog to be anemic. Additionally, the veterinarian can also use the blood smear test to reveal abnormal cell production, such as a drastic increase in white blood cells. An increase in white cells is often an indicator of leukemia.
Bone Marrow Biopsy
A bone marrow biopsy will allow the vet to determine if the anemia is responsive or unresponsive (also known as non-regenerative anemia).
If the anemia is responsive, it means that your dog’s body is actively trying to reverse the anemia. It will typically do this by releasing immature red blood cells into the bloodstream in an attempt to correct the deficiency of healthy red blood cells. These immature red blood cells are known as reticulocytes. Reticulocytes can also be seen on a blood smear test and are a positive indicator that the anemia is responsive.
Conversely, if the anemia is unresponsive (or non-regenerative), the bone marrow has not detected, and is not compensating for, the red blood cell deficiency. This can present additional, serious issues depending on what ailment is responsible for the anemia.
Additionally, your veterinarian may recommend testing for biochemical profiles, a urinalysis, and a fecal parasite exam if the results are still unclear.
Biochemical profiles and a urinalysis will provide essential information on your dog’s organ function and electrolyte levels. A fecal parasite exam will allow the veterinarian to determine whether an intestinal parasitic infestation is causing the reduction in healthy red blood cells.
Different Types of Anemia
Through the necessary testing, your veterinarian will also be able to distinguish the type of anemia that your pup is facing.
For instance, anemia due to an iron deficiency is quite common among women and also exists in dogs (although not nearly as common). An iron deficiency in dogs is usually seen only secondary to some kind of chronic blood loss. In some cases, iron deficiencies are diagnosed in puppies with severe hookworm infections or puppies with serious dietary nutritional deficiencies.
Autoimmune Hemolytic Anemia
One type of common canine anemia is autoimmune hemolytic anemia. In this particular form of anemia, the dog’s body destroys its own red blood cells.
With today’s technological advancements, your vet will be able to provide you with a better understanding of your dog’s red blood cell deficiencies.
Treating anemia will vary based on the severity. For instance, in severe cases of anemia, a blood transfusion will be necessary. The primary reason for the blood transfusion is to keep the dog stable. The dog’s stability is crucial while determining the underlying cause of the anemia.
In all cases, anemia treatment will ultimately be based on the underlying condition.
Potential treatment plans include:
- Anthelmintics (de-worming medications)
(Just to name a few)
Because there is such a wide range of ailments that can cause anemia, it is imperative that your veterinarian diagnoses the underlying condition first and then begins an effective, efficient treatment plan.
Severe cases of anemia can be life-threatening. If dog owners recognize any clinical signs of anemia they should not wait to get their pup medical treatment. A timely diagnosis can be the difference between life and death in some cases of anemia in dogs.
A Final Thought on Anemia in Dogs
While severe anemia can be fatal, the prognosis for dogs with anemia generally tends to be positive. However, it is imperative that the underlying cause is promptly diagnosed. A definite diagnosis and prognosis will ultimately vary based on the underlying cause.
We know how scary it is to hear that there may be something wrong with your dog’s blood levels. However, by knowing the signs of anemia, pet owners can ensure that if a problem arises, they can act quickly and efficiently and get their precious fur baby back on their feet in no time.
The simplest example of an expression is a simple constant, such as:
"Finn and the Fianna" 457.50 '21/2/68' #7000000
There are four different kinds of constants: string, numeric, date and hexadecimal, shown respectively above. Notice that a string constant must be enclosed in double quotation marks (or inch symbols), a date constant is enclosed in single quotation marks (or foot symbols), and a hexadecimal constant starts with a # symbol. If the string constant were not enclosed in quotes, the expression parser would treat each word as if it were an identifier.
To include a character in a string constant which you could not otherwise include, you need to use an escape sequence. An escape sequence consists of a pair of characters made from the escape character \ (backslash), followed by the escape metacharacter for the character that you want. The metacharacters used by MoneyWorks are " (quote or inch symbol), t, n, r and \. Their use is shown in the table below:
|quote||\"||"He said \"Fiddlesticks\""|
|newline||\n||"This is line 1\nThis is line 2"|
|return||\r||"The \r of the native"|
|backslash||\\||"The backslash \\ is a special character"|
As an example, to replace all commas in a string with newlines, you would use an expression such as:
Replace(theString, ",", "\n")
As well as the double quotation marks for delimiting strings, you can use the backquote, e.g.
Replace(theString, `,`, "\n")
Having two string delimiters means that you don’t have to use escape sequences for expressions that involve embedded strings.
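For example, both of the following expressions replace every embedded double quote with an apostrophe (a hypothetical sketch using the Replace function and the theString field from the examples above):

```
Replace(theString, "\"", "'")
Replace(theString, `"`, "'")
```

The first form must escape the double quote with \"; the second uses backquote delimiters, so no escape sequence is needed.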
Note: Don't confuse the backquote ` with the apostrophe ' — the latter is used to delimit dates.
What were the Volhynian Massacres?
The Volhynian massacres were anti-Polish genocidal ethnic cleansings conducted by Ukrainian nationalists. The massacres took place within Poland’s borders as of the outbreak of WWII, and not only in Volhynia, but also in other areas with a mixed Polish-Ukrainian population, especially the Lvov, Tarnopol, and Stanisławów voivodeships (that is, in Eastern Galicia), as well as in some voivodeships bordering on Volhynia (the western part of the Lublin Voivodeship and the northern part of the Polesie Voivodeship – see map). The time frame of these massacres was 1943−1945. The perpetrators were the Organization of Ukrainian Nationalists−Bandera faction (OUN-B) and its military wing, called the Ukrainian Insurgent Army (UPA). Their documents show that the planned extermination of the Polish population was called an “anti-Polish operation.”
Controversies about the Volhynian Massacres
The anti-Polish drive of the pro-Bandera Ukrainian underground during World War II, together with the subsequent Polish retaliation it largely spawned, undoubtedly mark the bloodiest period of the Polish-Ukrainian conflict in the 1940s. This conflict raged in territories which were within Poland’s interwar borders (basically, the country’s south-east), and which, taken as a whole, had nearly co-equal Polish and Ukrainian populations. We use the word “conflict” because there was obvious antagonism between Poles and Ukrainians, and they waged a fight for land – even though they had been citizens of the same state (the Second Republic of Poland, 1918-1939). “Conflict” is thus one of the terms used to describe what happened between Poles and Ukrainians during World War II.
Genocide is a legal category. The Volhynian massacres have all the traits of genocide listed in the 1948 UN Convention on the Prevention and Punishment of the Crime of Genocide, which defines genocide as an act “committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such.” In Polish academia the Volhynian massacres are referred to as genocidal ethnic cleansings, the Volhynian (or Volhynian-Galician) slaughter, or, in legal terminology, the crime of genocide.
The Righteous Ukrainians
According to a range of testimonies, many Ukrainians helped their Polish neighbors whose lives were in danger. That help assumed the following forms: warnings about attacks; showing an escape route during an attack; sheltering Poles before an expected attack; misleading the attackers; the provision of first aid to wounded Poles; the provision of food or clothing to survivors; taking care of orphans and children lost after attacks (...
Flu Shot Side Effects
Many people worry about side effects from the flu shot, but serious complications are rare. Some people believe that they can actually get the flu from receiving the shot, but this is not the case. For the majority of people, the risks of developing the flu are far greater than any risks associated with the vaccine.
What is flu (influenza)?
Influenza, commonly called "the flu," is an illness caused by RNA viruses
(Orthomyxoviridae family) that infect the respiratory tract of many animals, birds, and humans. In most people, the infection results in fever, cough, headache, and malaise (tiredness, lack of energy); some people also may develop a sore throat, nausea, vomiting, and diarrhea. The majority of individuals have flu symptoms for about
1-2 weeks and then recover with no problems. However, compared with most other viral respiratory infections, such as the common cold, influenza (flu) infection can cause a more severe illness with a mortality rate (death rate) of about 0.1% of people infected with the virus.
The above is the usual situation for the yearly occurring "conventional" or "seasonal" flu strains. However, there are situations in which some flu outbreaks are severe. These severe outbreaks occur when a portion of the human population is exposed to a flu strain against which the population has little or no immunity because the virus has become altered in a significant way. These outbreaks are usually termed epidemics. Unusually severe worldwide outbreaks (pandemics) have occurred several times in the last hundred years; the influenza virus itself was not identified until 1933. Based on examination of preserved tissue, the worst influenza pandemic (also termed the Spanish flu or Spanish influenza) occurred in 1918, when the virus caused between 40 and 100 million deaths worldwide, with a mortality rate estimated to range from 2%-20%.
In April 2009, a new influenza strain against which the world population has little or no immunity was isolated from humans in Mexico. It quickly spread throughout the world so fast that the WHO declared this new flu strain (first termed novel H1N1 influenza A swine flu, often later shortened to H1N1 or swine flu) as the cause of a pandemic on June 11, 2009. This was the first declared flu pandemic in 41 years. Fortunately, there was a worldwide response that included vaccine production, good hygiene practices (especially hand washing), and the virus (H1N1) caused far less morbidity and mortality than was expected and predicted. The WHO declared the pandemic's end on Aug. 10, 2010, because it no longer fit into the WHO's criteria for a pandemic.
Researchers identified a new influenza-related viral strain, H3N2v, in 2011, but this strain has caused only about 330 infections, with one death, in the U.S.
Since 2003, researchers have identified another strain, H5N1, a bird flu virus, that has caused about 650 human infections. This virus has not been detected in the U.S. Unfortunately, people infected with H5N1 have a high death rate (about 60% of infected people die). Currently, however, H5N1 does not readily transfer from person to person like other flu viruses.
The most recent mortality (death rate) data for influenza in the United States indicate that mortality from influenza varies from year to year. Death rates estimated by the CDC range from about 12,000 during 2011-2012 to 56,000 during 2012-2013. In the 2017-2018 season, deaths reached a new high of about 79,000. The CDC estimates that between 24,000 and 62,000 deaths occurred in the 2019-2020 flu season. Experts suggest that a large percentage of people went unvaccinated or refused to vaccinate family members, increasing the number of deaths due to the flu.
Haemophilus influenzae is a bacterium incorrectly considered to cause the flu until the virus was demonstrated to be the correct cause in 1933. This bacterium can cause lung infections in infants and young children, and it occasionally causes ear, eye, sinus, joint, and a few other infections, but it does not cause the flu.
Another confusing term is stomach flu. This term refers to a gastrointestinal tract infection, not a respiratory infection like influenza (flu).
Influenza viruses do not cause the stomach flu (gastroenteritis). Another naming problem is with the condition called swine flu. Swine flu is a flu-like illness that usually infects pigs, but the term was applied to a flu strain that also could infect humans (H1N1). In 2018-19, the pig version of the virus (not infecting humans to date) killed a large share of the pigs in China, forcing that country to begin to utilize its emergency stockpile of pork. The viral strain has now been detected in South Korea.
Although initially symptoms of influenza may mimic those of a cold, influenza is more debilitating with symptoms of fatigue, fever, and respiratory congestion. Colds can be caused by over 100 different virus types, but only influenza viruses (and subtypes) A, B, and C cause the flu. In addition, colds do not lead to life-threatening illnesses like pneumonia, but severe infections with influenza viruses can lead to pneumonia or even death.
Flu vs. cold
Compared with most other viral respiratory infections, such as the common cold,
influenza (flu) infection usually causes a more severe illness with a mortality
rate (death rate) of about 0.1% of people infected with the virus. Cold symptoms (for example, sore throat, runny nose, cough (with possible phlegm production), congestion, and slight fever) are similar to flu symptoms, but the flu symptoms are more severe, last longer, and may include vomiting, diarrhea, and cough that is often a dry cough.
The following table
from the CDC helps to distinguish between a cold and influenza:
|Signs and Symptoms||Influenza||Cold|
|Fever||Usual; lasts 3-4 days||Rare|
|Aches||Usual; often severe||Slight|
|Chest discomfort, cough||Common; can be severe||Mild to moderate; hacking cough|
Flu vs. food poisoning
Although some of the symptoms of influenza may mimic those of food poisoning, others do not. Typical symptoms of food poisoning include nausea, vomiting, watery diarrhea, abdominal pain, cramps, and fever. Note that the majority of food poisoning symptoms are related to the gastrointestinal tract, except for fever. The common flu signs and symptoms include fever but also include symptoms that are not typical for food poisoning, because the flu is a respiratory disease. Consequently, respiratory symptoms of nasal congestion, dry cough, and some breathing problems help distinguish the flu from food poisoning.
What are the causes of the flu (influenza)?
Influenza virus information
Influenza viruses cause the flu and are divided into three types, designated A, B, and C. Influenza A and influenza B are responsible for epidemics of respiratory illness that occur almost every winter and are often associated with increased rates of hospitalization and death. Influenza type C differs from types A and B in some important ways. Type C infection usually causes either a very mild respiratory illness or no symptoms at all.
It does not cause epidemics and does not have the severe public health impact of influenza types A and B. Efforts to control the impact of influenza are aimed at types A and B, and the remainder of this discussion will be devoted only to these two types.
Influenza viruses continually change over time, usually by mutation (change in the viral RNA). This constant changing often enables the virus to evade the immune system of the host (humans, birds, and other animals) so that the host is susceptible to changing influenza virus infections throughout life. This process works as follows: A host infected with influenza virus develops antibodies against that virus; as the virus changes, the "first" antibody no longer recognizes the "newer" virus and infection can occur because the host does not recognize the new flu virus as a problem until the infection is well under way. The first antibody developed may provide partial protection against infection with a new influenza virus. In 2009, almost all individuals had no antibodies that could recognize the novel H1N1 virus immediately.
Type A viruses are divided into subtypes or strains based on differences in two viral surface proteins called the hemagglutinin (H) and the neuraminidase (N). There are at least 16 known H subtypes and nine known N subtypes. These surface proteins can occur in many combinations. When spread by droplets or direct contact, the virus, if not killed by the host's immune system, replicates in the respiratory tract and damages host cells. In people who are immune compromised (for example, pregnant women, infants, cancer patients, asthma patients, people with pulmonary disease, and many others), the virus can cause viral pneumonia or stress the individual's system to make them more susceptible to bacterial infections, especially bacterial pneumonia. Both pneumonia types, viral and bacterial, can cause severe disease and sometimes death.
Antigenic shift and drift
Figure 2. An example of influenza antigenic shift and drift
Influenza type A viruses undergo two major kinds of changes. One is a series of mutations that occurs over time and causes a gradual evolution of the virus. This is called antigenic "drift." The other kind of change is an abrupt change in the hemagglutinin and/or the neuraminidase proteins. This is called antigenic "shift." In this case, a new subtype of the virus suddenly emerges. Type A viruses undergo both kinds of changes; influenza type B viruses change only by the more gradual process of antigenic drift and therefore do not cause pandemics.
The 2009 pandemic-causing H1N1 virus was a classic example of antigenic shift. Research showed that novel H1N1 swine flu has an RNA genome that contains five RNA strands derived from various swine flu strains, two RNA strands from bird flu (also termed avian flu) strains, and only one RNA strand from human flu strains. According to the CDC, mainly antigenic shifts over about 20 years led to the development of the novel H1N1 flu virus. A diagram illustrating both antigenic shift and drift (see Figure 2) features influenza A types H1N1 and bird flu (H5N1), but almost every influenza A viral strain can go through these processes, which change the viral RNA. A recent flu epidemic in India was partially blamed on antigenic drift/shift.
When does flu season begin and end?
Flu season officially begins in October of each year and extends to May of the following year. According to the CDC, people can follow the development of flu across the United States by following CDC's weekly update of the locations where flu is developing in the U.S. (see the flu map).
What are flu (influenza) symptoms in adults and in children?
Typical clinical symptoms of the flu may include
- fever (usually 100 F-103 F in adults and often even higher in children, sometimes with facial flushing and/or sweating),
- respiratory symptoms such as
- cough (more often in adults),
- sore throat (more often in adults),
- runny or stuffy nose (congestion, especially in children),
- muscle aches (body aches), and
- fatigue, sometimes extreme.
Although appetite loss, nausea, vomiting, and diarrhea can sometimes accompany influenza infection, especially in children, gastrointestinal symptoms are rarely prominent. The term "stomach flu" is a misnomer that
some people use to describe gastrointestinal illnesses caused by other microorganisms. H1N1 infections, however, caused more nausea, vomiting, and diarrhea than the conventional (seasonal) flu viruses. Depending upon the severity of the infection, some patients can develop swollen lymph nodes, muscle pain, shortness of breath, severe headaches, chest pain or chest discomfort, dehydration, and even death.
Most individuals who contract influenza recover in a week or two; however, others develop potentially life-threatening complications like pneumonia. In an average year, influenza is associated with about 36,000 deaths nationwide and many more hospitalizations. Flu-related complications can occur at any age; however, the elderly and people with chronic health problems are much more likely to develop serious complications after conventional influenza infections than are younger, healthier people. When people ignore or refuse flu vaccination, death rates tend to rise, as recent higher death rates have shown.
Influenza A virus information
As mentioned previously, the influenza A virus has hemagglutinin on its surface. The viral hemagglutinins have at least 18 types, but these types fall into two main influenza A virus categories. One of the two main categories includes human H1, H2, and avian H5 viruses, while the other major category includes human H3 and avian H7 viruses. Researchers in 2016 at UCLA and the University of Arizona discovered that if you were exposed to one of these groups as a child, you had a much better chance of being protected against other viruses in that same group or category later in life. For example, if you are exposed to H2 as a child and then later in life to H2 or H5 viruses, you may have as high as a 75% chance of protection against those H2 and/or H5 strains.
But if you are exposed instead to the other major category that includes H3 or H7, you would be much more susceptible to these viral types. The reverse situation would be true if you were exposed as a child to H3 or H7 viruses. The researchers concluded that this immunological imprinting early in life helps determine the immune response to these viral types or categories. Consequently, the first strain of flu a person is exposed to in childhood likely determines that person's future risk for severe flu, depending upon the category of that first infecting strain. The researchers hope to exploit these new findings in the development of new and more effective flu vaccines.
What is the incubation period for the flu?
The incubation period for the flu, meaning the time from exposure to the flu virus until initial symptoms develop, is typically 1-4 days, with an average of 2 days.
How long is the flu contagious?
The flu is typically contagious about 24-48 hours before symptoms appear (from about the last day of the incubation period) and in normal healthy adults is contagious for another 5-7 days. Children are usually contagious for a little while longer (about 7-10 days). Individuals with severe infections may be contagious as long as symptoms last (about 7-14 days).
How long does the flu last?
In adults, flu symptoms usually last about 5-7 days, but in children, the symptoms may last longer (about 7-10 days). However, some symptoms such as weakness and fatigue may gradually wane over several weeks.
How do health care professionals diagnose the flu (influenza)?
Medical professionals clinically diagnose the flu by evaluating the patient's symptoms listed above and any history of contact with people known to have the disease. Usually, a health care professional performs a quick test (for example, on a nasopharyngeal swab sample) to see if the patient has an influenza A or B viral infection. Most of the tests can distinguish between A and B types. The test can be negative (no flu infection) or positive for type A or B. If it is positive for type A, the person could have a conventional flu strain or a potentially more aggressive strain such as H1N1. Rapid influenza diagnostic tests (RIDTs) detect viral antigens and can screen for influenza in about 10-30 minutes, while rapid molecular assays identify the genetic material of the virus.
Swine flu (H1N1) and other influenza strains like bird flu or H3N2 are definitively diagnosed by identifying the particular surface proteins or genetic material associated with the virus strain. In general, this testing is done in a specialized laboratory. However, doctors' offices are able to send specimens to specialized laboratories if necessary.
How does flu spread?
How can you get influenza?
Flu easily spreads from person to person both directly and indirectly. Human-to-human flu transmission occurs via droplets contaminated with the virus. Produced by coughing, sneezing, or even talking, these droplets land near or in the mouth or the nose of uninfected people, and the disease may spread to them. The disease can spread indirectly if contaminated droplets land on utensils, dishes, clothing, or almost any surface that uninfected people then touch. If an infected person touches their nose or mouth and then touches other people or shared surfaces, they can likewise spread the disease.
What is the key to flu (influenza) prevention?
Annual influenza vaccination can prevent most of the illness and death that influenza causes. The CDC's Advisory Committee on Immunization Practices (ACIP) currently recommends that everyone 6 months of age and older who does not have any contraindications to vaccination receive a flu vaccine each year.
Flu vaccine (influenza vaccine made from inactivated and sometimes attenuated [noninfective] virus or virus components) is specifically recommended for those who are at high risk for developing serious complications
from influenza infection.
Other simple hygiene methods can reduce or prevent some individuals from getting the flu. For example, avoiding kissing, handshakes, and sharing drinks or food with infected people and avoiding touching surfaces like sinks and other items handled by individuals with the flu are good preventive measures. Washing one's hands with soap and water or using an alcohol-based hand sanitizer frequently during the day may help prevent the infection. Individuals with the flu should avoid coughing or sneezing on uninfected people; quick hugs are probably okay as long as there is no contact with mucosal surfaces and/or droplets that may contain the virus. Wearing a mask may help reduce your chances of getting the disease and, if you have the infection (knowingly or not), may help reduce spreading it to others.
Are there any nasal spray vaccine or flu shot side effects in adults or in children?
Although annual influenza (injectable) vaccination has long been recommended for people in the high-risk groups, many still do not receive the vaccine, often because of concern about side effects. They mistakenly perceive influenza as merely a nuisance and believe that the vaccine causes unpleasant side effects or that it may even cause the flu. The truth is that influenza vaccine causes no side effects in most people. In the past, patients with egg allergy had restrictions on getting the vaccine. However, extensive research has indicated that there is not enough egg protein in the vaccine to trigger an immune response, and all the recommendations about egg allergies were dropped for the 2018-2019 flu season by several organizations that regulate vaccines.
The vaccine is not recommended while individuals have active infections or active diseases of the nervous system. Less than one-third of those who receive the vaccine have some soreness at the vaccination site, and about 5%-10% experience mild side effects, such as headache, low-grade fever, or muscle cramps, for about a day after vaccination; some may develop swollen lymph nodes. These side effects are most likely to occur in children who have not been exposed to the influenza virus in the past. The intradermal shots reportedly have side effects similar to those of the intramuscular (IM) shot, but they are less intense and may not last as long.
Nevertheless, some older people remember earlier influenza vaccines that did, in fact, produce more unpleasant side effects. Vaccines produced from the 1940s to the mid-1960s were not as highly purified as modern influenza vaccines, and it was these impurities that caused most of the side effects. Since the side effects associated with these early vaccines, such as fever, headache, muscle aches, and/or fatigue and malaise, were similar to some of the symptoms of influenza, people believed that the vaccine had caused them to get the flu. However, injectable influenza vaccine produced in the United States has never been capable of causing influenza because it consists of killed virus.
Another type of influenza vaccine (nasal spray) is made with live attenuated (altered) influenza viruses (LAIV). This vaccine is made with live viruses that can stimulate the immune response enough to confer immunity but do not cause classic influenza symptoms (in most instances). The nasal spray vaccine (FluMist) was previously approved only for healthy individuals 2-49 years of age and was recommended preferentially for healthy children aged 2 through 8 who did not have contraindications to receiving the vaccine. However, this season, the CDC and others report there is no preference expressed for any vaccine over another. The American Academy of Pediatrics (AAP) recommends that all children 6 months and older receive a seasonal flu vaccine (some children under the age of 9 will need two doses). The AAP and others recommend both inactivated influenza vaccines (IIV) and live attenuated influenza vaccine (LAIV) as vaccine options for the 2020-2021 season with no preference for any vaccine type. However, FluMist, a live attenuated vaccine, is recommended for ages 2-49 only. For additional information about vaccines, see https://www.aap.org/en-us/advocacy-and-policy/aap-health-initiatives/immunizations/Influenza-Implementation-Guidance/Pages/Annual-AAP-Influenza-Policy.aspx or your child's doctor. Because the nasal spray vaccine contains live attenuated virus (less able to cause flu symptoms due to a designed inability to replicate at normal body temperatures), side effects can include nasal congestion, sore throat, and fever. Headaches, muscle aches, irritability, and malaise have also been noted. In most instances, if side effects occur, they only last a day or two. This nasal spray has been produced for conventional flu viruses and should not be given to pregnant women or anyone who has a medical condition that may compromise the immune system, because in rare instances the vaccine virus itself may cause a mild flu-like illness.
Some people do not receive influenza vaccine because they believe it is not very effective. There are several different reasons for this belief. People who have received influenza vaccine may subsequently have an illness that is mistaken for influenza, and they believe that the vaccine failed to protect them. In other cases, people who have received the vaccine may indeed have an influenza infection. Overall vaccine effectiveness varies from year to year, depending upon the degree of similarity between the influenza virus strains included in the vaccine and the strain or strains that circulate during the influenza season. Because the vaccine strains must be chosen
9-10 months before the influenza season, and because influenza viruses mutate over time, sometimes mutations occur in the circulating virus strains between the time the vaccine strains are chosen and the next influenza season ends. These mutations sometimes reduce the ability of the vaccine-induced antibody to inhibit the newly mutated virus, thereby reducing vaccine effectiveness. This commonly occurs with the conventional flu vaccines as the specific virus types chosen for vaccine inclusion are based on reasoned projections for the upcoming flu season. Occasionally, the vaccine does not match the actual predominating virus strain and is not very effective in generating a specific immune response to the predominant infecting flu strain.
How effective is the flu vaccine?
Vaccine efficacy also varies from one person to another. Past studies of healthy young adults have shown influenza vaccine to be 70%-90% effective in preventing illness. In the elderly and those with certain chronic medical conditions such as HIV, the vaccine is often less effective in preventing illness. Studies show the vaccine reduces hospitalization by about 70% and death by about 85% among the elderly who are not in nursing homes. Among nursing home residents, vaccine can reduce the risk of hospitalization by about 50%, the risk of pneumonia by about 60%, and the risk of death by 75%-80%. However, these figures did not apply to the 2014-2015 flu vaccine because the quadrivalent (four antigenic types) vaccine did not match well with 2014-2015 circulating strains of the flu (vaccine effectiveness was estimated to be 23%). This occurs because the vaccine needs to be produced months before the flu season begins, so the vaccine is designed by projecting and choosing the most likely viral strains to include in the vaccine. If drift results in changing the circulating virus from the strains used in the vaccine, efficacy may be reduced. However, the vaccine is still likely to lessen the severity of the illness and to prevent complications and death, according to the CDC.
Why do people need to get the flu (influenza) vaccine every year?
Although only a few different influenza virus strains circulate at any given time, people may continue to become ill with the flu throughout their lives. The reason for this continuing susceptibility is that influenza viruses are continually mutating, through the mechanisms of antigenic shift and drift described above. Each year,
researchers update the vaccine to include the most current influenza virus strains that are infecting people worldwide. The fact that influenza viral genes continually change is one of the reasons
people must get the vaccine every year. Another reason is that antibody produced by the host in response to the vaccine declines over time, and antibody levels are often low one year after vaccination, so even if the same vaccine is used, it can act as a booster shot to raise immunity.
Many people still refuse to get flu shots because of misunderstandings, fear, "because I never get any shots," or simply a belief that if they get the flu, they will do well. These are only some of the reasons -- there are many more. The U.S. and other countries' populations need to be better educated about vaccines; at least they should realize that safe vaccines have been around for many years (for measles, mumps, chickenpox, and even cholera), and as adults they often have to get an injection to test for tuberculosis exposure or a vaccine to protect themselves from tetanus. The flu vaccines are as safe as these vaccines and shots that are widely accepted by the public. Consequently, better efforts need to be made to make yearly flu vaccines as widely accepted as other vaccines. Susceptible people need to understand that the vaccines afford them a significant chance to reduce or prevent this potentially debilitating disease, hospitalization, and, in a few cases, death from the flu.
What are some flu treatments an individual can do at home (home remedies)?
First, individuals should be sure they are not members of a high-risk group that is more susceptible to getting severe flu symptoms. Check with a physician if you are unsure if you are a higher-risk person.
The CDC recommends home care if a person is healthy with no underlying diseases or conditions (for example, asthma, lung disease, pregnancy, or immunosuppression).
Increasing liquid intake, warm showers, and warm compresses, especially in the nasal area, can reduce the body aches and reduce nasal congestion or head congestion. Nasal strips and humidifiers may help reduce congestion, especially while trying to sleep. Some physicians recommend nasal irrigation with saline to further reduce congestion; some recommend nonprescription decongestants like pseudoephedrine (Sudafed).
Over-the-counter fever-reducing medications like acetaminophen (Tylenol) or ibuprofen (Advil, Motrin and others)
can treat a fever. Read labels for safe dosage. Cough drops, over-the-counter cough syrup, or cough medicine that may contain dextromethorphan (Delsym) and/or guaifenesin (Mucinex)
can suppress a cough. Notify a doctor if an individual's symptoms at home get worse.
What types of doctors treat the flu?
Individuals with mild flu symptoms may not require the care of a physician unless they are a member of a high-risk group as described above. For many individuals, treatment is provided by their primary care physician or provider (including internists or family medicine specialists and physician assistants and other primary caregivers) or pediatrician. Complicated or severe flu infections may require consultation with an emergency-medicine physician, critical care specialist, infectious-disease specialist, and/or a lung specialist (pulmonologist).
What medications treat the flu?
The CDC published the following guidance concerning antiviral drugs:
Antiviral medications with activity against influenza viruses
(anti-influenza drugs) are an important adjunct to influenza vaccine in the control of influenza.
- Influenza antiviral prescription drugs can be used to treat influenza or to prevent influenza.
- Oseltamivir, zanamivir, and peramivir are chemically related antiviral medications known as neuraminidase inhibitors that have activity against both influenza A and B viruses.
- In October 2018 (10/24/2018), the FDA approved a new antiviral drug (baloxavir marboxil [Xofluza]) for flu treatment; it works by inhibiting viral replication.
The CDC recommended the following antiviral medications for the treatment of influenza
(flu) for the 2020-2021 season: oral oseltamivir (Tamiflu),
inhaled zanamivir (Relenza), intravenous peramivir (Rapivab), and oral baloxavir (Xofluza).
Over-the-counter medications that may help reduce symptoms of congestion (decongestants), coughing (cough medicine), and dehydration include diphenhydramine (Benadryl), acetaminophen (Tylenol), NSAIDs (Advil, Motrin, Aleve), guaifenesin (Mucinex), dextromethorphan (Delsym), pseudoephedrine (Sudafed), and oral fluids. Aspirin may be used in adults but not in children.
Antibiotics treat bacterial infections, not viral illnesses like the flu.
Individuals with the flu may also benefit from some additional bed rest, throat lozenges, and possibly nasal irrigation; drinking fluids may help prevent symptoms of dehydration (for example, dry mucus membranes and decreased urination).
What can people eat when they have the flu?
While a person has the flu, good nutrition can help the recovery process. Anyone with the flu needs to avoid dehydration, soothe a sore throat and/or upset stomach, and maintain a good protein intake.
Avoid dehydration by maintaining an adequate fluid intake. Sore throat and upset stomach may be relieved by broths or warm soups (chicken, vegetable, or beef) and plain crackers, toast, and ginger tea or noncarbonated ginger ale. Scrambled eggs, yogurt, and/or protein drinks are good protein sources. In addition, bananas, rice, and applesauce are foods that are often recommended for those with an upset stomach. This list is not exhaustive but should provide a balanced approach to help speed recovery from the flu.
When should a person go to the emergency department for the flu?
The CDC urges people to seek emergency medical care for a sick child with any of these flu effects (symptoms or signs):
- Fast breathing or trouble breathing (shortness of breath)
- Bluish or gray skin color
- Not drinking enough fluids
- Severe or persistent vomiting
- Not waking up or not interacting
- Being so irritable that the child does not want to be held
- Flu-like symptoms improve but then return with fever and cough
The following is the CDC's list of symptoms that should trigger emergency medical care for adults:
- Difficulty breathing or shortness of breath
- Pain or pressure in the chest or abdomen
- Sudden dizziness
- Severe or persistent vomiting
- Influenza-like symptoms improve but then return with fever and worse cough
- Having a high fever for more than 3 days is another danger sign, according to the WHO, so the CDC has also included this as another serious symptom.
Who should receive the flu vaccine, and who has the highest risk factors? When should someone get the flu shot?
In the United States, the flu season usually occurs from about October until May, with very low activity until December and peak activity most often between January and March; for surveillance purposes, officials set the official start of the 2020-2021 flu season at Oct. 4. Ideally, the conventional flu vaccine should be administered between September and mid-November. It takes about
1-2 weeks after vaccination for antibodies against influenza to develop and provide protection. The CDC has published a summary list of their current recommendations of who should get the current vaccine.
Summary of CDC influenza vaccination recommendations for 2020-2021
CDC information and guidance in this report includes the following taken
directly from the CDC:
Routine annual influenza vaccination is recommended for all persons aged ≥6 months who do not have contraindications.
A licensed vaccine appropriate for age and health status should be used. Consult package information for age indications.
Emphasis should be placed on vaccination of high-risk groups and their contacts/caregivers. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to (no hierarchy is implied by order of listing):
- Children aged 6 through 59 months
- Adults aged ≥50 years
- Persons with chronic pulmonary (including asthma), cardiovascular (excluding isolated hypertension), renal, hepatic, neurologic, hematologic, or metabolic disorders (including diabetes mellitus)
- Persons who are immunocompromised due to any cause, including (but not limited to) medications or HIV infection
- Women who are or will be pregnant during the influenza season
- Children and adolescents (aged 6 months through 18 years) receiving aspirin- or salicylate-containing medications who might be at risk for Reye syndrome associated with influenza
- Residents of nursing homes and long-term care facilities
- American Indians/Alaska Natives
- Persons who are extremely obese (BMI ≥40 for adults)
- Caregivers and contacts of those at risk:
- Health care personnel, including all paid and unpaid persons working in health care settings who have potential for exposure to patients and/or to infectious materials, whether or not directly involved in patient care;
- Household contacts and caregivers of children aged ≤59 months (that is, <5 years), particularly contacts of children aged <6 months, and of adults aged ≥50 years;
- Household contacts and caregivers of persons with medical conditions associated with increased risk of severe complications from influenza.
For more information and details too extensive to include here (for example, vaccine types, scheduled doses, side effects), the following site is recommended: Prevention and Control of Seasonal Influenza with Vaccines: Recommendations of the Advisory Committee on Immunization Practices (ACIP)
-- United States, 2020-21.
For additional information: MMWR Recomm Rep 2020;69(No. RR-8), at https://www.cdc.gov/vaccines/hcp/acip-recs/vacc-specific/flu.html.
What is the prognosis for patients who get the flu? What are possible complications of the flu?
In general, the majority (about 90%-95%) of people who get the disease feel terrible (see symptoms) but recover with no problems. People with suppressed immune systems historically have worse outcomes than uncompromised individuals; current data suggest that pregnant women, children under 2 years of age, young adults, and individuals with any immune compromise or debilitation are likely to have a worse prognosis. Complications or long-term problems from the flu may worsen medical conditions such as asthma, congestive heart failure, and diabetes. Other complications may include ear infections, sinus infections, dehydration, pneumonia, and even death. In most outbreaks, epidemics, and pandemics, the mortality rates are highest in the older population (usually above 50 years old). Complications of any flu virus infection, although relatively rare, may resemble severe viral pneumonia or the SARS (severe acute respiratory syndrome, caused by a coronavirus strain) outbreak of 2002-2003, in which the disease spread to about 10 countries with over 7,000 cases, over 700 deaths, and a roughly 10% mortality rate. Guillain-Barré syndrome (GBS), a rare immune disorder that can result in weakness or paralysis, may occur after having the flu or, very rarely, after vaccination against the flu (estimated by the CDC at about one person per every million people vaccinated).
Can the flu be deadly?
Yes. However, associated deaths per year depend upon the virulence of the particular strains of virus that are circulating, which means that for any given year, the likelihood of dying from the flu varies according to the specific infecting viruses. For example, from 1976-2007 (the most reliable available data, according to the CDC), deaths associated with the flu ranged from a low of about 3,000 per year to a high of about 49,000 per year. The CDC estimates about 36,000 deaths per year (on average) in the U.S. in recent years, but these may increase if vaccination rates continue to fall. The 1918 influenza pandemic (1918-1919) was estimated to have caused 20-50 million deaths worldwide.
What is the bird (avian) flu?
The bird flu, also known as avian influenza and H5N1, is an infection caused by avian influenza A viruses. Bird flu can infect many bird species, including domesticated birds such as chickens. In most cases, the disease is mild; however, some subtypes are highly pathogenic and can rapidly kill birds within 48 hours.
These bird viruses rarely infect humans. People who get infected with bird flu usually have direct contact with the infected birds or their waste products. Depending on the viral type, the infections can range from mild influenza to severe respiratory problems or death. Human infection with bird flu is rare but frequently fatal: more than half of those infected (over 650 people) have died, and current estimates of the mortality (death) rate in humans are about 60%. Fortunately, this virus does not seem to be easily passed from person to person. The major concern among scientists and physicians about bird flu is that it will change (mutate) its viral RNA enough to be easily transferred among people and produce a pandemic similar to the one of 1918. There have been several isolated instances of human avian flu; in 2010, the virus was detected in South Korea (three human cases), resulting in a quarantine of two farms, and in 2012, over 10,000 turkeys died in an H5N1 outbreak with no human infections recorded. Recent research suggests that some people may have been exposed to H5N1 in the past but had either mild or no symptoms.
In addition, researchers, in an effort to understand what makes an animal or bird flu become easily transmissible to humans, developed a bird flu strain that is likely easily transmitted from person to person. Although it exists only in research labs, there is controversy about both the synthesis and the scientific publication of how this potentially highly pathogenic strain was created.
Do antiviral agents protect people from the flu?
Vaccination is the primary method for control of influenza; however, antiviral agents have a role in the prevention and treatment of mainly influenza type A infection. Regardless, antiviral agents should not be considered a substitute or alternative for vaccination. These drugs are reported to be most effective if given within the first 48 hours after infection; some researchers maintain there is little or no solid evidence that they can protect people from getting the flu, so some controversy exists regarding these agents.
Is it safe to get a flu shot that contains thimerosal?
Thimerosal is a preservative that contains mercury and is used in multidose vials of conventional flu vaccines to prevent contamination when the vial is repeatedly used to extract the vaccine. Although thimerosal is being phased out as a vaccine preservative, it is still used in flu vaccines at low levels. There are no data indicating that thimerosal in these vaccines has caused autism or other problems in individuals. Flu vaccine that is produced for single use (not in a multidose vial) contains no thimerosal; however, these vials are not as readily available to doctors and likely cost more to produce. Consequently, the FDA has published these two questions with clear answers that are quoted below:
"Is it safe for children to receive an influenza vaccine that contains thimerosal?"
"Yes. There is no convincing evidence of harm caused by the small doses of thimerosal preservative in influenza vaccines, except for minor effects like swelling and redness at the injection site."
"Is it safe for pregnant women to receive an influenza vaccine?"
"Yes. A study of influenza vaccination examining over 2,000 pregnant women demonstrated no adverse fetal effects associated with influenza vaccine. Case reports and limited studies indicate that pregnancy can increase the risk for serious medical complications of influenza. One study found that out of every 10,000 women in their third trimester of pregnancy during an average flu season, 25 will be hospitalized for flu-related complications."
However, as stated above, the FDA goes on to say that single-dose vials of conventional and other flu vaccines do not contain the preservative thimerosal, so a person who wants to avoid thimerosal can ask for vaccine that comes in a single-dose vial. The nasal spray vaccine contains no thimerosal, but it is not recommended for use in pregnant women. The CDC further states that, after numerous studies, there is no established link between flu shots, with or without thimerosal, and autism.
Where can people find additional information about the flu?
During a flu pandemic, guidelines and situations can change rapidly. People are advised to be aware that several sources are available to keep them current with developments. The websites below are frequently updated, especially when a pandemic is declared. The first website contains an update written for the public and caregivers; the government and WHO sites provide detailed information that is updated as guidelines and developments occur.
Will the flu shot help fight the coronavirus?
Coronavirus (SARS-CoV-2) is the cause of the current pandemic of COVID-19. This virus is different from the flu viruses, and there are no data suggesting a flu vaccine (shot) will protect you from getting COVID-19. However, flu vaccines do reduce the chances of getting the flu.
Can you get COVID-19 and the flu at the same time?
It is possible to get the flu and COVID-19 at the same time. There are little or no data available to determine how often such concurrent infections occur.
Unfortunately, to date (9/12/2020), all coronavirus vaccines are still undergoing testing and have not yet been approved for use by the FDA. However, no current information suggests that getting a flu shot would make any coronavirus vaccine unsafe to take. It is possible that taking the shots for both at the same time may reduce the immune response to one or both; ongoing studies seem likely to produce an answer.
Medically Reviewed on 9/17/2020
Demicheli, V., T. Jefferson, L.A. Al-Ansary, E. Ferroni, A. Rivetti, and C. Di Pietrantonj. "Vaccines for Preventing Influenza in Healthy Adults." Cochrane Database Syst Rev 13.3 March 2014: CD001269.
Grohskopf, L.A., L.Z. Sokolow, K.R. Broder, et al. "Prevention and Control of Seasonal Influenza with Vaccines: Recommendations of the Advisory Committee on Immunization Practices -- United States, 2018-19 Influenza Season." MMWR 67.3 Aug. 24, 2018: 1-20.
Lambert, L., and A. Fauci. "Influenza Vaccines for the Future." N Engl J Med 361.21 (2010): 2036-2044.
Monto, A.S., S.E. Ohmit, J.G. Petrie, et al. "Comparative Efficacy of Inactivated and Live Attenuated Influenza Vaccines." N Engl J Med 361 Sept. 24, 2009: 1260.
Nguyen, H. "Influenza." Medscape.com. Aug. 22, 2016. <http://emedicine.medscape.com/article/219557-overview>.
Perez-Padilla, R., D. de la Rosa-Zamboni, S.P. Ponce de Leon, et al. "Pneumonia and Respiratory Failure from Swine-Origin Influenza A (H1N1) in Mexico." N Engl J Med 361 Aug. 13, 2009: 680.
Switzerland. World Health Organization. "Cumulative number of confirmed human cases for avian influenza A(H5N1) reported to WHO, 2003-2014." Jan. 24, 2014. <http://www.who.int/influenza/human_animal_interface/EN_GIP_20140124CumulativeNumberH5N1cases.pdf?ua=1>.
Switzerland. World Health Organization. "Global Influenza Strategy 2019-2030." March 11, 2019. <https://www.who.int/influenza/en/>.
United States. Centers for Disease Control and Prevention. "Estimating Seasonal Influenza-Associated Deaths in the U.S." Dec. 9, 2016.
United States. Centers for Disease Control and Prevention. "FluView Interactive." Apr. 21, 2017. <https://www.cdc.gov/flu/weekly/fluviewinteractive.htm>.
United States. Centers for Disease Control and Prevention. "Influenza (Flu)." Nov. 10, 2016. <https://www.cdc.gov/flu/>.
United States. Centers for Disease Control and Prevention. "Summary: 'Prevention and Control of Seasonal Influenza with Vaccines: Recommendations of the Advisory Committee on Immunization Practices (ACIP) -- United States, 2020-21'." <https://www.cdc.gov/flu/professionals/acip/summary/summary-recommendations.htm>.
United States. Centers for Disease Control and Prevention. "Seasonal Influenza (Flu): Influenza Antiviral Medications: Summary for Clinicians." Sept. 4, 2014.
United States. Centers for Disease Control and Prevention. "Seasonal Influenza (Flu): Use of Antivirals." Sept. 1, 2011. <http://www.cdc.gov/flu/professionals/antivirals/antiviral-use-influenza.htm>.
United States. Centers for Disease Control and Prevention. "2011-2012 Trivalent Influenza Vaccine Data From the U.S. Vaccine Adverse Event Reporting System (VAERS)." <http://vaers.hhs.gov/resources/SeasonalFluSummary_2011-2012.pdf>.
United States. Centers for Disease Control and Prevention. "2009 H1N1 Flu (Swine Flu)." Oct. 12, 2009. <http://www.cdc.gov/H1N1FLU/>.
United States. Flu.gov. "H5N1 Avian Flu (H5N1 Bird Flu)." <http://www.flu.gov/about_the_flu/h5n1/>.