photograph of a Red-winged Blackbird by Rohan Kamath
It was Charles Darwin who originally proposed that the
so-called secondary sexual characteristics of male animals
-- such as the elaborate tails of peacocks, bright plumage
or expandable throat sacs in many birds, large racks in
moose, deep voices in men -- evolved because females
preferred to mate with individuals that had those features.
Sexual selection can be thought of as two special kinds of
natural selection, as described below. Natural selection
occurs when some individuals out-reproduce others, and those
that have more offspring differ genetically from those that have fewer.
In one kind of sexual selection, members of one sex create a reproductive differential among themselves by competing for opportunities to mate. The winners out-reproduce
the others, and natural
selection occurs if the characteristics that determine
winning are, at least in part, inherited. In the other kind
of sexual selection, members of one sex create a
reproductive differential in the other sex by preferring
some individuals as mates. If the ones they prefer are
genetically different from the ones they shun, then natural
selection is occurring.
In birds, the first form of sexual selection occurs when males compete for territories, as is obvious when those territories are on leks (traditional mating grounds). Males that manage to acquire the best territories on a lek (the dominant males) are known to get more chances to mate with females. In some species of grouse and other such birds, this form of sexual selection combines with the second form, because once males establish their positions on the lek the females then choose among them.
That second type of sexual selection, in which one sex chooses among potential mates, appears to be the most common type among birds. As evidence that such selection is widespread, consider the reversal of normal sexual differences in the ornamentation of some polyandrous birds. There, the male must choose among females, which, in turn, must be as alluring as possible. Consequently in polyandrous species the female is ordinarily more colorful -- it is her secondary sexual characteristics that are enhanced. This fooled even Audubon, who confused the sexes when labeling his paintings of phalaropes. Female phalaropes compete for the plain-colored males, and the latter incubate the eggs and tend the young.
There is evidence that female birds of some species (e.g., Marsh Wrens, Red-winged Blackbirds) tend to choose as mates those males holding the most desirable territories. In contrast, there is surprisingly little evidence that females preferentially select males with different degrees of ornamentation. One of the most interesting studies involved Long-tailed Widowbirds living in a grassland on a plateau in Kenya. Males of this polygynous six-inch weaver (a distant relative of the House Sparrow) are black with red and buff on their shoulders and have tails about sixteen inches long. The tails are prominently exhibited as the male flies slowly in aerial display over his territory. This can be seen from more than half a mile away. The females, in contrast, have short tails and are inconspicuous.
Nine matched foursomes of territorial widowbird males were captured and randomly given the following treatments. One of each set had his tail cut about six inches from the base, and the feathers removed were then glued to the corresponding feathers of another male, thus extending that bird's tail by some ten inches. A small piece of each feather was glued back on the tail of the donor, so that the male whose tail was shortened was subjected to the same series of operations, including gluing, as the male whose tail was lengthened. A third male had his tail cut, but the feathers were then glued back so that the tail was not noticeably shortened. The fourth bird was only banded. Thus the last two birds served as experimental controls whose appearance had not been changed, but which had been subjected to capture, handling, and (in one) cutting and gluing. To test whether the manipulations had affected the behavior of the males, numbers of display flights and territorial encounters were counted for periods both before and after capture and release. No significant differences in rates of flight or encounter were found.
The mating success of the males was measured by counting the number of nests containing eggs or young in each male's territory. Before the start of the experiment the males showed no significant differences in mating success. But after the large differences in tail length were artificially created, great differentials appeared in the number of new active nests in each territory. The males whose tails were lengthened acquired the most new mates (as indicated by new nests), outnumbering those of both of the controls and the males whose tails were shortened. The latter had the smallest number of new active nests. The females, therefore, preferred to mate with the males having the longest tails.
The widowbird study required considerable manipulation of birds in a natural environment that was especially favorable for making observations. Evidence for female choice of mates has also been accumulated without such intervention in the course of a 30-year study of Parasitic Jaegers (known in Great Britain as "Arctic Skuas") on Fair Isle off the northern tip of Scotland. The jaegers are "polymorphic" -- individuals of dark, light, and intermediate color phases occur in the same populations. Detailed studies by population biologist Peter O'Donald of Cambridge University and his colleagues indicate that females prefer to mate with males of the dark and intermediate phases, and as a result those males breed earlier than light-phase males. Earlier breeders tend to be more successful breeders, so the females' choices increase the fitness of the dark males. O'Donald concludes that the Fair Isle population remains polymorphic (rather than gradually becoming composed entirely of dark individuals) because light individuals are favored by selection farther north, and "light genes" are continuously brought into the population by southward migrants.
Further work, including some, we hope, on North American species, is required to determine the details of female choice in birds. The effort required will be considerable, and suitable systems may be difficult to find, but the results should cast important light on the evolutionary origin of many physical and behavioral avian characteristics.
We know remarkably little about the origins of sexual selection. Why, for example, do female widowbirds prefer long-tailed males? Possibly females choose such males because the ability to grow and display long tails reflects their overall genetic "quality" as mates -- and the females are thus choosing a superior father for their offspring. Or the choice may have no present adaptive basis, but merely be the result of an evolutionary sequence that started for another reason. For instance, perhaps the ancestors of Long-tailed Widowbirds once lived together with a population of near relatives whose males had slightly shorter tails. The somewhat longer tails of males of the "pre-Long-tailed" Widowbirds were the easiest way for females to recognize mates of their own species. Such a cue could have led to a preference for long tails that became integrated into the behavioral responses of females. Although we are inclined to think the former scenario is correct, the data in hand do not eliminate the second possibility.
Copyright © 1988 by Paul R. Ehrlich, David S. Dobkin, and Darryl Wheye.
First-grade - Math concepts
Below is a list of all worksheets available under this concept. Worksheets are organized by concept within the subject.
Click on a concept to see the list of all available worksheets.
- Holiday Math: Valentine's Day Addition
If you want to get your child in the Valentine's Day mood and improve her addition and mental math ability, this pretty printable is sure to do the trick.
- Post Office Addition
Practice math with the postman! Word problems are a great way to help her understand a basic math concept from a different angle.
- Count 'Em Up: Cookie Addition
Sweeten up addition facts with cute cookie images. Let your kid count up the delectable cookies to boost her addition know-how!
- Fishin' for Simple Addition
This math worksheet is a great catch! Help your child practice his simple addition and counting.
- Addition Practice: A Helping Hand
In this addition practice worksheet, your 1st grader will solve addition problems with sums up to 9 using pictures of hands to help her count.
- Basketball Addition Facts #2
Here's a supplemental math activity that beats boring textbook math. Your child will work on his addition facts, practicing to become an addition all star!
- Addition Practice Sheet
Add a little fun to basic addition practice! Mini mathematicians can review addition facts as they figure out the hidden phrase at the bottom of the page.
- Valentine's Day Math Facts #6
Do some mental math with your young valentine in this combination math and color-by-number worksheet. Kids will solve simple addition and subtraction problems.
- Mystery Number
Find the mystery number in this intro-to-algebra worksheet that practices place value.
- Simple Word Problems
Create mathematical fruit salad as you add and subtract different types of fruit with this worksheet full of beginners' word problems.
- Boo! Trick or Treat Addition
Sneak some addition practice into Halloween night with this festively spooky math worksheet.
- Library Addition: Adding Book Stacks
This fun first grade math worksheet uses stacks of library books to help kids practice single-digit vertical addition.
- Count 'Em Up: Bell Pepper Addition
Boost your first grader's confidence with addition and counting with this veggie-filled single-digit addition worksheet.
- Count 'Em Up: Gummy Bear Addition
What kid doesn't love gummy bears? Sweeten up math practice with a little gummy bear addition!
- Math-Go-Round: Addition (Expert)
A board game perfect for sharpening your 1st-grader's addition skills.
- Find the Sum
Give your child practice with his math skills with this first grade worksheet, which is all about addition.
- Easter Word Problems #1
Get ready for Easter! These colorful word problems will get your child in the mood for this spring holiday as he practices basic math.
- The Make Ten Game
Make making ten a game with this extra-easy addition card game that comes with all the supplies you need.
- Math Search Puzzles
Math search puzzles are a fun and clever way of helping kids review math facts. Print 'em out and see how many your kid can find!
- Simple Addition Watermelons
Simple but playful, this single-digit addition worksheet makes for great math practice for your first grader.
- Count the Dots: Single-Digit Addition 30
Combining visual with number addition, this first grade math worksheet helps children understand the concept of addition.
- Hanukkah Math
This worksheet shares Hanukkah themed math problems with numbers under 10.
- Toy Addition
Introduce your little math whiz to word problems with this playful addition worksheet.
- Addition Games: Math Salad 1
Give your first grader some at-home math practice by asking him to solve this page full of one-digit addition problems.
NASA Tests Shape-Shifting Robot Pyramid for Nanotech Swarms
Like new and protective parents, engineers watched as the TETwalker robot successfully traveled across the floor at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Robots of this type will eventually be miniaturized and joined together to form "autonomous nanotechnology swarms" (ANTS) that alter their shape to flow over rocky terrain or to create useful structures like communications antennae and solar sails. This technology has the potential to directly support NASA's Vision for Space Exploration.
"This prototype is the first step toward developing a revolutionary type of robot spacecraft with major advantages over current designs," said Dr. Steven Curtis, Principal Investigator for the ANTS project, a collaboration between Goddard and NASA's Langley Research Center in Hampton, Va. Using advanced animation tools, Langley is developing rover operational scenarios for the ANTS project.
The robot is called "TETwalker" for tetrahedral walker, because it resembles a tetrahedron (a pyramid with 3 sides and a base). In the prototype, electric motors are located at the corners of the pyramid called nodes. The nodes are connected to struts which form the sides of the pyramid. The struts telescope like the legs of a camera tripod, and the motors expand and retract the struts. This allows the pyramid to move: changing the length of its sides alters the pyramid's center of gravity, causing it to topple over. The nodes also pivot, giving the robot great flexibility.
In January 2005, the prototype was shipped to McMurdo station in Antarctica to test it under harsh conditions more like those on Mars. The test indicated some modifications will increase its performance; for example, placing the motors in the middle of the struts rather than at the nodes will simplify the design of the nodes and increase their reliability.
The team anticipates TETwalkers can be made much smaller by replacing their motors with Micro- and Nano-Electro-Mechanical Systems. Replacement of the struts with metal tape or carbon nanotubes will not only reduce the size of the robots, it will also greatly increase the number that can be packed into a rocket because tape and nanotube struts are fully retractable, allowing the pyramid to shrink to the point where all its nodes touch.
These miniature TETwalkers, when joined together in "swarms," will have great advantages over current systems. The swarm has abundant flexibility so it can change its shape to accomplish highly diverse goals. For example, while traveling through a planet's atmosphere, the swarm might flatten itself to form an aerodynamic shield. Upon landing, it can shift its shape to form a snake-like swarm and slither away over difficult terrain. If it finds something interesting, it can grow an antenna and transmit data to Earth. Highly-collapsible material can also be strung between nodes for temperature control or to create a deployable solar sail.
Additionally, the nodes will be designed to disconnect and reconnect to different struts. If a meteoroid or rough landing punches a hole in the swarm, the system can heal itself by rejoining undamaged nodes. "Spacecraft are so expensive because failure in a single component can cripple the entire spacecraft, so extensive testing and redundant systems are employed to reduce the chance of catastrophic failure. We wouldn't live long if our bodies worked like this. Instead, when we get hurt, new cells replace the damaged ones. In a similar way, undamaged units in a swarm will join together, allowing it to tolerate extensive damage and still carry out its mission," said Curtis.
The pyramid shape is also fundamentally strong and stable. "If current robotic rovers topple over on a distant planet, they are doomed -- there is no way to send someone to get them back on their wheels again. However, TETwalkers move by toppling over. It's a very reliable way to get around," said Curtis.
Extensive research in artificial intelligence is underway to get the robots to move, navigate, and work together in swarms autonomously. The research includes development of a novel interface that integrates high-level decision-making with lower-level functions typically handled intuitively by living organisms, like walking and swarming behavior. All systems are being designed to adapt and evolve in response to the environment.
The research was initially funded by the Goddard Director's Discretionary Fund, which supports innovative projects with the potential for a high payoff even if the chance of success is low. Many past projects have blossomed into new instruments, flights, or research directions. McMurdo station is run by the National Science Foundation.
For images, movies and more information, refer to: http://www.nasa.gov/vision/universe/roboticexplorers/ants.html
Goddard Space Flight Center
H. Keith Henry
Langley Research Center
Scientists led by a Bristol University team have pioneered an experimental method for quantum computers to perform calculations using photons travelling inside a silicon chip.
The design is based on getting two photons to travel through the multi-path chip on what is called a 'quantum walk', the quantum equivalent of how in classical physics a particle might get from A to B via random points in between.
The mathematics around modelling what happens on this journey using one photon without 'decoherence' (interference that pulls the quantum particle back into a classical state) has been well explored, but the team, which included contributions from Japanese, Dutch and Israeli physicists, was able to model what happens for two photons for the first time.
The team hasn't explained in detail how they solved the formidable issues involved, but the implications for quantum computing theory are intriguing. A major line of development in quantum computing is using particle entanglement, an approach that forms the basis of many quantum bit (qubit) designs.
Quantum walks offer another path to create photonic qubits capable of performing useful calculations.
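To make the idea of a quantum walk concrete, here is a minimal sketch of the simplest textbook variant: a single-particle, discrete-time "coined" quantum walk on a line. This is a much simpler model than the two-photon continuous walk realised on the Bristol chip, and everything in it (the NumPy arrays, the Hadamard "coin", the step count) is an illustrative assumption rather than a detail of the experiment.

```python
import numpy as np

steps = 50
n = 2 * steps + 1                       # positions -steps .. +steps
psi = np.zeros((n, 2), dtype=complex)   # amplitude for each (position, coin state)
psi[steps, 0] = 1 / np.sqrt(2)          # start at the origin with a
psi[steps, 1] = 1j / np.sqrt(2)         # symmetric coin superposition

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin" operator

for _ in range(steps):
    psi = psi @ H.T                     # toss the quantum coin at every position
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]        # coin state 0 steps left
    shifted[1:, 1] = psi[:-1, 1]        # coin state 1 steps right
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)   # probability of finding the walker at each site
print(f"total probability: {prob.sum():.6f}")            # ~1.0 (evolution stays unitary)
print(f"most likely offset from origin: {prob.argmax() - steps}")
```

Unlike a classical random walk, whose position distribution is a bell curve centred on the start, the probabilities computed here pile up towards the edges -- the fast, ballistic spreading that makes quantum walks attractive for computation.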
"Each time we add a photon, the complexity of the problem we are able to solve increases exponentially, so if a one-photon quantum walk has 10 outcomes, a two-photon system can give 100 outcomes and a three-photon system 1000 solutions and so on," said Professor Jeremy O'Brien, director of the Centre for Quantum Photonics at Bristol University.
"Using a two-photon system, we can perform calculations that are exponentially more complex than before," says Prof O'Brien. "This is very much the beginning of a new field in quantum information science and will pave the way to quantum computers that will help us understand the most complex scientific problems."
It's not universally accepted that quantum computers could be used to model the same sorts of calculations output by ordinary computers. It could be that their greatest contribution will be to model aspects of physics which are themselves hard to understand because of their 'quantumness' such as superconductivity and important chemical reactions.
Currently, science has to make do with vast numbers of mechanistic generalisations regarding such phenomena, which underlie almost everything that science works with.
The next frontier will be sending three photons on quantum walks through the specially-designed chip.
"Now that we can directly realise and observe two-photon quantum walks, the move to a three-photon, or multi-photon, device is relatively straightforward, but the results will be just as exciting" said O'Brien.
Quantum computing 'breakthroughs' are claimed several times a year, and it's fair to say that as with any science sensitive to theoretical advances, these claims are not necessarily hyperbole. Despite this, quantum computing is still a technology stuck on the drawing board, where advances in the physical setup, mathematics and experimental models move knowledge forward in lots of tiny but important leaps.
The end result could be a generation of computing devices capable of performing not more calculations in a given time, but radically different ones that tell us different things.
Quantum principles are also used in the tense science of quantum cryptography, the distribution of encryption keys with absolute levels of certainty.
Quantum computing is the area of study focused on developing computer technology based on the principles of quantum theory, which explains the nature and behavior of energy and matter on the quantum (atomic and subatomic) level. Development of a quantum computer, if practical, would mark a leap forward in computing capability far greater than that from the abacus to a modern day supercomputer, with performance gains in the billion-fold realm and beyond. The quantum computer, following the laws of quantum physics, would gain enormous processing power through the ability to be in multiple states, and to perform tasks using all possible permutations simultaneously. Current centers of research in quantum computing include MIT, IBM, Oxford University, and the Los Alamos National Laboratory.
The essential elements of quantum computing originated with Paul Benioff, working at Argonne National Labs, in 1981. He theorized a classical computer operating with some quantum mechanical principles. But it is generally accepted that David Deutsch of Oxford University provided the critical impetus for quantum computing research. In 1984, he was at a computation theory conference and began to wonder about the possibility of designing a computer that was based exclusively on quantum rules, then published his breakthrough paper a few months later. With this, the race began to exploit his ideas. However, before we delve into what he started, it is beneficial to have a look at the background of the quantum world.
Quantum theory's development began in 1900 with a presentation by Max Planck to the German Physical Society, in which he introduced the idea that energy exists in individual units (which he called "quanta"), as does matter. Further developments by a number of scientists over the following thirty years led to the modern understanding of quantum theory.
The Essential Elements of Quantum Theory:
- Energy, like matter, consists of discrete units, rather than existing solely as a continuous wave.
- Elementary particles of both energy and matter, depending on the conditions, may behave like either particles or waves.
- The movement of elementary particles is inherently random, and, thus, unpredictable.
- The simultaneous measurement of two complementary values, such as the position and momentum of an elementary particle, is inescapably flawed; the more precisely one value is measured, the more flawed will be the measurement of the other value.
Further Developments of Quantum Theory
Niels Bohr proposed the Copenhagen interpretation of quantum theory, which asserts that a particle is whatever it is measured to be (for example, a wave or a particle) but that it cannot be assumed to have specific properties, or even to exist, until it is measured. In short, Bohr was saying that objective reality does not exist. This translates to a principle called superposition that claims that while we do not know what the state of any object is, it is actually in all possible states simultaneously, as long as we don't look to check.
To illustrate this theory, we can use the famous and somewhat cruel analogy of Schrodinger's Cat. First, we have a living cat and place it in a thick lead box. At this stage, there is no question that the cat is alive. We then throw in a vial of cyanide and seal the box. We do not know if the cat is alive or if it has broken the cyanide capsule and died. Since we do not know, the cat is both dead and alive, according to quantum law - in a superposition of states. It is only when we break open the box and see what condition the cat is in that the superposition is lost, and the cat must be either alive or dead.
The second interpretation of quantum theory is the multiverse or many-worlds theory. It holds that as soon as a potential exists for any object to be in any state, the universe of that object transmutes into a series of parallel universes equal to the number of possible states in which the object can exist, with each universe containing a unique single possible state of that object. Furthermore, there is a mechanism for interaction between these universes that somehow permits all states to be accessible in some way and for all possible states to be affected in some manner. Stephen Hawking and the late Richard Feynman are among the scientists who have expressed a preference for the many-worlds theory.
Whichever argument one chooses, the principle that, in some way, one particle can exist in numerous states opens up profound implications for computing.
A Comparison of Classical and Quantum Computing
Classical computing relies, at its ultimate level, on principles expressed by Boolean algebra, operating with a (usually) 7-mode logic gate principle, though it is possible to exist with only three modes (which are AND, NOT, and COPY). Data must be processed in an exclusive binary state at any point in time - that is, either 0 (off / false) or 1 (on / true). These values are binary digits, or bits. The millions of transistors and capacitors at the heart of computers can only be in one state at any point. While the time that each transistor or capacitor needs to be in either the 0 or 1 state before switching is now measurable in billionths of a second, there is still a limit as to how quickly these devices can be made to switch state. As we progress to smaller and faster circuits, we begin to reach the physical limits of materials and the threshold for classical laws of physics to apply. Beyond this, the quantum world takes over, which opens a potential as great as the challenges that are presented.
The Quantum computer, by contrast, can work with a two-mode logic gate: XOR and a mode we'll call QO1 (the ability to change 0 into a superposition of 0 and 1, a logic gate which cannot exist in classical computing). In a quantum computer, a number of elemental particles such as electrons or photons can be used (in practice, success has also been achieved with ions), with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a quantum bit, or qubit; the nature and behavior of these particles form the basis of quantum computing. The two most relevant aspects of quantum physics are the principles of superposition and entanglement.
Think of a qubit as an electron in a magnetic field. The electron's spin may be either in alignment with the field, which is known as a spin-up state, or opposite to the field, which is known as a spin-down state. Changing the electron's spin from one state to another is achieved by using a pulse of energy, such as from a laser - let's say that we use 1 unit of laser energy. But what if we only use half a unit of laser energy and completely isolate the particle from all external influences? According to quantum law, the particle then enters a superposition of states, in which it behaves as if it were in both states simultaneously. Each qubit utilized could take a superposition of both 0 and 1. Thus, the number of computations that a quantum computer could undertake is 2^n, where n is the number of qubits used. A quantum computer composed of 500 qubits would have the potential to do 2^500 calculations in a single step. This is an awesome number - 2^500 is far more than the number of atoms in the known universe (this is true parallel processing - classical computers today, even so-called parallel processors, still only truly do one thing at a time: there are just two or more of them doing it). But how will these particles interact with each other? They would do so via quantum entanglement.
Entanglement: Particles (such as photons, electrons, or qubits) that have interacted at some point retain a type of connection and can be entangled with each other in pairs, in a process known as correlation. Knowing the spin state of one entangled particle - up or down - allows one to know that the spin of its mate is in the opposite direction. Even more amazing is the knowledge that, due to the phenomenon of superposition, the measured particle has no single spin direction before being measured, but is simultaneously in both a spin-up and spin-down state. The spin state of the particle being measured is decided at the time of measurement and communicated to the correlated particle, which simultaneously assumes the opposite spin direction to that of the measured particle. This is a real phenomenon (Einstein called it "spooky action at a distance"), the mechanism of which cannot, as yet, be explained by any theory - it simply must be taken as given. Quantum entanglement allows qubits that are separated by incredible distances to interact with each other instantaneously (not limited to the speed of light). No matter how great the distance between the correlated particles, they will remain entangled as long as they are isolated.
Taken together, quantum superposition and entanglement create an enormously enhanced computing power. Where a 2-bit register in an ordinary computer can store only one of four binary configurations (00, 01, 10, or 11) at any given time, a 2-qubit register in a quantum computer can store all four numbers simultaneously, because each qubit represents two values. As more qubits are added, the capacity expands exponentially.
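To illustrate that counting argument, the sketch below builds the state vector of an n-qubit register with every qubit placed in an equal superposition; the vector's length, 2^n amplitudes, is exactly the number of configurations the register holds at once. This is a classical simulation written purely for illustration -- the function name and the use of NumPy are my own assumptions, not part of any particular quantum-computing toolkit.

```python
import numpy as np

def equal_superposition(n_qubits: int) -> np.ndarray:
    """State vector of n qubits, each in the state (|0> + |1>) / sqrt(2)."""
    single = np.array([1, 1]) / np.sqrt(2)   # one qubit in an equal superposition
    state = np.array([1.0])
    for _ in range(n_qubits):
        state = np.kron(state, single)       # tensor product grows the register
    return state

for n in (1, 2, 3, 10):
    state = equal_superposition(n)
    print(f"{n} qubit(s): {state.size} amplitudes, each with probability {abs(state[0])**2:.4f}")
# A 2-qubit register carries 4 amplitudes (for 00, 01, 10, and 11) at the same time,
# and the count doubles with every added qubit: 2**n.
```

Note that the simulation itself needs memory that grows as 2^n, which is exactly why classical machines cannot keep up and why a 500-qubit register is far beyond direct simulation.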
Perhaps even more intriguing than the sheer power of quantum computing is the ability that it offers to write programs in a completely new way. For example, a quantum computer could incorporate a programming sequence that would be along the lines of "take all the superpositions of all the prior computations" - something which is meaningless with a classical computer - which would permit extremely fast ways of solving certain mathematical problems, such as factorization of large numbers, one example of which we discuss below.
There have been two notable successes thus far with quantum programming. The first came in 1994, when Peter Shor (now at AT&T Labs) developed a quantum algorithm that could efficiently factorize large numbers. It centers on a system that uses number theory to estimate the periodicity of a large number sequence. The other major breakthrough came from Lov Grover of Bell Labs in 1996, with a very fast algorithm that is proven to be the fastest possible for searching through unstructured databases. The algorithm is so efficient that it requires, on average, only roughly √N searches (where N is the total number of elements) to find the desired result, as opposed to a search in classical computing, which on average needs N/2 searches.
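The gap between those two query counts grows quickly with the size of the database. The snippet below simply evaluates the figures quoted above -- roughly √N queries for Grover's algorithm versus about N/2 checks for a classical unstructured search -- and does not implement the algorithm itself.

```python
import math

for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n / 2          # average number of checks for an unstructured classical search
    grover = math.sqrt(n)      # order of queries needed by Grover's algorithm
    print(f"N = {n:>13,}: classical ~ {classical:>13,.0f} checks, Grover ~ {grover:>8,.0f} queries")
```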
The Problems - And Some Solutions
The above sounds promising, but there are tremendous obstacles still to be overcome. Some of the problems with quantum computing are as follows:
- Interference - During the computation phase of a quantum calculation, the slightest disturbance in a quantum system (say a stray photon or wave of EM radiation) causes the quantum computation to collapse, a process known as de-coherence. A quantum computer must be totally isolated from all external interference during the computation phase. Some success has been achieved with the use of qubits in intense magnetic fields, with the use of ions.
- Error correction - Because truly isolating a quantum system has proven so difficult, error correction systems for quantum computations have been developed. Qubits are not digital bits of data, thus they cannot use conventional (and very effective) error correction, such as the triple redundant method. Given the nature of quantum computing, error correction is ultra critical - even a single error in a calculation can cause the validity of the entire computation to collapse. There has been considerable progress in this area, with an error correction algorithm developed that utilizes 9 qubits (1 computational and 8 correctional). More recently, there was a breakthrough by IBM that makes do with a total of 5 qubits (1 computational and 4 correctional).
- Output observance - Closely related to the above two, retrieving output data after a quantum calculation is complete risks corrupting the data. In an example of a quantum computer with 500 qubits, we have a 1 in 2^500 chance of observing the right output if we simply measure the output. Thus, what is needed is a method to ensure that, as soon as all calculations are made and the act of observation takes place, the observed value will correspond to the correct answer. How can this be done? It has been achieved by Grover with his database search algorithm, which relies on the special "wave" shape of the probability curve inherent in quantum computers, and which ensures that, once all calculations are done, the act of measurement will see the quantum state decohere into the correct answer.
Even though there are many problems to overcome, the breakthroughs in the last 15 years, and especially in the last 3, have made some form of practical quantum computing no longer seem unfeasible, but there is much debate as to whether this is less than a decade away or a hundred years into the future. However, the potential that this technology offers is attracting tremendous interest from both the government and the private sector. Military applications include the ability to break encryption keys via brute-force searches, while civilian applications range from DNA modeling to complex material science analysis. It is this potential that is rapidly breaking down the barriers to this technology, but whether all barriers can be broken, and when, is very much an open question.
Centuries ago, during the time when pirates reigned at sea and explorers discovered unknown lands, a mysterious disease existed, one that caused a slow and painful death. Now, that same disease is making a comeback, but rather than surfacing in far-off lands, it has appeared in places closer to home.
Onset symptoms of scurvy include fatigue, nausea, and joint pain, but as the disease progresses, it can cause swollen gums, severe bruising, damaged hair, and bleeding into the joints and muscles. In children, the symptoms can affect the bones, causing stunted growth. In severe cases, scurvy can lead to death from complications such as internal hemorrhaging. Fortunately, scurvy is easy to treat: just increase your vitamin C intake.
Scurvy can be traced all the way back to 1550 BCE among the ancient Egyptians; however, it is most famous for the impact it had on 18th-century mariners. Extended periods of time at sea, sometimes with no end in sight, meant a shortage of fresh fruits and vegetables on the ship. The disease would inevitably destroy pirates and severely affect the British Royal Navy, whose sailors were more likely to die from the disease than by combat. In fact, scurvy is thought to be the most common cause of deaths at sea, surpassing deaths from fatal storms, shipwrecks, battles, and other diseases combined.
The disease has also taken a toll on various explorers, such as those part of Robert Falcon Scott’s 1901 Discovery expedition to Antarctica. Though Scott disagreed with the slaughter of penguins, he and his team ultimately resorted to consuming fresh seal and penguin meat to avoid the symptoms of scurvy.
Today, scurvy is found mainly in developing countries, where malnutrition is most common. Yet, scurvy has also been found in countries where people are likely to have more access to vitamin C-rich foods.
These occurrences have been more closely explored in the documentary Vitamania. Erich Churchill, a medical doctor who practices in Springfield, Massachusetts, and is featured in the film, explained that his team is responsible for the diagnosis of 20-30 cases of scurvy within the past six years.
“Many people who have difficulty affording food tend to go for food that is high-fat, high-calorie, and very filling,” Churchill said in the documentary. “If you have a limited food budget, those are the meals that will fill you up and will satisfy you more than eating fruits and vegetables.”
Consequently, those of lower socioeconomic status in wealthy countries are deeply affected and at high risk of suffering from this disease.
“Scurvy stands out in our minds as something that is so basic and easy to avoid, and yet these people have ended up falling victim to an illness that simply should not exist in a developed country,” said Churchill.
While eating vegetables and fruits can greatly lower the risk of scurvy, it is important to note that the way we cook them can also have an important effect, which many overlook. Overcooking vegetables can destroy the vital vitamins within them. Great sources of vitamin C include tomatoes, oranges, peppers, guavas, strawberries, and coriander.
Adding, subtracting, multiplying, and dividing integers
Simplifying using order of operations
Writing variable expressions
Evaluating variable expressions
Writing equations to represent word problems
Completing function tables
Solving 1-step equations by adding/subtracting
Solving 1-step equations by multiplying/dividing
Solving 2-step equations (includes integers)
Writing linear functions
Graphing linear functions
Writing multiplication expressions using exponents
Using scientific notation
+ See More
Free Sample Pages Available
Our math worksheets introduce a puzzle aspect to math, giving students immediate feedback as to whether or not they are solving problems correctly. If the answer to the riddle isn't spelled correctly, the student knows which problems he's made an error on.
Fun Puzzle Aspect
Problem Solving Motivation
Each math worksheet contains a riddle that the student solves by completing all the problems on the worksheet. This keeps kids motivated to complete each problem so that they can find the answer to the riddle.
Common Core Aligned
All our math worksheet packs are designed with Common Core in mind. That way you don’t have to worry about whether your math curriculum is aligned or not when you incorporate ClassCrown Riddle-Me-Worksheets in your lesson plans.
High Quality Design
Each page of our math worksheets has been produced in high resolution at 144 dpi and designed in full, vibrant color for maximum quality. They look stunning whether you are printing in color or black and white.
High Resolution (144 dpi)
Stunning Color & Clarity
Fun math worksheets with riddles to keep kids motivated.
You Feed Me, I Feed You: Symbiosis
Some organisms in the ocean have developed a special relationship with each other that helps ensure the survival of both organisms. In many cases, the pair includes a microbe and a host animal. The microbes provide their host animal with food and the host provides the microbes with either some of the things they need to survive or a home—often both.
This kind of relationship, in which both organisms obtain some benefit from the other, is known as mutualism. It is one kind of symbiosis, which is a close physiological relationship between two different kinds of organisms for the majority of their lifecycle.
You have probably heard that “symbiosis” means that both partners benefit. That’s how the word is used in everyday speech. Some biologists use it that way, too, but technically the word refers to a variety of close relationships, not just those in which both partners benefit. In some symbiotic relationships, one of the organisms benefits but the other is harmed. That is called parasitism. An example of this is a tapeworm in a human. The tapeworm gains nourishment, while the human loses nutrients. In other symbiotic relationships, one of the organisms benefits and the other is neither helped nor harmed. That is called commensalism. An example of this would be an orchid growing on a tree. The orchid gets better access to light, while the tree is not hurt or helped by the orchid’s presence.
Symbiosis can occur between any two kinds of organisms, such as two species of animals, an animal and microbes, a plant and a fungus, or a single-celled organism such as a protist and bacteria. In some cases, it’s easy to see how each partner is affected by the relationship. In other cases, it is very difficult.
Mutualistic symbiosis in the ocean
A well-known example of mutualism occurs in shallow, sunlit waters around the world, where corals live a symbiotic life with one-celled algae called zooxanthellae (zoh-zan-THEL-y). The algae live inside the coral polyp and perform photosynthesis, converting energy from the sun and carbon dioxide into organic matter and chemical energy. In the process, they give off oxygen and other nutrients that the coral needs to live. The coral polyp provides its zooxanthellae with carbon dioxide, shelter, and some nutrients.
Mutualistic relationships also occur in the deep ocean, between microbes and a wide range of animals including corals, tubeworms, and mussels. Many of these are found at cold seeps or at hydrothermal vents. Sunlight cannot penetrate into the deep ocean, so the organisms that live there cannot do photosynthesis. They must rely on a different source of energy.
At cold seeps and hydrothermal vents, there are many chemicals that microbes can use to create food and energy. Hydrogen sulfide (the stuff that smells like rotten eggs) and methane are two of the most common of these. Both are toxic to animals, but certain bacteria are able to use these compounds to make organic matter through a process called chemosynthesis.
Where hydrogen sulfide is present in the seafloor around cold seeps, tubeworms are often found growing in clusters of thousands of individuals. These unusual animals do not have a mouth, stomach, or gut. Instead, they have a large organ called a trophosome that contains billions of chemosynthetic bacteria. In some cases, the trophosome accounts for more than half the weight of the tubeworm.
The tubeworms collect hydrogen sulfide from the sediment with a long “root” and oxygen from the water with their plumes, and transport them to their trophosome. The bacteria then use these materials plus carbon dioxide they take from the water to produce organic molecules. This provides 100% of the nutrition the tubeworm needs. A similar symbiotic relationship is found in clams and mussels that have chemosynthetic bacteria living in association with their gills.
A variety of other organisms found in cold seep communities also use tubeworms, mussels, and hard and soft corals as sources of food or shelter or both. These animals are known as associates. They include snails, eels, sea stars, crabs, lobsters, isopods, sea cucumbers, and fishes. Some of these might be symbiotic interactions, but the specific relationships between these organisms and the other animals living around cold seeps have not been well studied.
Mutualistic symbiosis also occurs between protists and bacteria or archaea, especially those that live in extreme environments.
Protists are single-celled eukaryotes such as diatoms, foraminifera, and ciliates. Eukaryotic cells have a nucleus and other organelles surrounded by a membrane. Plants, fungi, and animals are also eukaryotes. Bacteria and archaea are prokaryotes, which are single-celled organisms that do not have a nucleus or other organelles surrounded by a membrane.
Many species of protists thrive in the Deep Hypersaline Anoxic Basins (DHABs) of the eastern Mediterranean Sea. DHABs are among the most extreme environments on Earth. Organisms living there face complete darkness, up to ten times the salinity of normal seawater, complete lack of oxygen, very high pressure, and in some cases, high levels of sulfide or methane, both of which are toxic for most eukaryotes, including protists.
All of the protists that have been collected from DHABs have bacteria closely associated with them. Some are completely covered with bacteria. Others have bacteria inside their single-celled body, enclosed in a membrane. Some have bacteria both inside and outside, and many have more than one kind of bacteria.
Because each kind of protist appears to host specific kinds of bacteria, and the protists are never found without bacteria, scientists think the protists and bacteria are symbiotic (mutualistic) partners. One possible scenario is that the bacteria could detoxify sulfide for the protist, and the protist shelters the bacteria and moves to keep the bacteria in a place where they have access to the chemical nutrients they need. However, figuring out exactly what their relationship is, and what each partner gains from the relationship, has proven to be very difficult. The protists rarely survive being brought to the surface, and few of them can be kept alive in a lab long enough to study how they live. Microbiologists are hard at work to solve the mystery.
AIDS (Acquired Immune Deficiency Syndrome) is caused by the human immunodeficiency virus (HIV). The virus attacks and destroys white blood cells, which are the body's first line of defense, leaving the individual vulnerable to a variety of illnesses and cancers.
A person who tests positive (i.e., seropositive) for HIV can transmit the virus, but can also be symptom-free. It can take up to 10 years or more between the time of infection by HIV and the appearance of full-blown AIDS.
HIV develops slowly, first attacking the body's immune system as it progresses through various stages that are often symptom-free. In the weeks following exposure to the virus, some individuals may experience flu-like symptoms. These symptoms usually disappear on their own within a week to a month. Some of those symptoms include:
- muscle and/or joint pain;
- sore throat;
- skin rash.
As HIV continues to spread through the body, there are no symptoms. Some individuals may be symptom-free for several years during this stage. In fact, treatments are aimed at extending this stage for as long as possible.
After several years, the body begins to show signs of weakening:
- persistent diarrhea;
- swollen nodes in the neck, armpits, and/or groin;
- shortness of breath;
- extreme and persisting fatigue;
- skin infections;
- unexplained and significant weight loss;
- nocturnal sweats.
Other infections, such as herpes and various fungal infections, can also occur as the immune system gradually becomes weaker. That is when HIV reaches the AIDS stage. In time, the immune system becomes so weak that the person may succumb to pneumonia, say, or cancer (or both). Thus, an HIV infection is not lethal by itself. Rather, it is the serious illnesses that the individual catches that are life threatening.
Who can get HIV?
HIV does not discriminate! Men, women, children, and infants can all catch it and thus become vulnerable to the many infections and illnesses that make up AIDS.
HIV can be spread by blood, semen, vaginal secretions, and breast milk. It is mainly transmitted through unprotected sexual contact with an infected partner during anal or vaginal penetration or during oral sex. HIV can also be transmitted by contact with contaminated blood, such as occurs during the sharing of contaminated needles or syringes. In addition, infected women can transmit the virus to their infants during pregnancy, delivery, or breast-feeding.
In Canada, all blood donations have been subjected to testing since 1985. There is still, unfortunately, a tiny risk of getting the infection through a blood transfusion because HIV can only be detected in the blood 3 months after the infection has occurred. On the other hand, people cannot get HIV from donating blood; all material used in the blood donor process is sterile and disposable (used a single time).
HIV is a very fragile virus; it cannot exist long outside the human body and it cannot survive in the air, in water, in soil, or on objects. In addition, HIV is easily destroyed by common household disinfectants. It does not spread during social contacts and daily activities; you cannot catch HIV by touching infected people, working with them, or eating foods they've handled. And mosquitoes cannot spread the virus either.
Your risk of being infected with HIV depends not on who you are but rather on what you do. High-risk behaviours include the sharing of contaminated needles and syringes, and having anal, vaginal, or oral relations with a potentially infected partner. Of course, your risk is even higher if you engage frequently in such behaviours.
Avoid sexual contact with occasional partners. Unless you are absolutely sure that neither you nor your partner is infected with HIV, always use a lubricated latex or polyurethane condom with spermicide.
If you travel to a non-industrialized country, refuse all blood transfusions, unless it's a life-or-death situation. If you need an injection, demand that it be made with your own needles and syringe or that the material they use has never been used before (i.e. straight out of the package).
If you think that you have been in contact with HIV, seek medical advice immediately and be tested (ELISA test). Early detection is the key to controlling the effects of the virus and preserving the immune system in good health as long as possible. Delaying the evolution of the disease and the appearance of opportunistic infections (serious infections that take advantage of a weakened immune system) will improve quality of life.
In Canada, there are more than 55,000 people living with HIV. Despite research efforts, no cure or vaccine has been developed yet. Treatments are costly and constantly changing because the virus itself mutates and creates new strains. Prevention remains essential.
For more information or for support :
Canadian AIDS Society
Canadian AIDS Treatment Information Exchange
Right now, NASA is celebrating a Year of Education on the International Space Station (ISS), and two of the astronauts are former teachers. As part of this celebration, NASA has made available several STEM activities related to the ISS and its role in helping us reach Mars.
On their website at nasa.gov/stemonstation, you will find lesson plans, videos, and news, in addition to a wealth of other information for both teachers and students. Students can watch a livestream of Earth as viewed from space, learn about what research is conducted aboard the ISS, and watch the astronauts do experiments. They can also learn about the station itself and what life in space is like. They can even connect with the astronauts for live question-and-answer sessions. How cool is that?
Teachers will find Learning Launchers containing lesson plans, videos, and other resources on a variety of topics related to the ISS. Some of the titles include Robotics, Space Station and the Economy, The Brain in Space, and Under the Microscope (microbes), but there are many more to choose from if none of these appeal to you.
Edexcel GCSE 9-1 Past Papers across Maths, Biology, Physics, Chemistry, English Language and many more subjects.
Pearson Edexcel is a private examination board. It offers school certifications globally based on the British curriculum. Edexcel includes GCSE, A level, NVQ, functional skills and international GCSEs. The board conducts GCSE examinations each year. Its questions are regarded as among the toughest.
The difference, however, lies in how students approach the papers. Past papers give a glimpse of the question pattern. They also allow students to learn the marking scheme. With continuous practice, students can also master time management. The official website of Pearson Edexcel grants easy access to past papers. Students can download them and solve them to gain momentum.
About the board
Edexcel was established in 1996, and Pearson plc has owned the board since 2005. Edexcel is a universally recognized board that allows students to study subjects of their choice. It offers 40 subjects to school-leaving students (age group 14-16). The board provides world-class teaching aids both online and offline.
Edexcel exams need comprehensive preparation. Students should work thoroughly through past papers. Initially, they should not time themselves while solving. Students should pay attention to the parts that create difficulty; this will help them identify which topics are easy and which are tough for them, so they can invest more time in the tough parts. Regular practice with past papers will also help them manage questions in the exam hall.
Type of assessments
The whole assessment is divided into 5 parts. Students are required to familiarize themselves with each pattern. Solving past year question papers helps here too. Students need to practice writing answers according to the question asked. Here are the 5 types of questions asked:
Multiple choice questions (MCQs): Four options are offered for each question. Students need to select the correct response. Some questions may carry five options and they need to select two correct responses. Guessing is not going to help here. So paying complete attention to the whole chapter is important. This part is meant to assess the overall understanding of the students.
Short open response: Students need to answer the question in one word or one sentence. It usually carries 1-3 marks. Practice to write a crisp and accurate answer is required here.
Open response: This type of question requires a short explanatory answer. It carries four marks. Students should stay away from faffing in this part. Try to include maximum relevant points in minimum words. Use scientific terms and include diagrams.
Calculation: The marks weightage for this section varies in every subject. Students should solve it stepwise. They should mention the formulae and theorems. Even if their final answer is wrong, they may get marks if the steps are correct.
Extended open response: This section also has varied marks allocation. It is meant to assess the argumentative ability of the students. They should practice writing a well-informed answer with proper facts.
Is the exam hard?
Edexcel is notorious for being tougher than AQA. However, students should note that the passing mark in Edexcel is 50%, while it is 70% in AQA. The exam requires students to write crisp, informed answers, leaving little room for faffing. Despite its reputation, the exam is manageable with solid preparation, and practicing past papers is always helpful.
Learning about surface ocean waves is a little like learning the guitar: the basics are relatively easy and in some cases all you need to know. But sometimes you’ve got to probe a little deeper, and that’s where it gets tricky. So consider this post the “Smoke on the Water” of waves knowledge. In the case of surface waves, the basics are what is called “linear wave theory.” We rely on a few assumptions, namely that the wave heights are small compared to their wavelength and depth and that the water is incompressible (not easily squeezed), inviscid (not very viscous), and irrotational (no spinning of water blobs). It turns out that for most purposes, these conditions hold and linear theory is an accurate estimate of how the waves will move.
But I’m getting ahead of myself. If you’ve never thought much about surface waves you might be asking, “what causes the waves?” or “what’s actually happening in a wave?”. Surface waves are caused by gravity acting on the interface between the heavy water of the ocean and light air of the atmosphere. That’s why they are sometimes called “surface gravity waves.” It’s sort of a “what goes up must come down” situation: an upward disturbance in the water surface wants to return to its original state, but it overcompensates and so now it’s below the rest of the water, kind of like a person on a trampoline. Waves are described by their wavelength (the distance between two crests; its inverse is called wavenumber), period (the time it takes for two crests to pass a fixed point; its inverse is called frequency), or amplitude (the height of the waves from the still water level to the crest; the crest-to-trough height is called the wave height). Another important description is the phase speed (literally the wavelength divided by the period, or the speed of a crest). Here’s a handy schematic:
I need to return to linear wave theory to answer the question of what’s happening in a surface wave. Without going too far down the rabbit hole, it turns out that using the assumptions I described in the first paragraph you can simplify the complicated equations that govern water motion down to a form that can be solved by hand. There are two important results: wave orbital motion and the dispersion relation. Orbital motion means that in waves, water actually moves in closed loops (“orbits”). If you’ve done much wading in the ocean you’ve felt the horizontal “pull” of waves as a crest approaches. In deep water, where we’ll be making measurements, this orbit is nearly circular, meaning there’s as much back-and-forth motion as up-and-down. This has very important ramifications for one of our instruments that I’ll talk about sometime later.
The dispersion relation is a little trickier to describe without math, but the punchline is this: if you know one of those wave characteristics I mentioned above (period, wavelength, or phase speed), you know all of them. You might be aware of this without even really knowing it. If you drop a stone in a lake, you have a basic idea of what speed the waves are going to propagate at. That’s because it’s all controlled by gravity. So if you had a tank of water on the moon, for example, things would be look much different. It’s called the “dispersion” relation because different length waves move at different speeds. So a group of waves will “disperse” or spread out, with long waves moving faster than short ones. This is different than most waves you know about (light, sound, waves on a string), which move at the same speed no matter the wavelength.
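As a concrete example of the dispersion relation at work, here is a small sketch using the standard deep-water result of linear wave theory, omega^2 = g*k (equivalently, phase speed c = g*T / 2*pi). The wave periods, function name, and choice of Python are my own illustrative assumptions, not something from this post.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def deep_water_wave(period_s: float) -> dict:
    """Wavelength and phase speed of a deep-water wave with the given period.

    Uses the linear deep-water dispersion relation omega^2 = g * k,
    where omega = 2*pi/T and k = 2*pi/L.
    """
    omega = 2 * math.pi / period_s
    k = omega ** 2 / G                  # wavenumber, rad/m
    wavelength = 2 * math.pi / k        # L = g * T^2 / (2*pi)
    phase_speed = omega / k             # c = g * T / (2*pi): longer waves travel faster
    return {"wavelength_m": wavelength, "phase_speed_m_per_s": phase_speed}

for T in (4, 8, 16):                    # typical wind-sea to swell periods, seconds
    w = deep_water_wave(T)
    print(f"T = {T:2d} s -> L = {w['wavelength_m']:6.1f} m, c = {w['phase_speed_m_per_s']:5.2f} m/s")
```

Doubling the period from 8 s to 16 s doubles the phase speed -- exactly the dispersion described above, with long swell outrunning shorter wind waves.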
Alright let’s wrap this up for now. There’s only a little bit more you’ll need to know before I can talk about more of the fun stuff we’ll be doing on the cruise, but I’ll save that for a later date. If you made it this far, I’m very proud of you. |
Life pivots completely on energy. We cannot talk about living beings without considering the exchanges of energy that they make with their environment. If these exchanges did not take place, or if a source of energy like our Sun did not exist, the living beings on Earth would not exist either.
Consequently, biologists must thoroughly understand the ways by which our planet acquires energy, the amount of energy it receives in a given period of time, the annual balance of that energy in the different Earth subsystems, and how living beings can take advantage of such energy.
The following article is a summary of the amount of energy that our planet receives from the Sun (see Figure 3), its magnitude, and how it is distributed in the terrestrial system.
I have included the amount of incident solar energy upon each planet and the planetoid Pluto so that you have an idea of our planet's privileged situation in the neighborhood of the Solar System.
AMOUNT OF INCIDENT SOLAR RADIATION UPON EACH PLANET
GPL = QSun / (4π × POR^2)
GPL is the amount of incident solar radiation upon the planet.
QSun is the total amount of energy emitted by the Sun expressed in Watts (3.94832e+26 W).
4π = 12.56637061
POR is the Planet Orbital Radius, expressed in meters.
VALUES OF THE TOTAL INCIDENT ENERGY UPON EACH PLANET:
Mercury = 9449.43 W/m^2
Venus = 2687.6 W/m^2
Earth = 1402.8 W/m^2
Mars = 612.55 W/m^2
Jupiter = 52.34 W/m^2
Saturn = 17.2 W/m^2
Uranus = 3.89 W/m^2
Neptune = 1.55 W/m^2
Pluto (Planetoid) = 0.8998 W/m^2
Average of incident solar energy upon Earth during Aphelion = 1359.02 W/m^2
Average of incident solar energy upon Earth during Perihelion = 1452.77 W/m^2
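As a rough check on the formula and the figures above, the calculation can be run in a few lines of Python. The orbital radii below are approximate mean values I have filled in for illustration, so the results will differ slightly from the table depending on the exact radii used:

```python
import math

Q_SUN = 3.94832e26  # total solar output used above, in watts

# Approximate mean orbital radii in meters (illustrative values).
ORBITAL_RADIUS_M = {
    "Mercury": 5.79e10,
    "Venus":   1.082e11,
    "Earth":   1.496e11,
    "Mars":    2.279e11,
    "Jupiter": 7.785e11,
    "Saturn":  1.434e12,
    "Uranus":  2.877e12,
    "Neptune": 4.503e12,
    "Pluto":   5.906e12,
}

for planet, por in ORBITAL_RADIUS_M.items():
    gpl = Q_SUN / (4 * math.pi * por ** 2)  # GPL = QSun / (4*pi*POR^2)
    print(f"{planet:8s} {gpl:9.2f} W/m^2")
```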
THEORETICAL EARTH’S ANNUAL ENERGY BUDGET:
1365 W/m^2 is the annual average of total solar radiation measured by satellites at the Top Of the Atmosphere (TOA). (Please see Figure 1)
682.64 W/m^2 are thermal radiation impinging on TOA.
Ratm = 136.528 W/m^2 (20%) is reflected by the atmosphere, especially by clouds and dust.
136.528 W/m^2 (20%) are absorbed directly by the atmosphere, in particular by ozone, clouds and dust.
Satm = 40.958 W/m^2 (6%) is scattered by molecular oxygen and nitrogen in the upper atmosphere.
After mitigation by the atmosphere, the thermal radiation impinging on the surface is:
Insolation = 682.64 W/m^2 - 136.528 W/m^2 - 136.528 W/m^2 - 40.958 W/m^2 = 368.626 W/m^2.
From 368.626 W/m^2, the surface reflects 25.8 W/m^2 (7%).
342.8 W/m^2 are received on Earth's surface, which is formed by the cryosphere (snow and ice), the lithosphere (land), the hydrosphere (oceans) and the biosphere (living beings).
From the solar thermal radiation received, the surface (land and oceans) absorbs 239.96 W/m^2.
A - Total of solar thermal radiation lost directly into space before it hits on the surface:
Ratm + Satm = 177.5 W/m^2.
B - 136.528 W/m^2 (19.9%) are absorbed directly by the atmosphere.
C - Total of solar thermal radiation impinging on the surface after mitigation = 368.626 W/m^2.
Total ThR impinging on TOA = A + B + C = 177.486 W/m^2 + 136.528 W/m^2 + 368.626 W/m^2 = 682.64 W/m^2
Total received on the surface = 343 W/m^2.
Total absorbed by the surface = 240 W/m^2.
BALANCE OF ENERGY ABSORBED AND EMITTED BY THE SURFACE:
Total solar thermal radiation absorbed by the surface = 240 W/m^2.
For each 100 W/m^2 absorbed, the surface emits:
Sensible Heat Flux = 12%.
Latent Heat of Evaporation = 48%
Directly lost to outer space = 12%
Transferred to the atmosphere by radiation = 14%
Dissipated as dynamic energy in sinks: 14%.
From 240 W/m^2 of thermal energy absorbed, the surface emits:
D - Sensible Heat Flux (Convective Heat Transfer): 28.8 W/m^2.
E - Emitted by the surface to the atmosphere as latent heat of evaporation: 115.2 W/m^2.
F - Emitted by the surface directly to the outer space: 28.8 W/m^2.
G - Transferred to the atmosphere by radiation: 33.6 W/m^2.
H – Dissipated as dynamic energy in sinks: 33.6 W/m^2.
The remaining 103 W/m^2 (the difference between the ~343 W/m^2 received and the 240 W/m^2 absorbed by the surface) are distributed as follows:
I - 98.7% is transferred to subsurface materials (by conduction and convection) and transformed into unusable internal and potential energy of the atmosphere, hydrosphere, biosphere, cryosphere, and lithosphere: 101.661 W/m^2. (Peixoto and Oort, 1992)
J - 0.8 % is absorbed by autotrophic stratum (biosphere): 0.824 W/m^2
K - 0.5% is transferred to currents and waves (thermal kinetic energy): 0.515 W/m^2
Consequently, net thermal radiation emitted by the surface is:
ThRs = D + E + F + G + H + I + J + K = 28.8 + 115.2 + 28.8 + 33.6 + 33.6 + 101.661 + 0.824 + 0.515 = 343 W/m^2
This energy is lost to outer space; as a rule, it is emitted during nighttime, or it is compensated by the gravity field.
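For readers who want to trace the arithmetic, here is a short Python sketch that simply restates the numbers above (a bookkeeping check, not an independent radiation model):

```python
# All values in W/m^2, taken directly from the budget above.
thr_toa = 682.64                        # thermal radiation impinging on TOA

r_atm = 0.20 * thr_toa                  # reflected by the atmosphere (~136.53)
a_atm = 0.20 * thr_toa                  # absorbed by the atmosphere  (~136.53)
s_atm = 0.06 * thr_toa                  # scattered by O2 and N2      (~40.96)

insolation = thr_toa - r_atm - a_atm - s_atm   # ~368.63 reaches the surface
received = insolation * (1 - 0.07)             # ~342.8 after 7% surface reflection
absorbed = 240.0                               # absorbed by land and oceans
remainder = received - absorbed                # ~103, split into items I, J and K

emission = {
    "sensible heat (D)":          0.12 * absorbed,   # 28.8
    "latent heat (E)":            0.48 * absorbed,   # 115.2
    "lost to space (F)":          0.12 * absorbed,   # 28.8
    "radiated to atmosphere (G)": 0.14 * absorbed,   # 33.6
    "dynamic energy (H)":         0.14 * absorbed,   # 33.6
}

total = sum(emission.values()) + remainder
print(f"insolation {insolation:.2f}, received {received:.2f}, remainder {remainder:.2f}")
print(f"emitted + remainder = {total:.1f} W/m^2 (should be ~343)")
```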
If the environmental lapse rate (λ) is lower than the dry adiabatic lapse rate (λd), the equilibrium will be stable. If the environmental and dry adiabatic lapse rates are identical, the static equilibrium will be neutral. If λ is higher than λd, the static equilibrium will be unstable.
The latter happened during my last experiment, so the phenomenon is observable. If the surrounding parcels of air are not in hydrostatic equilibrium, the air parcel's static stability will turn chaotic. The latter often happens with climate; that is why climate science is full of physics errors and misinterpretations.
Summary from previous section:
- Bolometric solar irradiance is mitigated mainly by distance on its way towards the Earth; a further amount is mitigated by interplanetary dust and the Sun's gravity field.
- The bolometric solar irradiance on TOA is approximately 1365 W/m^2.
- From the bolometric solar irradiance on TOA, 682.64 W/m^2 are thermal radiation, i.e. energy that can be transferred as heat or work.
- After penetrating the Earth's atmosphere, the solar thermal radiation is mitigated by absorption, scattering and reflection before it strikes the surface (biosphere, cryosphere, lithosphere and hydrosphere). As it strikes the surface, part of the incident thermal radiation is reflected, and the incident thermal radiation decreases to 343 W/m^2. (Please see Figure 2)
- From incident thermal radiation of 343 W/m^2, the Earth’s surface absorbs ~240 W/m^2.
- Measurements on the hemisphere facing the Sun, at zero zenith angle, give a flux of solar power (S) at Earth's surface of ~1000 W/m^2, which is used to calculate local and regional insolation at a given hour of the day. Incident solar thermal radiation diminishes as the angle of incidence increases.
- The formula to calculate insolation is I = S * cos(Z)
- Where I is the insolation, S is the flux of solar power at zero zenith angle (~1000 W/m^2), and Z is the zenith angle obtained from the latitude, the solar angle of incidence and the hour of the day.
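A minimal Python sketch of this formula (the cosine form is the standard way the zenith angle enters; the 1000 W/m^2 zenith value is the figure given above):

```python
import math

S = 1000.0  # flux of solar power with the Sun at the zenith, W/m^2

def insolation(zenith_angle_deg):
    """I = S * cos(Z); clipped to zero when the Sun is below the horizon."""
    return S * max(math.cos(math.radians(zenith_angle_deg)), 0.0)

for z in (0, 30, 60, 80, 90):
    print(f"zenith angle {z:2d} deg -> insolation {insolation(z):6.1f} W/m^2")
```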
THERMAL EFFICIENCY AND DOWNWARD RADIATION FROM THE ATMOSPHERE
The Kelvin-Planck formulation of the second law of thermodynamics states that it is impossible for a system to undergo a cyclical process whose only effects are the flow of heat into the system from a warm reservoir and the performance, by the system, of an equivalent amount of work on its surroundings.
In other words, the second law of thermodynamics establishes that no process in nature is 100% efficient.
Another interpretation of the second law states that the heat always flows from higher energy density systems to lower energy density systems. [1, 2, 5, 6, and 7]
Heat is energy in transit, i.e. in the process of being transferred from one system to another. For this reason, we conclude that heat is a process function (a process, not a state of a system). [1, 2, 5, 6, and 7]
Heat can be transferred from one system to another by three mechanisms: conduction, convection and radiation.
When heat is transferred by radiation, we refer to it as thermal radiation or dynamic energy.
In the surface-atmosphere system, heat is transferred by all three mechanisms; nevertheless, we will only consider thermal radiation in this section.
The thermal efficiency coefficient is the fraction of heat that can be converted into work [1, 2, and 5].
Heat and work are irreversible processes in the real world. For this reason, the thermal efficiency coefficient (ε) of thermal radiation cannot be higher than 0.5.
The temperature of soil (dry clay) in a pot, whose surface is 1 m^2 and whose volume is 1 m^3, at 16:30 hrs (CST) was 295.25 K, while the temperature of the atmosphere was 297.35 K. The thermal efficiency from the atmosphere to the soil was:
ε = (Thigh- Tlow)/Thigh
ε = (297.35 K - 295.25 K) / 297.35 K = 2.1 K / 297.35 K = 0.0071
Another equation to calculate the thermal efficiency coefficient is as follows:
ε = 1 - (295.25 K / 297.35 K) = 1 - 0.99294 = 0.0071
From this example we see that thermal radiation transfer happens from the atmosphere to the pot with an efficiency of 0.0071, or 0.71%. This means that the thermal radiation from the atmosphere converted into usable thermal potential energy or any other form of usable thermal energy which is stored by dry clay in the pot is 0.71%.
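A couple of lines of Python reproduce this figure with both of the equivalent formulas above (the temperatures are the ones measured in the example):

```python
t_atmosphere = 297.35  # K, the warmer reservoir in the example
t_clay = 295.25        # K, the dry clay in the pot

eps_a = (t_atmosphere - t_clay) / t_atmosphere  # (Thigh - Tlow) / Thigh
eps_b = 1 - t_clay / t_atmosphere               # equivalent form

print(f"{eps_a:.4f}  {eps_b:.4f}")  # both print 0.0071
```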
Thermal radiation is absolutely dependent on temperature; therefore, the thermal radiation emitted by the atmosphere and absorbed by the pot would be:
q/A = 0.201 * (5.6697 x 10^-8 W/(m^2 K^4)) * ((295.25 K)^4 - (297.35 K)^4) = -2.49 W/m^2
The minus sign means that the transfer of heat happens from the surroundings to the surface, because the atmosphere is warmer than the dry clay in the pot. 0.201 is the average emittance of the atmosphere. Consequently, the thermal radiation from the air to the dry clay in the pot is 2.49 W/m^2.
Given that clay has an absorptivity limit, which is around 0.65, the absorbed thermal radiation from the atmosphere is 2.49 W/m2 * 0.65 = 1.62 W/m2.
From these 1.62 W/m^2, the thermal radiation convertible to work, i.e. the usable thermal energy, is 1.62 W/m^2 * 0.0071 = 0.0115 W/m^2.
Solving for q:
q = 1 m^2 (0.0115 W/m^2 )= 0.0115 W
As the total process takes one second, the energy implied in the process is:
E = 0.0115 W * 1 s = 0.0115 J
And the change of temperature of dry clay caused by 0.0115 W is:
ΔTclay = E / (m * Cp) = 0.0115 J / (1.2 kg * 1000 J/(kg K)) = 9.6 x 10^-6 K
Evidently, the effect of downward radiation is negligible.
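The whole chain of numbers in this example can be retraced with a short Python sketch (the emittance of 0.201, the absorptivity of 0.65, and the 1.2 kg clay mass are the values used in the text):

```python
SIGMA = 5.6697e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)

emittance_atm = 0.201  # average emittance of the atmosphere (from the text)
absorptivity = 0.65    # absorptivity limit of dry clay (from the text)
t_atm, t_clay = 297.35, 295.25   # K
mass, cp = 1.2, 1000.0           # kg and J/(kg K), values used in the text
area, seconds = 1.0, 1.0         # m^2 and s

q_per_area = emittance_atm * SIGMA * (t_clay**4 - t_atm**4)  # ~ -2.49 W/m^2 (toward the clay)
absorbed = abs(q_per_area) * absorptivity                    # ~ 1.62 W/m^2
efficiency = (t_atm - t_clay) / t_atm                        # ~ 0.0071
usable = absorbed * efficiency                               # ~ 0.0115 W/m^2
energy = usable * area * seconds                             # ~ 0.0115 J
delta_t = energy / (mass * cp)                               # ~ 9.6e-6 K

print(f"q/A = {q_per_area:.2f} W/m^2, usable = {usable:.4f} W/m^2, dT = {delta_t:.1e} K")
```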
THERMAL RADIATION BETWEEN THE ATMOSPHERE AND THE SURFACE
For this topic, we will consider only the lower troposphere, the layer of air extending from the surface up to an altitude of 3 km.
From calculations of global energy budget, we found that the solar power absorbed by the atmosphere was 136.53 W/m^2.
To this amount of solar power, which is transformed into static energy, we add the convective power flux from the surface to the atmosphere, the latent heat of evaporation from the surface, and the thermal radiation emitted by the surface that is absorbed by the atmosphere. We obtain the following total power flux:
Qatmos = 136.53 W/m^2 + 28.8 W/m^2 + 115.2 W/m^2 + 33.6 W/m^2 = 314.13 W/m^2
Given that the total amount of thermal energy contained by the surface is 240 W/m^2, we find that the atmosphere contains a higher amount of thermal energy than the surface. However, not all of this energy is radiated towards the surface, but only about 20% of it (the atmosphere's average emittance of 0.201), because the main part is transformed into unusable stationary energy. Thus, the total amount emitted from the atmosphere towards the surface is:
314.13 W/m^2 * 0.201 = 63.14 W/m^2
This value coincides with the measurements taken during my first experiment on downward radiation from the atmosphere towards the surface.
From this amount, the surface absorbs 44.2 W/m^2, which would cause a change of temperature of the surface of 0.000025 K.
Nevertheless, the correct procedure to calculate the net rate of thermal radiation exchange between the surface and the atmosphere is by using the following formula:
Qnet/A = (σ (Tw^4 – Tc^4)) / ((1/ϵw) + (1/ϵc) - 1)
For the case of a surface at 310 K and an atmosphere at 298 K, the net rate of thermal radiation exchange is:
Qnet/A = ((5.6697 x 10^-8 W/m^2 K^4) * ((310 K)^4 – (298 K)^4)) / ((1/0.65) + (1/0.201) - 1)
Qnet/A = (76.5 W/m^2)/5.51 = 13.87 W/m^2.
13.87 W/m^2 is the net amount of thermal radiation exchanged between the atmosphere and the surface, under such temperature conditions, per square meter of surface each second.
13.87 W/m^2 would cause a change of temperature of the surface of 0.000008 K, while the change of temperature of the air for this loss of energy would be -0.24 K.
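Here is a small Python sketch of that two-surface exchange formula (the gray-body, parallel-surface form, with the emissivities used in the text):

```python
SIGMA = 5.6697e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_exchange(t_warm, t_cool, eps_warm, eps_cool):
    """Net radiative exchange between two gray surfaces, per unit area."""
    return SIGMA * (t_warm**4 - t_cool**4) / (1.0 / eps_warm + 1.0 / eps_cool - 1.0)

q_net = net_exchange(t_warm=310.0, t_cool=298.0, eps_warm=0.65, eps_cool=0.201)
print(f"{q_net:.2f} W/m^2")  # ~13.9 W/m^2
```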
Complexity appears when there are more than two surfaces partially or totally facing either the surface or the atmosphere. In such cases we have to use another formula that integrates the flux from each of the surfaces. At any rate, the emitter will always be the warmer surface and the absorbers will always be the cooler surfaces.
Dampened clay shows a particularity: its temperature remains constant, at least during nighttime. In this situation, the net rate of thermal radiation flux is inversely proportional to the temperature of the atmosphere. If the temperature of the clay decreases with time, the correlation is proportional, but not linear.
The following graph shows the conditions of dampened clay (Figure 1):
September 1, 2011 |
The Apollo Programme
In 1961 President John F. Kennedy announced that the United States would land a man on the Moon before 1970. The method chosen by the National Aeronautics and Space Administration was known as the "lunar-orbit rendezvous" method.
A Saturn V rocket would launch the three-man Apollo spacecraft, weighing about 45,000 kilograms, on course to the Moon. The spacecraft was built in three sections: command module, service module and lunar module. After the two-day journey to the Moon, the spacecraft would be put in lunar orbit. The lunar module, with two astronauts aboard, would then "undock" from the mother ship and descend to the surface of the Moon, leaving the third man in orbit in the command module. After landing, the astronauts would make scientific observations and collect samples. They would then take off from the Moon in the top half, or "ascent stage", of the lunar module and rejoin the orbiting command module. After "re-docking", the astronauts would discard the lunar module and blast out of orbit for the return to Earth. Only the conical command module was designed to return to Earth for a "splashdown" landing in the sea.
The programme received a setback in 1967 when fire broke out in an Apollo spacecraft during ground tests, killing three astronauts working inside. Design changes were made and a number of unmanned Apollo craft were launched before Apollo 7 made the first manned flight in Earth orbit in 1968. Later that year Apollo 8 flew round the Moon, making ten Moon orbits before returning to Earth. In March 1969 Apollo 9 tested the lunar module in space for the first time and in May 1969 Apollo 10 made 31 Moon orbits. Two astronauts descended in the lunar module to within 15 kilometres of the surface.
The climax of the programme came in July 1969 with the Moon landing of Apollo 11. Neil Armstrong and Edwin Aldrin made a safe touchdown in the Sea of Tranquillity. In November came a second landing by Charles Conrad and Alan Bean in Apollo 12.
Disaster almost overtook Apollo 13 in April 1970. The spacecraft was damaged more than 320,000 kilometres from Earth and had to fly round the Moon and return without attempting a landing.
Apollo 14, with Alan Shepard and Edgar Mitchell, made a successful Moon landing in February 1971. In July 1971, Apollo 15 landed David Scott and Jim Irwin with the first lunar rover vehicle. The last two Apollo Moon flights took place in 1972. John Young and Charles Duke landed in Apollo 16 in April, and Eugene Cernan and Harrison Schmitt in Apollo 17 in December.
This brought the Apollo programme to an end, although some of the equipment was later used in the Apollo-Soyuz flight (with cosmonauts from the Soviet Union).
Researchers think they've found a low-cost machine for producing materials that can convert waste heat into electricity: the microwave oven.
A team from the Rensselaer Polytechnic Institute last week published a paper in Nature Materials that describes an improved method for making thermoelectric materials. Members of the team have also created a startup company to commercialize the technology.
Thermoelectric devices are already in use today in portable coolers or to heat car seats, either making electricity from heat or using electric power for cooling. Many scientists and engineers are trying to improve the efficiency of these devices and to bring down their costs, which would open up more applications for them, such as refrigerators with no moving parts or producing electricity from the heat given off by the exhaust pipes of cars or industrial plants.
The Rensselaer Polytechnic group used a combination of nanostructuring and doping of traditional thermoelectric materials with tiny amounts of sulfur. Then, in the lab, they used an everyday $40 microwave to heat the material, which brings about a desirable structure and properties in a few minutes.
Using microwaves during production is significant because it means the process could be scaled up at low cost using industrial-scale microwave ovens, said Ganpati Ramanath, a professor at the Department of Materials Science and Engineering at Rensselaer and co-author of the paper. "We can make gram quantities in less than a minute so that's very good for industrial scale production," he said.
Ramanath and fellow researchers have applied for a patent for their nanomaterial production method and have formed a startup company called ThermoAura to commercialize their academic work.
In addition to government sources, such as the Department of Energy and the National Science Foundation, IBM also had a hand in funding the research. One of the possible applications is to use thermoelectric devices to draw away heat from computers and convert it to electricity. |
The Government of Ancient Egypt
The government of Ancient Egypt depended on two important factors: the pharaoh and agriculture. The pharaoh was a vital part of the Egyptian government, and he appointed the other officials during most periods. The highest officials took their orders directly from the king. Agriculture was the foundation of Egypt's economy and government.
History of Ancient Egypt's Government
Before the Old Kingdom
Scholars have found few government records from before the Old Kingdom Period. Evidence shows that Egypt was a united kingdom with a single ruler, which indicates that the first pharaohs must have set up a form of central government and established an economic system.
Before the Persian Period, the Egyptian economy was a barter system and not monetary. People paid taxes to the government in the form of crops, livestock, jewelry or precious stones. In return, the government maintained peace in the land, saved food in case of famine and conducted public works.
© Internet Archive Book Images - Barter trade system
The Old Kingdom
Ancient Egypt's government became more centralized during the Old Kingdom. Building large stone pyramids meant the pharaoh had to make changes to the government. Pharaohs from Dynasties Three and Four maintained a strong central government and they had almost absolute power.
Earlier pharaohs created a strong government that allowed them to summon large work forces. They appointed their high officials, and they chose members of their family. These men were loyal to the pharaoh. The government then let the pharaoh gather and distribute enough food to support huge numbers of workers, which allowed them to build large stone pyramids.
© Bruno Girin - The famous pyramids at Giza
During Dynasties Five and Six, the pharaoh's power lessened. Government positions had become hereditary and the district governors, called nomarchs, grew powerful. By the end of the Old Kingdom, nomarchs were ruling their nomes (districts) without the oversight of the pharaoh. When the pharaohs lost control of the nomes, the central government collapsed.
The Intermediate Periods
Modern scholars place three Intermediate Periods into the timeline of Ancient Egypt's history. The Old, Middle and New Kingdoms were each followed by an intermediate period. All three of these had unique characteristics, but they share two common features. Each represents a time when Egypt was not unified, and there was no centralized government.
The Middle Kingdom
The Old Kingdom's government served as a base for the Middle Kingdom's. The pharaoh made changes, including the addition of more officials. Titles and duties were more specific, which limited each official's sphere of influence.
The central government became more involved in the nomes and had more control of individual people and what they paid in taxes. The pharaoh tried to limit the power of the nomarchs. He appointed officials to oversee their activity and he weakened the nomes by making towns the basic unit of the government. The mayors of individual towns became powerful.
The increase in government officials led to the growth of the middle-class bureaucracy.
Officials based taxes on an assessment of cultivable land and the flooding of the Nile. During periods of low flooding, officials reduced taxes, while the government levied a poll tax on each citizen, which they paid in produce or craft goods.
© Internet Archive Book Images - People paid taxes in produce
The New Kingdom
The pharaohs of the New Kingdom continued to build their government on the foundations of earlier governments. One change they made was a decrease in the land area of nomes and an increase in their number. During this period, the pharaohs created a standing army and created military positions. Before this, the pharaohs formed armies using conscripted people.
The 19th Dynasty saw the beginning of a break-up in the legal system. Before this dynasty, government-appointed judges made decisions based on evidence presented to them. During this period, however, people began obtaining verdicts from oracles. Priests read a list of suspects to the state god's image, and the statue indicated the guilty party. This change represented an increase in the priesthood's political power, and it was open to corruption.
After the New Kingdom
During the Late Period, the pharaohs reunited Egypt and centralized the government. When Persia conquered Egypt, the new rulers established a monetary economy. The Persian monarchs made Egypt a satrapy, and appointed a governor to rule. The regional administrative system was kept in place. The Greek and Roman Empires later imposed their governmental systems on Egypt, also keeping some aspects of Egypt's regional government.
Ancient Egypt Government Officials
Egypt had many different government officials. Some operated at national level, while others were regional.
The vizier was the most important person after the pharaoh. Each pharaoh appointed his/her vizier, who oversaw the judiciary system and the government administration. The vizier sat in the high court, which handled serious legal cases, often involving capital punishment. Egypt usually had one vizier; sometimes there were two, who oversaw either Upper or Lower Egypt.
Another important position was the chief treasurer. He was responsible for collecting and assessing taxes. The treasurer also monitored the redistribution of the items brought in through taxes. He had other officials under his command, who helped collect taxes and keep tax records.
Some periods also had a general. He was responsible for organizing and training the army. Either the general or the pharaoh led the army into battle. Sometimes, the crown prince served as the general before ascending to the throne.
Overseer was a common title in the Ancient Egyptian government. They managed work sites, like the pyramids, and some also watched over granaries and monitored their contents.
Scribes formed the basis of the Egyptian government. They wrote official documents and could move to higher positions.
Ancient Egypt Government Documents
A lot of the information scholars have about Egypt's government comes from tomb inscriptions. Government officials either built their own tombs or the pharaoh gave them one. Their tombs included inscriptions detailing their titles and some events from their lives. As an example, one official's tomb had a description of a time he greeted a foreign trade embassy for the pharaoh.
© Clio20 - Stela of Minnakht, chief of scribes
During the New Kingdom, some pharaohs gave their officials tombs, which helps identify those who served specific pharaohs. They also reveal changes in the government's high officials. Many pharaohs appointed officials from the bureaucracy, and some appointed men who had served in the military.
Scholars have also found law documents, including detailed cases of tomb raiders. They mention the steps the government took to punish them and try to prevent further raiding.
High officials sealed documents detailing property transfers. Each spouse maintained control of any property brought into a marriage, even if there was a divorce. Both men and women could file for divorce, though it was easier for a man to obtain one. In the event of a divorce, the man had to compensate the woman, and the government ensured that people followed these rules.
Government in Thebes
Egypt's central government moved when the pharaoh changed his/her capital. The central officials worked out of the royal compound. Thebes served as a government and religious capital for centuries.
When Thebes was Egypt's capital, the mayor of Thebes held a position of power.
© Vyacheslav Argenberg - Thebes
Certain high officials were buried in the Valley of the Kings. Their tombs reveal a few significant details, such as the position they held and whom they served. There are also mentions of honors granted by the pharaoh, who clearly valued an official to whom he granted a tomb in the royal cemetery.
The pharaoh sometimes had a funerary temple built for one of his officials in the Theban Necropolis. They also granted favored officials land revenues to provide goods for their funerary cult.
Ancient Egypt Government Facts
- The pharaoh was the ultimate authority in Ancient Egypt.
- The vizier was the most powerful government official.
- Viziers were second only to the pharaoh in power.
- Egypt was divided into nomes, and a nomarch governed each one.
- People paid taxes with agriculture produce or precious materials.
- The government stored food and distributed it to workers or to the people in times of famine.
- The government ran building projects, like the pyramids. |
perspective, in art, any method employed to represent three-dimensional space on a flat surface or in relief sculpture. Although many periods in art showed some progressive diminution of objects seen in depth, linear perspective, in the modern sense, was probably first formulated in 15th-century Florence by the architects Brunelleschi and Alberti. Brunelleschi designed (c.1420) two panels depicting architectural views of Florence, in which he constructed a mathematically proportioned system of perspective. Alberti, in his De pittura (1435), harnessed the technique of perspective to the theory that painting is an imitation of reality. He viewed the picture plane as a window through which one looks at the visible world. Objects in the picture were to be systematically foreshortened as they receded into the distance. Orthogonal lines converged to a single vanishing point, which was to correspond to the fixed viewpoint of the spectator. Reflecting the growth of humanism, the spectator played a new role in art, as man was to determine the measurement of all things. The Italian artists who experimented with perspective, including Donatello, Masaccio, Uccello, and Piero della Francesca, sometimes diverged from the rules for a greater artistic effect. In general, however, the 15th-century Italian artists tended to work within a geometrical system, whereas the contemporary Flemish painters used more empirical means to achieve a convincing delineation of space. The technique of linear perspective had an immense influence on the development of Western art. In the 20th cent., however, its use has considerably declined, since many artists have rebelled against the conception of art as a mirror image of reality. Aerial or atmospheric perspective was developed primarily by Leonardo da Vinci. In general, it is based on the perception that contrasts of color and of light and dark appear greater, and contours more defined, in near objects than in far. Aerial perspective takes note of the recessive character of cool colors and the prominence of warm colors. In East Asian art, perspective effects were achieved by the atmospheric method, often incorporating zones of mist to separate near and far space.
See R. V. Cole, Perspective for Artists (1976); J. Cody, Atlas of Foreshortening (1984); M. Kubovy, The Psychology of Perspective and Renaissance Art (1988).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Supernova 1987a stands as the best-studied explosion of its type. Thanks to its location in the nearby galaxy known as the Large Magellanic Cloud (LMC), astronomers have been observing it starting with the first moments after the explosion. As a result, its remains provide some of the best tests of our ideas about supernova explosions, which produce and distribute many of the heavy elements required for planet formation and life.
But a number of the elements typically synthesized in supernovae are unstable and decay over the months and years that follow, filling the debris remnant with energy and keeping it bright long after the energy from the initial explosion has dissipated. In particular, the decay of a radioactive isotope of titanium generated in supernova explosions is thought to be responsible for much of the optical, infrared, and ultraviolet light astronomers record. New X-ray observations have now revealed the decay of titanium in a supernova remnant directly, and found it to be sufficient to power much of the emission in the years immediately following the explosion.
S. A. Grebenev, A. A. Lutovinov, S. S. Tsygankov, and C. Winkler also found there was more titanium in the supernova remnant SNR 1987a than expected from theory. Though the range of possible values overlapped the theoretical maximum, the authors suggested it may be necessary to reexamine details in the physical models of nuclear fusion during supernova explosions. These results provided the best data so far for the physical conditions in supernova remnants, including how they produce and disperse heavier elements into interstellar space.
While much of the Universe's hydrogen, helium, and lithium were produced in the earliest minutes after the Big Bang, all other elements were birthed by stars. Some of these elements were fused during the star's life, but others were forged during explosions of the heaviest stars (those more than 8 times the mass of the Sun). These explosions provided many of the raw ingredients for planets like Earth and later generations of stars.
In the first few years after the explosion, most of the light from a supernova comes from radioactive cobalt (the isotopes 56Co and 57Co). As those isotopes fade, a longer-lived radioactive isotope of titanium (44Ti), forged in the explosion itself, takes over; it too is unstable, decaying into lighter elements over time. This decay process keeps the supernova remnant hot, and powers the emission of much of the ultraviolet, visible, and infrared light. This keeps the material bright until, years later, the expanding shells of ejected matter collide with gas in interstellar space. That collision can ultimately end up outshining the remnant itself.
The decay of 44Ti produces high-energy X-ray photons at three distinct wavelengths. The researchers in the current study aimed the INTEGRAL (INTErnational Gamma-RAy Laboratory) satellite at SNR 1987a for about 4.5 million seconds (a total of over seven weeks) to obtain clear X-ray spectra. This process was complicated by the presence of a pulsar and a black hole binary system that, from our perspective, appear near SNR 1987a in the sky—these bodies also emit X-ray light. The astronomers identified the telltale spectral signature of titanium decay, and extrapolated from the number of photons (the flux) to determine the mass of the titanium before the decay process began.
The mass actually exceeded the amount predicted from theory, though the range of possible values in their estimates includes the maximum expected value based on predictions. This was a little surprising, since the only other supernova remnant with measured titanium emission (Cassiopeia A) showed far less. However, supernova 1987a was exceptional: the progenitor star was a massive blue star, as opposed to the red supergiants that produce most supernovae of this type. As a result, its explosion could have followed a slightly different path than theory predicted; the authors suggested investigating this possibility.
Supernova explosions are physically complicated systems, both from a theoretical and observational point of view. Detailed observations such as this allow astronomers to refine their models, to understand how chemical elements are synthesized and dispersed as stars die. |
What Exactly Is Body Language?
Body language is communication without words. It is the movement of facial features or body parts, which intentionally or unintentionally express thoughts and attitudes. Here are three key types:
Facial expressions: A smile, a slight frown or a straight face are all different expressions that add another layer of meaning to what you are saying. Eye contact is an especially significant part of body language that you need to pay attention to while speaking German or listening to someone else.
Hand gestures: When you talk, do you move your hands around or do you keep them at your side? Folded arms, hands on hips or hands in pockets can create different messages even if you are saying the same thing.
Body position: The position of your body also means a lot. Leaning forward while somebody is speaking or how far you stand apart from your audience—it all matters.
Why Is Body Language Important for Learning German?
In his renowned research on nonverbal communication, UCLA Professor Albert Mehrabian concluded that communication consists of three separate elements: words, tone of voice and body language.
He researched how people communicate feelings and attitudes, and found that only seven percent of that communication comes from words. Meanwhile, 38 percent of messages are communicated by tone of voice, and 55 percent of messages are communicated by body language.
If body language accounts for more than half of your communication, then you need to start learning and improving your facial expressions, hand gestures and body positions today! Learning body language will help you express yourself better and understand others.
How to Use Body Language in German Communication
Learn This Body Language to Avoid Misunderstandings
If everyone said what they truly meant, it would be much easier to communicate in German (and any language!). Unfortunately, that is not always the case.
Sometimes people fail to express themselves clearly with words. Sometimes people intentionally say the opposite of what they mean.
Therefore, learning a few common body gestures that indicate a contradiction, sarcasm or confusion can be helpful to avoiding misunderstandings.
Most people roll their eyes to show disapproval or annoyance.
People often make "air quotes," moving the index and middle fingers of both hands up and down. They do this to stress a word or phrase, mainly because they do not think it is the right word to use in that situation. Air quotes usually connote sarcasm.
Arms crossed defensively
If someone crosses his/her arms, it often means that he/she disagrees with what is being said.
For example, imagine you are arguing with a colleague because you think the team should do a task differently. He says, “I hear the basis of your arguments,” but his arms are crossed over his chest. Despite his words, he probably does not agree with your idea at all.
Head shaking typically indicates disagreement or disappointment.
Often, you might ask someone a question and instead of answering with words, he/she will simply shake his/her head back and forth. That means, “no.”
Learn This Body Language to Ensure That Others Understand You
Sometimes, you feel like people do not pay enough attention to what you are saying. Your listeners might be distracted by a notification on their phones or something else on their minds.
It is useful to be able to read body language that indicates confusion and distraction. Here are some examples:
Avoiding eye contact
The frequency and intensity of eye contact depend on a person's cultural background and personality. However, a person typically holds eye contact when he/she converses with others. If you detect a lack of eye contact, it could mean that your listener is:
No longer able to follow the conversation
If someone is avoiding eye contact with you, just smile and check in with them verbally. Some common expressions to do this include:
Am I making sense? (Am I speaking in a logical/understandable way?)
Are you still with me? (Are you following/understanding what I am saying?)
Are we on the same page? (Are we in agreement/understanding each other?)
Is anything unclear? (Is anything about what I am saying confusing?)
Scratching face/rubbing nose
If you are explaining a new project to your teammate and he keeps scratching his face or chin, it is likely that he does not fully understand. He is confused.
When you see such body language, you can use one of the expressions above to draw attention and encourage your listener to seek clarification.
Resting head in hands/playing with hair
Both of these gestures indicate that someone is bored and distracted. If you are telling your friend about your weekend biking trip, but she keeps playing with her hair, she probably has something else on her mind, like what to cook for dinner.
Learn This Body Language to Appear Confident While Speaking German
Understanding body language also helps when you talk. You can show your confidence not only with words, but also with the right body language.
If you remember from above, shaking one’s head shows disagreement. Nodding (moving your head up and down) is the opposite. It expresses that you agree with someone.
Therefore, make sure to nod your head when you say, “that is an excellent idea” to show your friend that you genuinely agree with her.
A smile makes you appear friendly and encourages others to open up to you. When you ask for feedback about your project, add a smile to this question: “What do you think of it?”
You will come across as being confident about your work and willing to hear any feedback.
Gesturing with hands
Move your hands widely and decisively, and you will show others your ownership of the space and the topic.
For example, you can extend your arms to the sides and turn the palms up, moving them slightly left and right in sync with the rhythm of your speech. This movement is particularly helpful if you are presenting something to your team. This video provides a great demonstration.
However, avoid doing it too extensively as it might distract from the content of your presentation.
Standing/sitting up straight
It is essential to stand or sit up straight. It makes you look taller and seem more important. So make sure you stand up straight when you introduce yourself with this sentence, for example:
“My name is Lila and I am the new marketing assistant.”
An open stance in your shoulders and arms indicates that you are open to suggestions, ideas and even constructive feedback. Do not crouch or bend—this makes you look insecure.
Make sure to stand with an open position when you search for input from someone with a question like, “What do you think we can do better?”
Present-day Mars features deep canyons, mountains that would dwarf Mount Everest and the largest volcanoes in the entire solar system. Three and a half billion years ago, it was also home to a vast ocean fed by scores of rivers and lakes, according to a recent report.
Three and a half billion years ago, right when life was first forming on Earth, was Mars home to a vast ocean swimming with alien fish? This scenario may have been possible, according to scientists from the University of Colorado at Boulder. In a report published in the journal Nature Geoscience, they suggest that a massive sea with an average depth of 1,800 feet once covered more than a third of the red planet’s surface.
The team arrived at this conclusion after studying the planet’s numerous delta deposits and river valleys, using topographical data from various orbiting missions. They also determined that ancient Mars may have had an Earth-like hydrological cycle, including cloud formation, precipitation and groundwater accumulation.
For years, scientists have been debating whether Mars once had liquid water on its surface, a prerequisite for living organisms to grow and survive. Present-day Mars has two permanent polar ice caps and frozen water beneath its permafrost, but its temperature and atmospheric pressure are too low for water to exist in liquid form. High-resolution photographs, however, have revealed features that are consistent with liquid water, including gullies, channels and lake basins.
The University of Colorado team’s paper appeared on the heels of another report supporting the theory that liquid water existed on Mars several billion years ago. According to NASA scientists, rocks collected by the Mars rover Spirit in 2005 were found to contain high concentrations of carbonate. These findings, which were published in the journal Science, indicate that Mars once had a wet, non-acidic environment that may have been favorable for life.
If the primordial ocean hypothesis is correct, where did all the Martian water go? The authors of the University of Colorado study hope that further exploration of Mars, including the Mars Atmosphere and Volatile Evolution Mission (MAVEN) in 2013, will provide new clues. |
Tycho in 60 Seconds
Narrator (April Hobart, CXC): Over four hundred years ago, the Danish astronomer Tycho Brahe studied the explosion of a star that later became known as Tycho's supernova. A look at Tycho in X-rays by NASA's Chandra X-ray Observatory shows that the supernova remnant contains an expanding bubble of superheated debris, which sits within an even more rapidly moving shell of extremely high-energy electrons. A very long Chandra observation of Tycho totaling about a million seconds of time, has uncovered new and unexpected structures in this aftermath of the star’s explosion. A series of stripes in the remnant provides novel evidence for particles that have been accelerated to extremely high energies. This is an important clue to better understanding the object that Tycho Brahe first saw back in 1572. |
The land within the boundaries of the United States—covering nearly 2.3 billion acres—provides food, fiber, and shelter for all Americans, as well as terrestrial habitat for many other species.
- Land is the source of most extractable resources, such as minerals and petroleum.
- Land produces renewable resources and commodities including livestock, vegetables, fruit, grain, and timber.
- Land supports residential, industrial, commercial, transportation, and other uses.
- Land, and the ecosystems it is part of, provide services such as trapping chemicals as they move through soil, storing and breaking down chemicals and wastes, and filtering and storing water.
The use of land, what is applied to or released on it, and its condition change constantly: there are changes in the types and amounts of resources that are extracted, the distribution and nature of land cover types, the amounts and types of chemicals used and wastes managed, and perceptions of the land's value.
While human activities on land (including food and fiber production, land development, manufacturing, and resource extraction) provide multiple economic, social, and environmental benefits to communities, they can also involve the creation, use, or release of chemicals and pollutants that can affect the environment and human health.
EPA works with other federal agencies, states, and partners to protect land resources, ecosystems, environmental processes, and uses of land through regulation of chemicals, waste, and pollutants, and through cleanup and restoration of contaminated lands.
The complex responsibilities of land management underscore the challenges of collecting data and assessing trends on the state of land. Numerous agencies and individuals have responsibilities for managing and protecting land in the United States. Responsibilities may include protecting resources associated with land (e.g., timber, minerals) and/or land uses (e.g., wilderness designations, regulatory controls).
- Approximately 40 percent of the nation is owned or managed by public agencies.1 The other 60 percent is managed by private owners under a variety of federal, state, and local laws.
- The largest owners of public land at the federal level are the Bureau of Land Management, the U.S. Forest Service, the National Park Service, the U.S. Fish and Wildlife Service, and the U.S. Department of Defense.
- Local governments have primary responsibilities for regulating land use, while state and federal agencies regulate chemicals and waste that are frequently used on, stored on, or released to land.
ROE indicators are presented to address five fundamental questions about the state of the nation's land:
- What are the trends in land cover and their effects on human health and the environment? "Land cover" refers to the actual or physical presence of vegetation or other materials (e.g., rock, snow, buildings) on the surface of the land. It is important from the perspective of understanding land as a resource and its ability to support humans and other species. Changes in land cover can affect other media (e.g., air and water).
- What are the trends in land use and their effects on human health and the environment? "Land use” refers to the economic and cultural activities practiced by humans on land. Land use can have effects on both human health and the environment, particularly as land is urbanized or used for agricultural purposes.
- What are the trends in chemicals used on the land and their effects on human health and the environment? Various chemicals (e.g., pesticides, fertilizers, and toxic chemicals) are applied or released to land for many purposes. The quantity and diversity of chemicals and the potential for interactions among them create challenges in understanding the full effects of their use.
- What are the trends in wastes and their effects on human health and the environment? Numerous types of waste are generated as part of most human activities. Trends in wastes include trends in types and quantities of waste, and mechanisms for managing wastes. Waste trends reflect the efficiency of use (and reuse) of materials and the potential for land contamination.
- What are the trends in contaminated land and their effects on human health and the environment? Contaminated lands are lands affected by human activities or natural events (such as manufacturing, mining, waste disposal, volcanoes, or floods) that pose a concern to human health or the environment. |
what is a hypothesis? and what is its format?
- * A tentative and testable prediction about how changes in one thing are expected to explain and be accompanied by changes in something else.
- * The format: an "IF we ALTER something, THEN something will HAPPEN" statement
what are the strengths of qualitative research?
- * Research methods that emphasize depth of understanding and the deeper meanings of human experience, and that aim to generate theoretically richer, albeit more tentative, observations.
- * Commonly used qualitative methods include participant observations, direct observation, and unstructured or intensive interviewing.
What is Evidence-based practice?
- using evidence in practice- it is a process in which practitioners make practice decisions in light of the best research evidence available.
- * Involves evaluating the outcomes of practice decisions.
Ethical guidelines like
- * Voluntary participation
- * Informed consent- participants know what they are agreeing to.
- * No harm to the participant- unless knowingly and willingly accept risks of harm
- * No Deceiving subjects
- * Long term benefits outweigh violation of certain ethical norms
what is a paradigm?
- A model or frame of reference that shapes our observations and understandings.
- It interprets the world
- o For example, "functionalism" leads us to examine society in terms of the functions served by its constituent parts, whereas "interactionism" leads us to focus attention on the ways people deal with each other face-to-face and arrive at shared meanings for things.
what does functionalism paradigm say?
looks at everything as having a function.
What does interactionism paradigm say?
focuses on people's interaction face-to-face.
what does the postmodernism paradigm say?
completely impossible for anyone to be completely objective/ everything is subjective/ no objective truth.
what does Contemporary positivism paradigm say?
cannot be completely objective; there is objective truth to situations.
what does the Interpretivism paradigm say?
gain empathetic understanding of how people feel inside- deeper meanings of feelings
what does the critical Social Science paradigm say?
o focuses on oppression and competition between groups
what is deductive Approach relating to theory?
(top down) - start with known theory and create hypothesis
what is a theory:
* A systematic set of interrelated statements intended to explain some aspect of social life or enrich our sense of how people conduct and find meaning in their daily lives
what is inductive Approach related to theory?
- (bottom up) start with observations, create and test hypothesis then develop theory
- * Example: use of child theory and play therapy to allow a child to play with figures in a sand tray.
what is the definition of a concept?
* A mental image that symbolizes an idea, an object, an event, or a person.
what are the four types of non-probability sampling?
- 1. Accidental
- 2. Snowball
- 3. Quota
- 4. judgmental
what is the definition of sampling?
* The process of selecting a subset of elements (a sample) from a larger population in order to draw conclusions about that population.
define Quota sampling:
- a) A type of nonprobability sample
- units are selected into the sample on the basis of prespecified characteristics so that the total sample will have the same distribution of characteristics as are assumed to exist in the population being studied.
what are the advantage and limitation of quota sampling?
- i) Advantage: Convenience and more representative than accidental sampling
- ii) Limitations: Stratification is limited to only a few variables and thus may not be representative of the population
define Deviant case sampling:
- a) A type of nonprobability sampling in which cases selected for observation are those that are not thought to fit the regular pattern.
- i) For example, the deviant cases might exhibit a much greater or lesser extent of something.
what are the types of probablility sample?
- 1. stratified random sample
- 2. simple random sample
- 3. cluster smaple
define Stratified sampling:
- a) A probability sampling procedure
- uses stratification (breaking into groups then randomly selecting) to ensure that appropriate numbers of elements are drawn from homogeneous subsets of that population.
what are the advantages to stratified sampling?
- i) Advantages: Ensures that diverse elements of the population are included
- ii) Limitations: Must know where each element will fall into the strata
define Snowball sampling:
a) A nonprobability sample that is obtained by asking each person interviewed to suggest additional people for interviewing.
what are the advantages and limitations of snowball sampling?
- i) Advantages: Useful when potential subjects are hard to locate
- ii) Limitations: Favors the opinions of its starting point. Very easy to miss other networks
define Cluster sampling:
a) A multistage sampling procedure in which natural groups (clusters) are sampled initially, with the members of each selected group being sub sampled afterward.
what are the advantages and limitations of cluster sampling?
- i) Advantages: Good for larger studies, as it makes it more manageable
- ii) Limitations: Can be problematic if random selection is not used
define Simple random sampling:
a) probability sample in which the units that compose a population are assigned numbers. A set of random numbers is then generated, and the units having those numbers are included in the sample.
what are the advantages and limitations to simple random sampling?
- i) Advantages: very systematic, ensures that every element has the same chance of being chosen
- ii) Limitations: Can be limited by the sampling frame and minority elements may be under represented
define Systematic sampling:
a) probability sample in which every kth unit in a list is selected for inclusion in the sample.
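As an illustration of the two probability procedures just defined (simple random and systematic sampling), here is a small Python sketch on a toy population; the population and sample size are my own example values, not from the course material:

```python
import random

population = list(range(1, 101))  # a toy population of 100 numbered units
n = 10                            # desired sample size

# Simple random sampling: every unit has an equal chance of selection.
simple_random = sorted(random.sample(population, n))

# Systematic sampling: every k-th unit after a random start.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k]

print("simple random:", simple_random)
print("systematic:   ", systematic)
```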
define Disproportionate stratified sampling:
a) A sampling method aimed at ensuring that enough cases of certain minority groups are selected to allow for subgroup comparisons within each of those minority groups.
define Contingency questions:
- a) A survey question that is to be asked of only some of the respondents, depending on their response to some other questions.
- i) For example, all respondents might be asked whether they belong to the KKK, and only those who said yes would be asked how often they go to meetings. The latter would be a contingency question.
define Judgmental sample:
- a) nonprobability sample in which we select the units to be observed on the basis of our own judgment about which ones will be the most useful or representative.
- b) Another name for this is purposive sample.
what are the advantages and limitation to judgmental sampling?
- i) Advantages: Useful in exploratory studies, developing theory
- ii) Limitations: Bias exists, No way to know extent of representativeness
define Accidental sampling:
- a) A sampling method that selects elements simply because of their ready availability and convenience.
- b) Frequently used in social work because it is usually less expensive than other methods and because other methods may not be feasible for a particular type of study or population.
what are the advantages and limitations to accidental sampling?
- i) Advantages: Easy and convenient. Would suffice with testing new intervention
- ii) Limitations: No way to know if the sample is typical to the population at interest
definition of Research designs:
* A term often used in connection with whether logical arrangements permit causal inferences; also refers to all the decisions made in planning and conducting research.
define Trend study:
Longitudinal- purpose is to study one characteristic over time
define Descriptive study:
- a. Usually termed "survey"; examines the distribution of one variable
- b. A system for collecting info to describe, compare, or explain knowledge, attitudes, or behavior
- c. Used when the population is too big or dispersed to view collectively
- d. Common to explore questions
- e. Sometimes experimental or quasi-experimental
define Longitudinal studies
- monitor a given characteristic of some population over time.
- a. An example would be annual canvasses of schools of social work to identify trends over time in the number of students who specialize in direct practice, generalist practice, and administration and planning.
define Cohort study:
- longitudinal study- examines more specific subpopulations (cohorts) as they change over time.
- information from different members of the group may be used at each point of collection
purpose of Exploratory study:
a. Gain familiarity with a problem; often qualitative
purpose of Explanatory study:
a. Purpose is to explain a situation; qualitative
what is a Cross-sectional study:
a. Research studies that examine some phenomenon by taking a cross section of it at one point in time and analyzing that cross section carefully.
what is a Quasi-experiment:
- a. A research design that attempts to control for threats to internal validity and allow for causal inferences
- different from true experiments primarily by the lack of random assignment of subjects.
definition of an experiment? what three elements do all experiments have (3)
- a. A research method that attempts to provide maximum control for threats to internal validity by:
- 1. Randomly assigning individuals to experimental and control groups,
- 2. Introducing the IV (which typically is a program or intervention method) to the experimental group while withholding it from the control group
- 3. Comparing the amount of experimental and control group change on the DV
what is a Case study:
a. An idiographic examination of a single individual, family, group, organization, community, or society using a full variety of evidence regarding that case.
what is a time-series design?
a. Quasi-experimental designs in which multiple observations of a dependent variable are conducted before and after an intervention is introduced.
- what are the Rules for writing surveys? (8)
- o Clear instructions, no double-barreled questions, short, overall very neutral, purposeful order, pre-test the survey, exhaustive - all options available, mutually exclusive - can't belong to two categories
what are the Strengths of surveys: (5)
o Inexpensive, can reach large populations, able to generalize data, able to analyze multiple variables at the same time, high reliability
what are the Weakness of surveys:(4)
o Limited ability to show causality, rely on self-report- not able to measure social behavior, not able to include life context, questionable validity
define Operational definitions:
- * The concrete and specific definition of something in terms of the operations by which observations are to be categorized.
- o Example: "improvement" in school as the concept. Improvement means getting better grades, fewer times in detention, joining after-school clubs/sports teams
- The overall research process & designing a study
Three Basic Elements of a Good Experiment
- * Random assignment of subjects to experimental and control groups
- * Manipulation of independent variable
- * Control over extraneous variables
what are the steps to the Research Process: (8)
- Choose a problem - look at relevant theories and other research already done
- Formulate hypothesis- identify IV, DV, extraneous variables
- Select research design- quasi experimental vs experimental
- Select sample- how are you going to get sample? Control vs experimental groups
- Develop instruments of measurement - use established measures or create your own
- Collect data
- Analyze data
- Write the report
- In quantitative studies, the researcher predicts in advance the relationship they expect to find between variables. That prediction is called a hypothesis.
- Most hypotheses predict which variable influences the other - in other words, which is the cause and which is the effect.
what is Research Design
- a) Refers to the decisions made in planning and conducting research
- b) Often used in connection with whether logical arrangements permit causal inferences
what is a Sample?
a) People or things actually studied
what is good Measurement?
- a) A single problem can/should have multiple indicators
- 1. Increases reliability/validity
- 2. Allows for triangulation
in the research process what decisions need to be made about Data Collection?
- How you will collect the data (experiment vs. survey vs. field research vs. historical)
- when, where and by whom your data will be collected with each instrument.
what is data Analysis
- a) Process of synthesizing raw observations to demonstrate patterns (e.g., statistical tests)
- b) Statistical analysis specifies the statistics to be computed and statistical tests to be used in analyzing the data collected. It determines whether or not the data support the hypothesis.
what are the rules for reporting results?
a) Start with a review of your original question (What was your interest in this area of study? What, if any, research or theory informed your investigation?)
b) Provide a detailed description of your methodology: design, sample, interview guide (described in text but attached as an appendix), and method of analysis
c) Describe findings
what are the Purposes for data analysis?
- 1. Data reduction (description)
- 2. Pattern Identification (description & inference)
- 3. Generalizability (inference)
what is important in Qualitative results reporting?
- Organize by themes
- Use verbatim quotes to support interpretations
- discuss the meaning and implications of what you found
o Pretest-Posttest Control Group Design OR Classical Experimental Design:
- R O X O
- R O O
o Posttest-Only Control Group Design:
- R X O
- R O
o Solomon Four-Group Design:
- R O X O
- R O O
- R X O
- R O
o Alternative Treatment Design with Pretest:
- R O Xa O
- R O Xb O
o Dismantling Studies:
- R O Xab O
- R O Xa O
- R O Xb O
o Nonequivalent Comparison Group Design:
- Compares two existing groups that appear to be similar and measures change before and after an intervention is introduced into one of the groups.
- O X O
- O O
o Time-Series Designs:
- multiple observations of a dependent variable are conducted before and after an intervention is introduced.
- The more observations, the better- more data points help rule out history, maturation, or statistical regression.
- O O O O X O O O
what is a probe?
- qualitative research technique
- * A technique employed in interviewing to solicit a more complete answer to a question, this nondirective phrase or question is used to encourage a respondent to elaborate on an answer
- * It helps encourage the interviewee to give as full a response as possible.
- examples: please tell me more about that. Can you clarify?
what are the Level of measurement:(4)
nominal, ordinal, interval, ratio,
define Nominal level of measure:
measures refer to those variables whose attributes are simply different from one another. An example would be gender.
define Interval level of measure:
refer to those variables whose attributes are not only rank-ordered but also separated by a uniform distance between them. An example would be IQ.
define Ordinal level of measure:
refer to those variables whose attributes may be rank-ordered along some progression from more to less. An example would be the variable prejudice as composed of the attributes very prejudiced, somewhat prejudiced, and not at all prejudiced.
define Ratio level of measure:
rank order, uniform distance, and also based on a true zero point. An example would be age.
what is a single-system (single-subject) design?
- * A time-series design used to evaluate the impact of an intervention or a policy change on individual cases or systems.
- * Comparison of baseline and intervention phase
- * AB design A=baseline B= intervention
- o Positives: clear understanding of the client's problems, easily applied to practice, client- and intervention-specific
- o Limitations: may not be able to accurately get a baseline because of the client's needs; no time to use because of heavy caseloads
what is Internal validity
confidence that study accurately depicts whether one variable causes another
what are the Threats to internal validity (7) THIS RAM
- o Maturation- passage of time
- o History- other major life events happen that skew causality
- o Testing- clients get used to testing process
- o Instrumentation change- make sure to use same measures at pre and post test
- o Regression to the mean (statistical regression) - things naturally improve on their own; people in crisis don't stay in crisis
- o Selection bias- make sure control and experimental groups are comparable
- o Ambiguity about the direction of the causal influence- making sure the IV happened in time before the DV
what is needed to Establish a causal relationship? (7)
- * Association- showing the IV and DV are associated (go together) either positively or negatively
- * Time Priority- IV happened BEFORE the DV
- * This happens through logic, manipulation of the IV (experiment)
- * Ruling out alternative explanations - control other variables
- * Sampling- choose not to use participants that may give alt explanations
- * Random Assignment
- * Statistical Analysis- after study is complete
- * Theoretical explanations - if the association between variables can be explained by a theory, it's more likely that there is a causal relationship
what are Attributes of a good research questions
- Mutually exclusive- only fit into one category ex. Age: 0-5, 6-10,11-15
- Exhaustive- all categories are represented ex. Gender: male, female, transgender
Single System Design compare what?
comparison of baseline to intervention phase.
what is Frequency distribution:
- * A description of the number of times the attributes of a variable are observed in a sample.
- Example: The report that 53% of a sample were men and 47% were women would be a simple example of a frequency distribution.
- o Another example would be the report that 15 of the cities studied had population under 10,000, 23 had populations between 10,000 and 25,000.
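A simple frequency distribution like the one above can be tallied in a few lines of Python. The response data below are invented to mirror the 53%/47% example.

```python
from collections import Counter

# Hypothetical sample of respondents' reported gender.
responses = ["men"] * 53 + ["women"] * 47

counts = Counter(responses)
total = sum(counts.values())
for category, count in counts.items():
    print(f"{category}: {count} ({100 * count / total:.0f}%)")
# men: 53 (53%)
# women: 47 (47%)
```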
what are the 6 Types of statistics that can be calculated with continuous data?
Mean, median, mode, range, variance, standard deviation
what is the definition of Mean:
mathematical average: to compute the mean, add the scores of all the cases and divide by the number of cases
what is the definition of Median:
The middle- the point on the scale below which lies 50% of the cases and above which lie 50%
what is the definition of Range:
difference between the largest and smallest responses
what is the definition of Mode:
most often- category of the variable OR the interval on the scale that contains the most cases
What is the definition of Variance:
the value represents how much the scores vary from the mean.
It is the sum of the squared deviations about the mean, divided by the number of responses.
what is the definition of standard Deviation:
the positive square root of the variance. It is the 'average' difference scores are from the mean.
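All six of these statistics can be computed with Python's standard statistics module. The scores below are made-up continuous data, and population (rather than sample) variance is used to match the "divided by the number of responses" definition above.

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical continuous data

mean = statistics.mean(scores)            # mathematical average
median = statistics.median(scores)        # the middle point of the distribution
mode = statistics.mode(scores)            # the most frequently occurring value
value_range = max(scores) - min(scores)   # largest response minus smallest
variance = statistics.pvariance(scores)   # average squared deviation from the mean
std_dev = statistics.pstdev(scores)       # positive square root of the variance

print(mean, median, mode, value_range, variance, std_dev)
# mean = 5, median = 4.5, mode = 4, range = 7, variance = 4, standard deviation = 2
```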
what is the definition of Reliability:
- that quality of a measurement method that suggests that the same data would have been collected each time in repeated observations of the same phenomenon.
- results in studies: r = 0-1 (closer to 1 is best)
- o For example: In the context of a survey, we would expect that the question "Did you attend church last week?" would have higher reliability than the question "About how many times have you attended church in your life?"
what are the ways to test reliability?
- Test-retests- If answers are consistent when given the same test at different points
- Split-half (coefficient alpha)- most common. Extent to which different samples of items from an instrument are consistent with the entire set. for example, Correlation of even number responses to odd number of responses
- Parallel Forms Reliability- Two groups, two different versions of same measurement instrument. Same instrument- questions asked in different order.
- Inter-rater Reliability- Making sure the observers, or clinicians delivering treatment are measuring same concept based on same measurement
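As a rough illustration of the split-half idea (the even/odd item split mentioned above), the sketch below correlates the two halves of a made-up instrument. It is only a sketch: the respondent scores are invented, and the Spearman-Brown correction and full coefficient alpha are not shown.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical instrument: each row is one respondent's scores on 6 items.
respondents = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 2, 1],
]

# Total each respondent's odd-numbered items and even-numbered items.
odd_totals = [sum(r[0::2]) for r in respondents]
even_totals = [sum(r[1::2]) for r in respondents]

# Correlate the two halves; values near 1 suggest the halves measure the same thing.
r = statistics.correlation(odd_totals, even_totals)
print(round(r, 2))
```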
what is Validity:
- A descriptive term for a measure that accurately reflects the concept that it's intended to measure.
- o For example, your IQ would seem a more valid measure of your intelligence than would the number of hours you spend in the library. Realize that the ultimate validity of a measure can never be proven, but we may still agree to its relative validity, content validity, construct validity, internal validation, and external validation.
Validity in measurement tools and scales-
purpose- does the measure reflect the concept intended
What are the types of validity
Face validity - Does it appear to measure the intended concept?
Content validity- Covers the theoretical domain of concept- group of experts review
Statistical power increases when:
- *large sample sizes are selected,
- * higher significance levels are used, and
- * a stronger relationship in the population is assumed.
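Those three points can be verified numerically. The sketch below uses a normal-approximation power formula for a two-sided, two-sample mean comparison (an assumption chosen for illustration, not a formula given in the flashcards): power rises with sample size, with a higher significance level, and with a larger assumed effect.

```python
from math import sqrt
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided, two-sample mean comparison."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size * sqrt(n_per_group / 2)
    return norm.cdf(noncentrality - z_crit)

print(round(approx_power(0.5, 30), 2))               # baseline: n = 30 per group
print(round(approx_power(0.5, 100), 2))              # larger sample -> more power
print(round(approx_power(0.5, 30, alpha=0.10), 2))   # higher significance level -> more power
print(round(approx_power(0.8, 30), 2))               # stronger assumed relationship -> more power
```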
what are the characteristics of Good abstracts in written papers
- * Immediately after the title page.
- * Briefly summarizes the study
- * Between 75-150 words
- * Begin with a sentence identifying the purpose of the research
- * Next, a sentence or two that summarizes the main features of the research design and methodology.
- * The main findings are then highlighted in one or two sentences, followed by a brief mention of any major implications.
what are the types of qualitative questions? (3)
- * Grand tour- General overview of topic
- * Example/story- Respondent provides a personal example or story of specific event
- * Structure or contrast- Respondent is asked to reflect on contrasting situations/responses
What are mail surveys?
* A mailed questionnaire that requires no return envelope
what are Acceptable response rates with mail surveys:
- o 50% adequate
- o 60% good
- o 70% very good
what is the definition of Retrospective baselines
o Looking back and asking the client to tell you about their behavior prior to treatment/intervention. Used when you aren't able to observe the baseline because of crisis or timing.
Correlation v. Causation:
- * In scientific perspective, we can NEVER EVER NEVER establish true causality
- * Association- Showing that the independent and dependent variables �go together� or vary systematically in relation to each other.
- o For example: If one is true, the other is true. As one goes up, the other goes up (or down).
- * But NOT: they have no relationship
- * Time priority- Showing that independent variable (A) preceded dependent variable (B)
what is Triangulation:
The use of more than one imperfect data-collection alternative in which each option is vulnerable to different potential sources of error
what are the Strengths qualitative research
- * Strengths:
- o Depth and richness of data
- o Research happens in natural setting
- o Ability to understand ongoing social processes (usually longer, able to see change over time)
what are the Weaknesses of qualitative research:
- o Instruments
- o Ethical Concerns
- o Sampling- non-probability
- o Little validity or reliability
what is causal inference:
o Implies that the IV has a causal impact on the DV
What are the 6 characteristics for ensuring rigor?
- 1. Prolonged Engagement - multiple interviews or a lengthy stay in the field
- 2. Triangulation - Having multiple sources of data
- 3. Peer Debriefing - Consult other researchers to monitor bias
- 4. Member Checking - Verification of interpretations with respondents
- 5. Negative Case Analysis - Search for cases that refute your interpretation
- 6. Audit Trail - Document everything: steps taken, decision points, interpretations, etc.
what part does the IV play in research design?
Independent variable is attribute variable |
Back when I was teaching middle school math (in technically a different century ... that's sad) we used to do a very hands-on activity to teach unit rates. The students would use rulers to measure a partner's facial features, then put the measurements into unit rates to see how close they were to the Greek Golden Ratio. It was a really fun activity, but definitely one that would benefit from a technology update. (Hint: Middle schoolers plus wooden rulers plus classmates' faces are not always a good mix.)
So, I have updated the activity with the use of Google Docs, webcams, and a digital ruler web app. See below for all the details on how the "Golden Ratio Face" project works, as well as access to all the needed templates and resources.
The Golden Ratio was also applied to the human body, and more specifically the human face. For example, the height of your head, when compared to the width of your head, should be about 1.62 to 1. Many other features on the face could be compared the same way.
In this activity students will do the following:
- Get a copy of the "Golden Ratio Face" Google Docs template.
- Add a picture of their face (or someone else's) via their webcam or by inserting an image.
- Use a digital ruler web app to measure 12 different facial features.
- Put those measurements into 7 ratios.
- Convert those ratios into unit rates.
- Find the difference between each one and the Golden Ratio.
- Find the average difference for all 7 unit rates.
- See how close their face (or whichever face was used) is to the Golden Ratio.
Some of the math concepts covered in this activity include:
- Measuring to the nearest tenth of a centimeter
- Creating ratios
- Converting ratios to unit rates
- Subtracting decimals
- Dividing (averaging) decimals
The "Golden Ratio Face" Template
Below is a link to make your own copy of the "Golden Ratio Face" Google Docs template. Each student will need their own copy of this Doc. Students can use the link below, or you can push out a copy to each student through Google Classroom.
- "Golden Ratio Face" Template - Google Docs link
Step #1 - Insert Your Face
To do this activity each student will need a picture of a face. A fun way to engage the students can be to have them use their own face. If using a Chromebook or other device with a webcam, the student can insert their snapshot directly into the Doc as follows:
(Note: Google has updated the "Insert Image" options in Docs, and the "Take a Snapshot" option has been removed at the moment. Hopefully it will get added back in, but until that happens, please see my other blog post for an alternative: Take a Snapshot Alternative for Docs, Slides, and Drawings )
- Scroll to page 2 of the "Golden Ratio Face" Google Doc and click below the line that reads "Insert image of your face below".
- Now click "Insert" then "Image".
- In the window that opens up, choose the "Take a snapshot" tab.
- If you have not used the webcam before, you may need to allow permissions for it to work.
- Position your face so you are looking directly forward, and not at an angle, so your measurements will be more accurate.
- Click the "Take snapshot" button to take your picture.
- If you like it, click "Select" to insert it into the Doc.
Instead of using a webcam, you can also add an image from another source, such as uploading a previously saved image. Simply click "Insert" then "Image" then "Upload".
If the student does not want to use their own face, they could certainly use an image of someone else, such as a celebrity or someone from history. The key is to find an image where the person is looking straight forward so accurate measurements can be made.
Whatever method is used to insert an image, you may want to crop and resize the image after inserting it to make the face as large as possible for easy measuring.
- To crop the image, simply double-click on the image, then adjust the black crop lines on the edges. When done, click outside of the image or press the Escape key.
- To resize the image, simply select the image, then click and drag the corners as needed.
Note: Since students can be sensitive about their appearance, you may want to include some class discussion on the subjectivity of beauty. The ancient Greeks' use of the Golden Ratio for beauty was just one idea at one time in history. This should be a fun activity, and not something that would make a student feel bad about themselves in any way.
Step #2 - Measure Your Face
Now that you have your face (or a face) in the Google Doc, you will need to measure 12 specific facial features. Certainly this can be done with a physical hand-held ruler if you like. However, there are also digital alternatives that can be used (and might help avoid scratching the laptop screen).
One option is to use the Chrome web app called "Edge: The Web Ruler". Here's how:
- If not already installed, use this Chrome Web Store link to install the web app: Edge The Web Ruler
- Now launch the tool from your Chrome web apps list. This will open up a virtual floating ruler on your screen
- You can move the ruler around by simply clicking and dragging it.
- You can make the ruler longer or shorter by dragging the end of the ruler.
- You can switch between pixels (px), centimeters (cm), or inches (in) for your unit of measurement. I would recommend centimeters for this activity.
- You can make the ruler more accurate by entering the diagonal measurement of your actual screen, although this is not necessary since all the measurements will be converted into unit rates eventually.
- You can click the gear icon and choose "Always on top" to keep the ruler from falling behind your document.
- You can open another copy of the ruler by clicking on the horizontal or vertical ruler icons.
- When done using the ruler, click the "x" to close it out.
Measure the following 12 facial features, to the nearest tenth of a centimeter:
- A = Top of Head to Chin
- B = Top of Head to Pupil
- C = Pupil to Nose Tip
- D = Pupil to where Lips Meet
- E = Width of Nose
- F = Outside Distance between Eyes
- G = Width of Head
- H = Hairline to Pupil
- I = Nose Tip to Chin
- J = Lips to Chin
- K = Length of Lips
- L = Nose Tip to where Lips Meet
Step #3 - Calculate the Unit Rates
Now that you have all the facial feature measurements, you will want to plug those values into the 7 ratios on page 3 of the Doc.
- Enter the values for the "A" through "L" measurements in the ratios provided.
- Convert the ratios into unit rates by dividing the top and bottom of the ratio by the denominator.
- Round the answer to the nearest hundredth and write in the results as unit rates.
As an example, here are my measurements:
Step #4 - Find the Differences from the Golden Ratio
Next you will take each of the 7 unit rates and see how far off each of them is from the Golden Ratio. Basically you are seeing how close (or far) each of your ratios is to 1.62.
- At the bottom of page 3 of the Doc, find the differences between 1.62 and your unit rates.
- In each case subtract the smaller number from the larger number to get a positive difference.
As an example here are my numbers:
Step #5 - Find the Average Difference
Now that you know how far off from the Golden Ratio each of your ratios are, we now want to see on average how close you were.
- Add up your seven differences from Step #4.
- Divide the sum by 7 to get the average difference.
- Round your answer to the nearest hundredth.
- Enter this result in the "Final Answer" section. This number shows how far off you were on average from the Golden Ratio.
As you can see below, I am not quite a Greek God:
Checking Student Work
To save time when checking your students' calculations, you can use the Google Sheets template below. Simply enter the 12 facial measurements for a student, and the spreadsheet will calculate and display all of the results the student should arrive at for the activity.
To get your own copy of the template, click the link below:
- Golden Ratio Face - Calculations Check template - Google Sheets link
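If you would rather check the arithmetic with a short script than a spreadsheet, something like the Python sketch below also works. Note that the measurement values and the seven ratio pairings here are placeholders; substitute a student's actual measurements and the ratio pairs defined in your copy of the template.

```python
# Hypothetical measurements A-L in centimeters (replace with a student's values).
m = {"A": 18.2, "B": 9.5, "C": 4.4, "D": 6.5, "E": 3.4, "F": 9.0,
     "G": 11.3, "H": 5.8, "I": 8.7, "J": 4.3, "K": 4.6, "L": 2.1}

GOLDEN_RATIO = 1.62  # the activity rounds the Golden Ratio to 1.62

# Placeholder ratio pairings -- swap in the seven pairs from the template.
ratio_pairs = [("A", "G"), ("B", "D"), ("I", "J"), ("I", "C"),
               ("E", "L"), ("F", "H"), ("K", "E")]

differences = []
for top, bottom in ratio_pairs:
    unit_rate = round(m[top] / m[bottom], 2)        # convert the ratio to a unit rate
    diff = round(abs(unit_rate - GOLDEN_RATIO), 2)  # positive difference from 1.62
    differences.append(diff)
    print(f"{top}/{bottom} = {unit_rate}, off by {diff}")

average_difference = round(sum(differences) / len(differences), 2)
print("Average difference from the Golden Ratio:", average_difference)
```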
The "Golden Ratio Face" activity can be a fun way to have students work with ratios, unit rates, measurements, and decimal operations, while learning a bit about the history of the Golden Ratio in art and design (even if you don't turn out to have the proportions of a Greek god or goddess). Although this activity can be done without technology, the use of Google Docs, webcams, and virtual rulers can make the process more accurate, and reduce potential frustrations.
Post by Eric Curts. Bring me to your school, organization, or conference with over 50 PD sessions to choose from. Connect with me on Twitter at twitter.com/ericcurts and on Google+ at plus.google.com/+EricCurts1 |
Written for a science honors project Informative, comprehensive
In vertebrates, kidneys are the two major organs of excretion. Excess water, toxic waste products of metabolism such as urea, uric acid, and inorganic salts are disposed of by kidneys in the form of urine. Kidneys are also largely responsible for maintaining the water balance of the body and the pH of the blood. Kidneys play important roles in other bodily functions, such as releasing the erythropoietin protein, and helping to control blood pressure.
Kidneys are paired, reddish-brown, bean-shaped structures. They are about eleven centimeters long. Kidneys are located on each side of the spine, just above the waist. They are loosely held in place by a mass of fat and two layers of fibrous tissue. It is believed that the kidney first evolved in the original vertebrates, where freshwater organisms needed some means of pumping water from the body. The kidney became adept at reabsorbing glucose, salts, and other materials which would have been lost if simply pumped out of the body by a simple organ.
The cut surface of the kidney reveals two distinct areas: the cortex- a dark band along the outer border, about one centimeter in thickness, and the inner medulla. The medulla is divided into 8 to 18 cone-shaped masses of tissue named renal pyramids. The apex of each pyramid, the papilla, extends into the renal pelvis, through which urine is released from the kidney tissue. The cortex arches over the bases of the pyramids (cortical arches) and extends down between each pyramid as the renal columns.
Urine passes through the body in a fairly complex way. The initial site of urine production in the body is the glomerulus. The arterial blood pressure drives a filtrate of plasma containing salts, glucose, amino acids, and nitrogenous wastes such... |
The most important thing to remember when working with open pipes (what is here referred to as a pipe is any hollow cylinder, i.e. a tube, and "open" is defined as having no obstruction at either end) is that there are anti-nodes at the two ends. Thus, unlike a closed pipe, each successive overtone adds one-half of a wavelength to the pattern inside the pipe. This concept is very hard to understand without looking at the image below.
The simplest possibility is the "fundamental frequency," where there is one node in the center of the pipe. The pipe then holds 1/2 of a wavelength (λ/2). The next possibility is the "first overtone." Here there are two nodes (therefore n = 2) and the pipe holds 2/2 λ, or 1λ. Then comes the second overtone, with three nodes, which holds 3/2 λ.
The equation for finding the length of an open pipe is l = nλ/2
l = length of pipe, n = number of nodes
This equation can easily be rearranged to solve for wavelength: λ = 2l/n
Let's use this equation with a practice problem: Find the fundamental frequency of an open pipe that is 0.5 m long.
To solve this, we need to use two equations. First, solve for lambda:
λ = 2l/n = 2(0.5 m)/1 = 1 m
Then, substitute the 1 m for lambda in the equation f = v/λ, using 330 m/s for the speed of sound:
f = (330 m/s)/(1 m) = 330 Hz, our final answer. Great job!
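The same calculation can be scripted. The snippet below assumes 330 m/s for the speed of sound, as in the worked example above.

```python
SPEED_OF_SOUND = 330.0  # m/s, the value used in the example above

def open_pipe_frequency(length_m, n=1):
    """Frequency of the nth mode of an open pipe: wavelength = 2*l/n, f = v/wavelength."""
    wavelength = 2 * length_m / n
    return SPEED_OF_SOUND / wavelength

print(open_pipe_frequency(0.5))       # fundamental: 330.0 Hz
print(open_pipe_frequency(0.5, n=2))  # first overtone: 660.0 Hz
```
|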
The thyroid and types of thyroid cancer
The thyroid is a small gland in the front of the neck just below the voice box (larynx). It is made up of two parts, or lobes. It's one of a network of glands throughout the body that make up the endocrine system. This system is responsible for producing the body’s hormones that help to control and influence various functions.
Types of thyroid cancer
There are 4 main types of thyroid cancer:
Papillary is the most common type of thyroid cancer and is usually slow growing.
Follicular thyroid cancer is the second most common type. It's also usually slow growing.
Medullary is a rare type of thyroid cancer which sometimes runs in families.
Anaplastic is also a rare type of thyroid cancer, which is more common in older people.
Young people are mostly affected by the papillary and follicular types. |
There are many innovative toys for children on the market today, but one that continues to stand the test of time for its ability to encourage the whole child’s development over many years of growth is a ball.
Babies learn about the world through sensory integration. Balls are something babies can see, touch, and interact with, and better yet, their parents are their favorite teammate. When selecting balls for play with baby, you might choose textured balls or try slightly deflating the ball so it is easier for baby to grip. You might look for an "O ball" that baby can squeeze and grip. These are great for babies because they are soft and safe. According to the NAEYC, caregivers should provide play objects that are "made of materials and scaled to a size that lets infants grasp, chew, and manipulate them" (Developmentally Appropriate Practice in Early Childhood Programs, 3rd edition, Copple, C., and S. Bredekamp, eds., 2009).
Roll the ball with your baby while sitting or when baby is enjoying tummy time. This type of play encourages gross-motor development as baby reaches and grasps the ball with both hands. You will help baby to build finger strength and strengthen the muscles needed for sitting. Rolling the ball also helps to encourage visual tracking and supports hand –eye coordination. The parent / child time will also help the child to learn social skills of communication as the play goes back and forth between you and baby and it’s a wonderful time to bond with baby. Try singing as you roll the ball back and forth; “I roll the ball to you, you roll the ball to me, I roll the ball to you and you roll the ball to me.”
Parents of infants 3-6 months can try using a large exercise ball to stimulate baby. Try putting the ball against a wall and firmly holding it in place with your feet. Place a towel on the ball's surface, then place baby on the ball for tummy time. You can gently bounce the ball and slightly roll it from side to side. This is great for strengthening neck muscles.
One- to two-year-olds are ready to work on their eye/hand coordination. Parents can introduce catching and throwing; however, this involves a series of complicated movements and muscles to control. Toddlers may attempt to throw objects at around 18 months, but catching will wait till age 3 or 4 and resembles hugging the ball to their chest. With any new skill, it takes lots of practice. Parents can offer their throwers different types of objects such as bean bags, foam balls, and beach balls. Use baskets or boxes as the target, moving some close and some farther away. Parents can be more engaging with their toddler by sitting at the child's level and playing along.
Check out this quick reference guide on typical motor development milestones and this new app from The Learning Child, UR Parent. It is full of information for parents in the first year of your child's life. This app is geared to the specific age of your baby and offers information on child development and parenting from the University of Nebraska-Lincoln. The app also features a baby book for busy on-the-go parents. The UR Parent app is handy for keeping track of your immunization records on your phone, and it also allows you to record special events such as the date your baby takes their first step. With UR Parent, questions you have about taking care of your child are just a fingertip away.
Ages and Stages
Remember, every child develops at their own pace. The ages and stages mentioned earlier are an approximate range in developmental milestones. Parents can support their child’s growth and development by offering time and opportunity as well as safe balls to explore this gross motor play. NAEYC also tells us that caregivers should “allow toddlers freedom to explore their movements by testing what their bodies are capable of doing (Copple, C., and S. Bredekamp, eds. 2009).” Follow the child’s lead and continue the play as long as they are interested, but do not force this type of play. Your child will indicate to you when they have lost interest and are ready for the “7th inning stretch.”
What creative ways have you tried introducing balls in your routine with children?
LYNN DEVRIES, EXTENSION EDUCATOR | THE LEARNING CHILD
Make sure to follow The Learning Child on social media for more research-based early childhood education resources! |
Angle between line and plane
Determine the angle between the line with symmetric equations x=-y, z=4 and the plane 2x-2z=5.
so the direction vector for the sym equation is.. (1,1, ?)
for the z.. I tried with 1, but then I didn't get the answer. So I tried with z=4 and I got -30 degrees as the answer, and the back of the book has 30 degrees. Is this z value in the direction value correct? I don't think it is right because the final answer is positive, not negative 30. Also, z=4 as the sym equation doesn't indicate that the direction value should have a z=4..
I am confused about this. Can someone let me know what the direction value should be?
Here is my work below:
sin(theta)=(n . d)/ (|n||d|)
theta= - 30 degrees
the final answer is positive 30 degrees like I said, but I can't seem to get this... where is my error? Thanks in advance.
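For what it's worth, here is a quick numeric check of the formula, assuming the direction vector is (1, -1, 0) (from x = t, y = -t, z = 4) and taking the absolute value of the dot product so the angle comes out positive:

```python
import numpy as np

d = np.array([1.0, -1.0, 0.0])  # direction vector from x = t, y = -t, z = 4
n = np.array([2.0, 0.0, -2.0])  # normal vector of the plane 2x - 2z = 5

# Angle between a line and a plane: sin(theta) = |n . d| / (|n| |d|)
sin_theta = abs(np.dot(n, d)) / (np.linalg.norm(n) * np.linalg.norm(d))
theta = np.degrees(np.arcsin(sin_theta))
print(theta)  # ≈ 30 degrees
```
|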
Content Preparation for Teachers - State Tests
For state tests in English language arts, mathematics, science and social studies
Teachers can prepare their students for state tests by providing instruction related to all Ohio Learning Standards for their courses and grade levels. Although Ohio has developed new tests in English language arts and mathematics, the Ohio Learning Standards remain in these and all other subject areas. Posted below are resources outlining how educators can fully integrate Ohio’s Learning Standards into their classrooms using materials for their curriculum and instruction in each content area.
Sample items and practice tests
What are the test blueprints and specifications?
Test blueprints serve as guides for test construction and provide outlines of the content and skills to be measured. For each individual test, they contain the number of test items, the number of points on the test, and how learning standards are grouped to report the test results. Note that science and social studies blueprints are part of test specification documents, which include more information about the content the tests will assess.
Blueprints, specifications and resources by content area
Last Modified: 3/1/2017 11:49:07 AM |
Name: allison williams
Date: Around 1993
How does an electrical current work?
There are many volumes of books on the subject. Electrical current(s) are not only transmitted by wire, but also by other mediums, such as air in a device called a waveguide when dealing with UHF (Ultra High Frequency electricity) or microwave (even higher frequency).
The mechanics of electron (or hole) flow is and will continue to be a subject of considerable debate. The effect of that current is what we all know about. As I said, there are volumes and volumes of books on the subject.
In short, electrical current can flow only when there is a potential (commonly referred to as voltage) between two sources, and a path to equalize the potential. This potential can be created by many sources, including chemical energy, heat energy, and just about any force that can create substances that are charged more or less positively or negatively. A battery, for example, is made of two or more chemicals or elements that are reacting to cause an electrical potential between two elements. By connecting the two elements with a conductor of some type, or a "load," electrical current will flow in an attempt to equalize the differing electrical charges. When the chemical reaction is complete, and the charges have been neutralized, no more electrical current can flow.
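As a tiny numerical illustration of that idea, the sketch below uses Ohm's law for a simple resistive path (the answer above does not state this law explicitly; it is added here only as the simplest quantitative version of "potential drives current through a path").

```python
def current_amps(potential_volts, resistance_ohms):
    # Ohm's law: current is proportional to the potential across the path
    # and inversely proportional to the resistance of that path.
    return potential_volts / resistance_ohms

print(current_amps(1.5, 10))  # a 1.5 V cell across a 10 ohm load -> 0.15 A
print(current_amps(0.0, 10))  # no potential difference -> no current flows
```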
Sometimes the simplest questions can be the most difficult to answer.
Keep asking them.
Update: June 2012 |
In acid reflux, the stomach’s contents are regurgitated back up the esophagus. In addition to many other symptoms, this condition can cause burning and discomfort in the chest. Some evidence suggests that several hormones in the body may play a role in acid reflux. Estrogen is one example. This important hormone is responsible for the regulation and development of the female reproductive system, including the menstrual cycle. While the exact role of estrogen in acid reflux is not known, some evidence suggests that estrogen, together with other hormones, may have an aggravating effect.
Estrogen and Acid Reflux in Pregnancy
During pregnancy, the body experiences a surge in several important hormones, including estrogen. The hormone helps a mother’s uterus maintain the pregnancy and also stimulates development in the fetus. Some evidence suggests that this hormone surge may also contribute to acid reflux. It is theorized that this is because estrogen may cause a valve in the esophagus, called the lower esophageal sphincter (LES), to relax. This relaxation allows the contents of the stomach to reverse course and travel back up the esophagus. As a result, acid reflux occurs, and the individual can experience heartburn, difficulty swallowing, coughing and nausea.
Estrogen, Hormone Replacement Therapy and Acid Reflux
In hormone replacement therapy, estrogen, along with other hormones, is given during menopause or afterward. Such treatments can help minimize symptoms like hot flashes or vaginal dryness. It also helps prevent bone loss, which can occur due to the sharp drop in estrogen after menopause. Unfortunately, the supplemental estrogen received when undergoing hormone replacement therapy, along with another hormone called progesterone, may also lead to acid reflux. Some research suggests that this may be due to relaxation of the LES, though other mechanisms are possible. According to a 2008 study in the "Journal of the American Medical Association," the risk of GERD symptoms increased with the dose and the duration of estrogen use.
Estrogen, Obesity and Acid Reflux
Obesity is associated with an increased risk of acid reflux. Being overweight is believed to contribute to reflux in a number of different ways. The effect of extra body fat on estrogen levels might be one of them. Circulating estrogen levels tend to be higher in overweight and obese females, especially after menopause. If such elevation causes a loosening of the LES, women with a higher body mass index may be at greater risk for developing acid reflux for a number of reasons. Maintaining a healthy BMI can help avoid this.
Estrogen, Birth Control and Acid Reflux
Many women use contraceptives as means of birth control. Whether taken orally, administered via a patch, injected or implanted within the body, many of these contraceptives use estrogen to limit fertility. Some researchers theorize that the hormones in these contraceptives may lead to an increase in the risk of developing acid reflux. According to the authors of a 2007 study in the "Journal of Gastroenterology and Hepatology," a relationship has been found between the use of oral contraceptives and the development of acid reflux.
Side Effects of Estrogen and Acid Reflux
The use of estrogen, whether in hormone replacement therapy or birth control, can lead to many side effects, including nausea and vomiting, and some of these symptoms are similar to those associated with acid reflux. An increase in any of these symptoms may be serious and should not automatically be attributed to heartburn. In addition, long-term use of estrogen can place a person at an increased risk for blood clots, heart disease, stroke and some forms of cancer. Thus, it is very important for people to report any symptoms to their medical provider. |
Cause and Effect Diagram
Everything has its causes. There are many options to capture a problem's causes. One effective way to sort these different ideas and stimulate the team's brainstorming on root causes is the cause and effect diagram, also known as fishbone diagram.
What Cause and Effect Diagram Is
Cause and Effect Diagrams are also known as Fishbone Diagrams, Ishikawa Diagrams, Herringbone Diagrams, and Fishikawa Diagrams. They are causal diagrams created by Kaoru Ishikawa (1968) that show the causes of a specific event. Refer to the following example for better understanding. This diagram is created by Edraw, an all-in-one diagramming tool.
When to Use Cause and Effect Diagram
The fishbone diagram can be applied when you want to:
- Focus attention on the causes of one specific issue or problem.
- Focus the team on the causes rather than the symptoms.
- Organize and demonstrate visually the various theories about what the root causes of a problem may be.
- Show the relationship of various factors contributing to a problem.
- Reveal important relationships among various variables and possible causes.
- Provide additional insight into process behaviors.
- Display the sequence of related factors.
- Present the incidence of certain elements.
Prerequisites or Limitations of Cause and Effect Diagram
- The problem is composed of a limited number of causes, which are in turn composed of sub causes.
- Distinguishing these causes and sub causes is a useful step to deal with the problem.
How to Construct Cause and Effect Diagram
When you construct a Cause-and-Effect Diagram, you are building a structured, graphic display of a list of causes organized to show their relationship to a specific result. Notice that the diagram has a cause side and an effect side. The steps for analyzing and diagramming a Cause-and-Effect process are outlined below.
Step 1 - Identify and clearly define the outcome or EFFECT to be analyzed. This is also the problem to be solved or the purpose of analysis.
Step 2 - Prepare the SPINE and EFFECT box. Traditionally, this is drawn by hand. Nowadays, there are more advanced tools with ready-made cause and effect diagram templates; with a tool like Edraw, users only need to drag and drop templates.
Step 3 - Find out the main CAUSES contributing to the object being studied. These are the labels for the major branches of your diagram and become categories under which to list the many causes related to those categories. Use category labels that make sense for the diagram you are making, such as the 3Ms and P - methods, materials, machinery, and people.
Step 4 - For each major branch, list other specific factors which may be the CAUSES of the EFFECT.
Step 5 - Identify increasingly more detailed levels of causes and continue organizing them under related causes or categories. You can do this by brainstorming. NOTE: You may need to divide your diagram into smaller diagrams if one branch has too many sub-branches. Use a hyperlink to connect another diagram. Any main cause (3Ms and P, 4Ps, or a category you have named) can be regarded as an effect.
Step 6 - Analyze the diagram. Analysis helps you identify causes that warrant further investigation. By analyzing, you figure out the relationships and can then find a better strategy or solution.
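Independent of any particular drawing tool, the underlying structure of a cause-and-effect diagram is just one effect with causes grouped under category branches. The text-only Python sketch below uses made-up categories and causes purely for illustration.

```python
# The effect (problem) and its major branches with sub-causes -- all illustrative.
effect = "Late deliveries"
branches = {
    "People":    ["Insufficient training", "Unclear responsibilities"],
    "Methods":   ["No standard routing process"],
    "Machinery": ["Frequent vehicle breakdowns"],
    "Materials": ["Packaging delays from supplier"],
}

# Print the diagram as an indented outline: the effect, then each branch and its causes.
print(f"EFFECT: {effect}")
for category, causes in branches.items():
    print(f"  {category}")
    for cause in causes:
        print(f"    - {cause}")
```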
Learn how to make a fishbone diagram here. |
The Unbalanced Magnetron
An unbalanced magnetron possesses stronger magnets on the outside resulting in the expansion of the plasma away from the surface of the target towards the substrate. The effect of the unbalanced magnetic field is to trap fast moving secondary electrons that escape from the target surface. These electrons undergo ionizing collisions with neutral gas atoms away from the target surface and produce a greater number of ions and further electrons in the region of the substrate considerably increasing the substrate ion bombardment. Effectively a secondary plasma is formed in the region of the substrate. When a negative bias is applied to the substrate, ions from this secondary plasma are accelerated to the substrate and bombard it; this ion bombardment is used to control the many properties of the growing film. To find out more about the properties of the growing film click coating nucleation and growth.
In an unbalanced magnetron, the outer north poles are stronger than the inner south poles; the field lines therefore stretch further into the vacuum chamber.
With the development of the unbalanced magnetron the substrate ion current that could be achieved and therefore the quality of the coatings increased dramatically but more was to come with the development of multi-magnetron geometry with magnetic field linkage.
Closed-field Magnetron Sputtering
The magnets within the magnetrons are arranged such that alternating poles are next to each other resulting in the linkage of field lines. This prevents electrons escaping to the chamber walls resulting in much higher ion current densities and dense, hard well adhered coatings.
A closed-field magnetic arrangement. The magnets form an electron trap to increase the level of ionization.
To find out exactly what a magnetron is click the link. |
Science isn't only about new inventions, new technology and new medicines. Science is important because it satisfies our curiosity about the world we live in. It is the best way we know of to banish ignorance with knowledge. Nearly every aspect of human life has been changed by science: health, food, and war to name a few. Without science, there would be no computers, no internet, and you certainly wouldn't be reading this! At the Science department at Alder we believe that every student has the right to gain an understanding of the importance of science and its relevance to their lives. We regard it as a privilege to be able to share our passion for science with our students. The Science department comprises 8 teachers (including our Head teacher and an Assistant Head teacher) and 2 experienced technicians. Each member of the department is incredibly dedicated and has a real passion for science.
At Key Stage 3 emphasis is placed on allowing students to explore for themselves. Students are given the opportunity to develop their skills and knowledge through a practical based scheme of work which places emphasis on the importance of analytical and critical scientific thinking. Classes are taught in ability sets. This facilitates effective differentiation and helps us to provide each student with a programme of study which reflects their individual needs. The scheme of work is split into the following individual topics:
Year 7: “Earth and Space”, “Why are We Different?”, “Be Reactive”, “What are Things Made Of?”, “Staying Alive”, “How Things Move” and “Using Energy”.
Year 8: “Magnets”, “Systems for Survival”, “What’s in a Reaction?”, “Light and Sound”, “Changing Earth”, “How We Stay Healthy”, “Heating and Cooling” and “How Living Things Interact”.
Year 9: “Are you Fit?”, “Energy and Conservation”, “Chemical Reactions”, “Earth and Space”, “Calculating Forces”, “Upsetting the Balance” and “Environmental Chemistry”.
Each topic has associated teaching software such as video clips, interactive roleplay activities, informative presentations, scene investigation activities and web links. We appreciate just how much students enjoy practical investigations and demonstrations, and include them in our lessons regularly. Students are provided with the opportunity to develop their ICT skills by using data logging equipment, voting handsets and the departmental suite of laptops.
At Key Stage 4 we offer a number of different courses. This enables us to provide every student with an option which is appropriate to their ability and aspirations.
The majority of students will study 2 Science GCSEs. GCSE students follow the Edexcel GCSE Science syllabus. They study Core Science in Year 10 and Additional Science in Year 11. The Core Science GCSE course places an emphasis on scientific literacy – the knowledge and understanding which students need to engage, as informed citizens, with science-based issues. The qualification uses contemporary, relevant contexts of interest to students.
Topics include: “Classification, variation and inheritance”, “Responses to a changing environment”, “The Earth’s sea and atmosphere”, “Acids”, “Obtaining and using metals”, “Fuels”, “Visible light and the solar system”, “The electromagnetic spectrum” and “Waves and the Earth”.
The Additional Science GCSE course is a concept-led course developed to meet the needs of students seeking a deeper understanding of basic scientific ideas. The course focuses on scientific explanations and models, and gives students an insight into how scientists develop scientific understanding of ourselves and the world we inhabit.
Topics include: “The building blocks of cells”, “Organisms and energy”, “Common systems”, “Atomic structure and the periodic table”, “Ionic compounds and analysis”, “Chemical reactions”, “Static and current electricity”, “Motions and forces” and “Nuclear fission and nuclear fusion”.
The more able students are given the opportunity to study separate Science GCSEs. They can work towards separate Biology, Physics and Chemistry GCSE qualifications. Each provides an opportunity for further developing an understanding of science explanations, how science works and the study of elements of applied science, with particular relevance to professional scientists.
Topics covered include:
“Control systems”, “Behaviour”, “Biotechnology”, “Qualitative Analysis”, “Quantitative Analysis”, “Electrolytic processes”, “Gases, equilibria and ammonia”, “Organic chemistry”, “Radiation in treatment and medicine”, “X-rays and ECGs”, “Kinetic theory and gases” and “Motion of particles”.
The Science department is committed to raising attainment, and teachers use a number of strategies to achieve this goal. We use diverse teaching and learning techniques, offer revision booster classes for GCSE students prior to Year 10 and Year 11 terminal exams, regularly use praise, rewards and positive reinforcement and offer the following extra-curricular enrichment activities:
- Year 7 Extra-Curricular Science Club
- Year 7 and Year 8 entry into the National Salters’ Festival of Chemistry
- Year 10 nomination for the Salters’ Chemistry Camp
- Year 7 visit to Chester Zoo
- Year 7 Wild Roadshow visit
- Year 10 visit to “GCSE Science Live!” Event
- Year 8 Study Experience to Disneyland Resort Paris |
Dictionary Meaning and Definition on 'Idiom'
- a manner of speaking that is natural to native speakers of a language [syn: parlance]
- the usage or vocabulary that is characteristic of a specific group of people; "the immigrants spoke an odd dialect of English"; "he has a strong German accent" [syn: dialect, accent]
- the style of a particular artist or school or movement; "an imaginative orchestral idiom" [syn: artistic style]
- an expression whose meanings cannot be inferred from the meanings of the words that make it up [syn: idiomatic expression, phrasal idiom, set phrase, phrase]
- Idiom \Id"i*om\, n. [F. idiome, L. idioma, fr. Gr. ?, fr. ? to make a person's own, to make proper or peculiar; prob. akin to the reflexive pronoun ?, ?, ?, and to ?, ?, one's own, L. suus, and to E. so.]
- The syntactical or structural form peculiar to any language; the genius or cast of a language. Idiom may be employed loosely and figuratively as a synonym of language or dialect, but in its proper sense it signifies the totality of the general rules of construction which characterize the syntax of a particular language and distinguish it from other tongues. --G. P. Marsh. By idiom is meant the use of words which is peculiar to a particular language. --J. H. Newman. He followed their language [the Latin], but did not comply with the idiom of ours. --Dryden.
- An expression conforming or appropriate to the peculiar structural form of a language; in extend use, an expression sanctioned by usage, having a sense peculiar to itself and not agreeing with the logical sense of its structural form; also, the phrase forms peculiar to a particular author. Some that with care true eloquence shall teach, And to just idioms fix our doubtful speech. --Prior. Sometimes we identify the words with the object -- though be courtesy of idiom rather than in strict propriety of language. --Coleridge. Every good writer has much idiom. --Landor. It is not by means of rules that such idioms as the following are made current: ``I can make nothing of it.'' ``He treats his subject home.'' Dryden. ``It is that within us that makes for righteousness.'' M.Arnold. --Gostwick (Eng. Gram. )
- Dialect; a variant form of a language. Syn: Dialect. Usage: Idiom, Dialect. The idioms of a language belong to
Wikipedia Meaning and Definition on 'Idiom'
An "idiom" is a word or phrase which means something different from what it says - it is usually a metaphor. Idioms are common phrases or terms whose meanings are not literal, but are figurative and only known through their common uses.
Because idioms can mean something different from what the words mean it is difficult for someone not very good at speaking the language to use them properly. Some idioms are only used by some groups of people or at certain times. The idiom shape up or ship out, which is like saying improve your behavior or leave if you don't, might be said by an employer or supervisor to an employee, but not to other people.
Idioms are not the same thing as slang. Idioms are made of normal words that have a special meaning known by almost everyone. Slang is usually special words that are known only by a particular group.
Words and phrases related to 'Idiom' |
The lowland tropics were once thought to be filled with widespread species, while moderate and higher elevations were thought to contain species with more restricted distributions. That idea is turning out to be partially incorrect. Widespread species now appear to be the exception, instead of the rule. A new study describes four species once considered to be the collared treerunner, a lizard known to the scientific community as Plica plica. The study was published in the open access journal ZooKeys.
The collared treerunner was originally described in 1758 and has been the subject of many biological, ecological, and behavioral studies in recent years. A new ZooKeys paper by John C. Murphy, Field Museum (Chicago), and Michael J. Jowers, Estación Biológica de Doñana (Sevilla, Spain), describes four new species formerly thought to be one.
"The collared treerunner was considered a single species ranging from Trinidad and Tobago and northern Venezuela southward into the Amazon Basin, south of the Amazon River," Murphy said. "The treerunner's ancestor diverged about 25-30 million years ago, and throughout this time the South American continent has undergone dramatic remodeling, including the rise of the Andes, rising and falling sea levels, and changing climates that isolated populations for long periods of time, allowing them to become new species. Treerunners live on vertical surfaces, such as tree trunks, rock walls, and even buildings, and they eat a variety of insects."
The new paper focuses on populations of this lizard in northern South America, but in an overall survey the authors examined specimens from across the Amazon basin and suspect there may be at least another five to seven undescribed species in what is currently considered the collared treerunner. The treerunners from Trinidad and northern Venezuela were 4.5% genetically different from those in southern Venezuela, and more than 5% different from those in Brazil. For comparison purposes humans and chimpanzees are less than 2% genetically different.
While some species may form by genetic divergence without showing any morphological differences from their ancestor, others often show subtle or obvious morphological differences that may be quite easy to detect. The latter is the case with the collared treerunners.
Some had as few as 92 scales around the body while others had 202 scales around the body. Some adult males have yellow heads while others have red heads; some have distinctive patterns of spots while others have transverse bands.
Unraveling cryptic species is important for a more complete understanding of biodiversity, evolution, and for long term conservation efforts.
The take-home message here is that there are many more species of squamate reptiles (lizards and snakes) in the world than previously thought, and it is likely many species have disappeared, and will disappear, before science is even aware of them. Cutting forests and draining swamps undoubtedly causes extinctions of the species depending upon those habitats. While none of the treerunners described in this paper are likely to be threatened with extinction, this discovery and many other similar recent discoveries suggest our knowledge of biodiversity is lacking.
Murphy JC, Jowers MJ (2013) Treerunners, cryptic lizards of the Plica plica group (Squamata, Sauria, Tropiduridae) of northern South America. ZooKeys 355: 49. DOI: 10.3897/zookeys.355.5868 |
Presented by Burnaby Mountain PAC – The Mindful Teen: Promoting Mindfulness and Social-Emotional Learning.
Mindfulness means “Paying attention in a particular way: On purpose, in the present moment, and nonjudgmentally” (Kabat-Zinn). The field of mindfulness-based interventions for adolescents is currently exploding. Emerging mindfulness-based interventions for youth are showing significant promise in helping adolescents to cope with adversity, and promote resilience and positive youth development. Within education, mindfulness can be a key component of Social-Emotional Learning (SEL). SEL helps youth “acquire and effectively apply the knowledge, attitudes, and skills necessary to understand and manage emotions, set and achieve positive goals, feel and show empathy for others, establish and maintain positive relationships, and make responsible decisions” (CASEL.org). In this interactive presentation, Dr. Vo will discuss science and practice of mindfulness-based interventions with adolescents; share practical mindfulness exercises that educators can use in their schools and personal self-care strategies; and share mindfulness resources for youth, families, and professionals. |
The Cochlear implant is an extraordinary technology that has significantly improved the lives of tens of thousands of children and adults with severe to profound hearing loss who do not derive enough benefit from hearing aids. While results may vary, the majority of people who receive cochlear implants are able to hear well enough to use hearing to understand speech and to talk on the telephone. Most young children hear well enough to learn spoken language and be successfully mainstreamed for school and play.
What is a cochlear implant, how does it work?
When a patient is told that he/she has a "nerve" hearing loss, in truth, it is almost never the nerve that is not working. The problem almost always lies in the microscopic hair cells that convert mechanical sound to electrical energy within the organ of hearing (see "How We Hear"). A cochlear implant bypasses the sick hair cells and directly, electrically, stimulates the healthy nerve endings under the hair cells.
A cochlear implant has two basic parts: an external part that is similar to a hearing aid, and an internal part called the receiver-stimulator that is surgically implanted in the bone behind the ear. The receiver stimulator has an antenna imbedded in it, with a magnet in the center of the antenna. An electrode array is connected to the front of the receiver stimulator and this electrode array is surgically implanted in the cochlea (organ of hearing). It is this electrode array that sends the electrical signal that bypasses the sick hair cells.
The external part of the cochlear implant, like a hearing aid, has a microphone to pick up sound. The sound goes into a special speech processor where a digital signal is created. The digital signal travels through a cable to an external antenna, which has a magnet in its center. The external magnet aligns with the implanted magnet and allows the external antenna to sit directly over the internal antenna. Thus, the digital signal from the outer antenna goes through the intact skin to the inner antenna. From the inner antenna the signal goes to the receiver-stimulator, which determines which electrodes in the cochlea should be stimulated (see illustration).
Who is a candidate for a cochlear implant?
There are two parts to the cochlear implant evaluation, medical and audiologic. The cochlear implant surgeon must make sure that there is no infection, tumor or other abnormality that would prevent successful implantation and use. This evaluation usually includes imaging of the inner ear, which may be a computerized tomogram (CT scan) or magnetic resonance imaging (MRI).
The second part of the evaluation is performed by the nonphysician members of the cochlear implant team, and it includes:
- Hearing tests: Every patient must see the audiologist and have a hearing evaluation to confirm the degree of hearing loss. Patients will be tested with their hearing aids. If the audiologist believes that the hearing aids are not performing as well as possible, or if they believe that there may be better hearing aids for the patient, the audiologist will try different hearing aids. Once it is determined that hearing aids are not providing sufficient benefit, cochlear implants are discussed.
- Speech-Language Pathologists: All children will be evaluated by speech language pathologists as part of the cochlear implant evaluation process. This evaluation helps understand the child's skills and plan for habilitation after implantation. The speech-language evaluation assesses all aspects of a child's ability to comprehend and to formulate verbal communication.
- Educational consultation: All children will receive an educational consultation as part of the implant evaluation process. This consultation is intended to assist families in obtaining the best possible services for their children. School placement, early intervention, and individual therapy will be discussed. Families will be assisted in determining which programs are best for their individual child.
- Social Worker: Families of children considering implantation will meet with a social worker to help answer questions and discuss concerns.
- Team meeting: After the evaluations are complete, the team members often meet to discuss the patient's test results. Each patient's needs and the appropriate way to address those needs is discussed.
- Device selection: Once it has been determined that a patient is a candidate for a cochlear implant, a device demonstration will be scheduled. There are multiple companies that manufacture cochlear implants and the device demonstration helps the patient and family decide which device best meets their needs.
What is involved with cochlear implant surgery?
Cochlear implant surgery is performed in the hospital, under a general anesthetic. Before surgery, every patient must obtain a medical letter of fitness for general anesthesia, from their pediatrician or family practitioner. The cochlear implant surgery takes about two to three hours. Very little hair is shaved behind the ear as the incision is small, usually just in the crease behind the ear. After surgery the patient is in the recovery room (PACU - post anesthesia care unit) for about one hour. Patients then return to the ambulatory care area and most patients are discharged on the day of the surgery.
The risks of cochlear implant surgery include:
- The risk of general anesthesia: Anesthesia risk is based upon each patient's medical history. In general, with cochlear implant surgery, the risk of anesthesia is very low. Special risks will be discussed by the anesthesiologist before surgery.
- Lack of guarantee as to hearing success with the cochlear implant: Although our expectation is that the cochlear implant will restore hearing, no guarantee can be offered as to how much hearing will be restored or how happy any given patient will be with the results.
- Risk of injury to the facial nerve: The facial nerve moves the muscles on the side of your face that controls smiling, closing the eye and wrinkling the forehead. If this nerve is injured you can have weakness or paralysis of the muscles of your face on the side of the surgery. This is a rare complication of cochlear implant surgery and is minimized by using a computer, during the surgery, to monitor the facial nerve.
- Strange tastes in the mouth: The nerve that helps provide taste to the tongue goes through the ear and is often cut during various ear surgeries and during cochlear implant surgery. Despite the fact that the nerve is cut, it usually causes no symptoms. Occasionally, there is a strange taste for several weeks. Rarely, the strange taste persists.
- Wound infection: As with any surgery, there is a small risk of postoperative infection. This may necessitate treatment with antibiotics.
- A lump behind the ear: There is the possibility that you will feel the receiver stimulator under the skin behind your ear after your cochlear implant surgery.
- Numbness around the ear: The area around your implant ear may feel numb or stiff. This usually resolves within three months of surgery.
- Cerebrospinal fluid leakage: Rarely, a leak of cerebrospinal fluid (CSF) can occur from the inner ear. If this occurs, additional surgical treatment may be necessary.
- Tinnitus: You may develop tinnitus (ringing in the ear) after surgery.
- Dizziness: You may have some dizziness after the surgery, which is usually mild and transient.
- Meningitis: There is a small increase in the incidence of meningitis in children and adults with cochlear implants. Meningitis is a bacterial infection of the membrane surrounding the brain, which can be a very serious, potentially life threatening complication. There are vaccinations available to minimize the chance of developing meningitis. The Centers for Disease Control (CDC) of the Federal Department of Health and Human Services and the American Academy of Otolaryngology - Head and Neck Surgery strongly recommend that all patients about to receive or who have received a cochlear implant be vaccinated for the bacteria that causes meningitis. There are several vaccines available, which can be administered by your primary care physician, otolaryngologist or pediatrician, one for young children and another for older children and adults. The suggested vaccines are at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5909a2.htm
What is postoperative programming (mapping)?
Patients are seen by the audiologist about three to four weeks after surgery to turn on or activate the external component of the cochlear implant. The initial visit lasts about two hours. Further appointments are scheduled as needed. The goal of programming (mapping) the speech processor is to customize the cochlear implant for each individual patient's hearing needs so that the patient hears as well as possible. |
Digital cameras have become common devices found on such electronic products as smartphones and tablets. With a push of a button, we're able to capture memorable moments, sporting events, and awesome maker projects to share with family and friends.
In the past, building a basic digital camera required extensive knowledge of photography and electronics. Today, building a digital camera is as easy as combining a Raspberry Pi, a camera board, and a small amount of Python code.
In this project, you will learn how to build a Raspberry Pi camera (Pi Camera). The components to build the Pi Camera are provided in the Parts Lists. The Pi Camera basic components, provided in a block diagram, are shown in Figure 1.
Figure 1: The Pi Camera block diagram
- Raspberry Pi (B, B+, Pi2 or the Pi 3)
- Raspberry Pi Camera Board v2 - 8 megapixels
- Adjustable Pi Camera Mount
- Toy telescope kit
- Third Hand
Before we discuss the construction details of the Pi Camera, let's review digital camera basics.
Digital Camera Basics
The technology behind the digital camera is quite interesting because of a small integrated sensor circuit called an image detector. Unlike an ordinary 35mm camera, digital cameras have no photographic film for storing the object's image. The typical digital camera can capture a target object and record it as an image using digital technology.
The small integrated sensor known as a Charge-Coupled Device, or CCD, captures an object's reflected light rays and converts them into numerous pixels, or picture elements. The pixels are converted into binary data and stored inside the camera's memory. Figure 2 summarizes the conversion process:
Figure 2: The image detector object-to-picture conversion process
Today's digital camera image detectors have been improved by using CMOS (Complementary Metal Oxide Semiconductor) image sensors.
The advantage a CMOS image sensor has over CCD technology is its low power consumption during the object-to-picture conversion process. CCDs generally require more power than CMOS image sensors; for example, a 5 megapixel CMOS image sensor typically consumes only 200 to 400 mW. With most handheld electronics running on batteries, this reduction in power is very important to the consumer. Another advantage of CMOS is its simpler integration: CCDs require additional circuit support from external components like ADCs and analog ICs, along with complex timing signals to provide the proper operational sequences for pixel-to-picture conversion. A CMOS sensor, by contrast, can integrate so much signal-processing circuitry that the chip can simply output digital image data. Now that we have a basic understanding of digital cameras, CCDs, and CMOS image sensors, let's configure our Raspberry Pi for camera operating mode.
Pi Camera Configuration
For the Raspberry Pi to be able to work as a camera, its imaging feature needs to be enabled. Luckily, the Raspberry Pi has a variety of features and functions that can be enabled using the computer's configuration tool.
To obtain the Raspberry Pi's configuration tool, open an LXTerminal window session and type the following Linux command after the "$" prompt:
sudo raspi-config
Figure 3 shows an LXTerminal window session with the Linux sudo raspi-config command displayed on the screen:
Figure 3: Opening the Raspberry Pi configuration tool using the sudo raspi-config linux command
After typing the configuration tool command and hitting the enter key, a new window will appear on the monitor's screen as shown next. Using the down arrow on your keyboard, select the "Enable Camera" option:
Figure 4: Turning on the Raspberry Pi camera feature inside the configuration tool
With the selection made, the configuration tool will ask if you want to enable the camera option:
Figure 5: Enabling the Raspberry Pi Camera inside the configuration tool
The final step in saving the camera feature is to reboot the Raspberry Pi:
Figure 6: Saving the camera feature by rebooting the Raspberry Pi
Your Raspberry Pi camera feature is ready. Congratulations! You can now build your Pi Camera.
The following section will provide the construction notes for building and testing your Pi Camera.
Building the Pi Camera
Now that the camera feature is enabled on your Raspberry Pi, let's attach the camera module to it. Basically, the camera module is easy to attach to the Raspberry Pi.
On the Pi, there are two connectors: one for an LCD and the other for the camera module. You will insert the camera module's flat ribbon cable into the tiny connector labeled CSI (CAMERA), as shown in the next figure.
Figure 7: The camera's flat ribbon cable inserted into the onboard connector
With the camera module attached to the Raspberry Pi, you can test the device's electrical connection. Type the following Linux command within the LXTerminal window to take a simple picture.
raspistill -o image.jpg
The picture named "image.jpg" will be stored in your Raspberry Pi's home/pi directory. Here's an example of a picture that was taken with my Raspberry Pi.
Figure 8: A slayer exciter circuit picture captured with my Raspberry Pi
If you have an image stored within your Raspberry Pi's home/pi directory, great! If not, check the ribbon cable connection and repeat the test.
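If the basic test works, you can also experiment with a few of raspistill's optional flags to control the capture. The one-liner below is only an illustrative sketch; the width, height, and delay values are arbitrary choices, not requirements of this project.

raspistill -w 1024 -h 768 -t 2000 -o test_image.jpg

Here -w and -h set the image dimensions in pixels, -t waits 2000 milliseconds before capturing (giving the sensor time to adjust to the light), and -o names the output file.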
To make the Pi Camera easier to position, the camera can be attached to a mount. The Adafruit Pi Camera mount is used to support the imaging sensor, as shown in Figure 9:
Figure 9: The camera is attached to the mount using four fasteners
Four fasteners are used to attach the camera to the mount. The adjustment of the camera's angle can be changed by repositioning the stand to the appropriate slots.
The next phase of the build requires attaching the camera module and mount assembly to a tripod or a supporting stand structure.
Since a tripod was not on hand, I decided to use my Third Hand. I carefully attached the camera mount assembly to the Third Hand using one of the alligator clips. Next, I placed the Third Hand on top of the Raspberry Pi to give the camera the proper viewing height.
Figure 10: The author's homebrew tripod built using Third Hand
Using the raspistill Linux command, I was able to take a picture of my PIC microcontroller-SNAP LED flasher project.
Figure 11: PIC microcontroller -SNAP circuit LED flasher project captured by the author's Pi Camera
To improve the appearance of the Pi Camera, I decided to insert the imaging module inside of a toy telescope kit (because I had a Poly-Optics kit made by Galt Toys lying around in my lab). Again, we are simply using the telescope as a convenient housing for the camera. If you actually want optical magnification, you need to have the camera properly positioned relative to the telescope's focal length.
The concept of this new Pi Camera design is shown next.
Figure 12: A Pi Camera version 2: The camera module inserted inside of a toy telescope
Using the concept drawing in Figure 12, you can insert the camera module inside of the toy telescope. Place the module as close to the lens as possible. For reference, the following image shows the assembly of my camera module inside of the toy telescope.
Figure 13: The author's camera module inserted inside of the toy telescope. Notice the location and orientation of the module to the toy telescope's lens.
With the camera module inserted inside of the toy telescope, a tripod, Third Hand, or a Panavise can be used to mount the imaging device at the proper viewing height. You may need to make minor adjustments to your mounting assembly to ensure that you're taking the best picture possible with the Pi Camera.
Figure 14: The author's new Pi Camera built using a toy telescope. The Panavise helps to support the new camera.
Next, the Python code.
The Pi Camera Python Code
Although the Pi Camera can take effective pictures using the raspistill command, a small camera script can be created using Python. Before the camera script can be coded, the picamera library needs to be installed on the Raspberry Pi. The first step in adding the picamera library is to update Raspbian using the following Linux command:
sudo apt-get update
Figure 15: Updating the Raspbian operating system
Once the update is complete, you can install the Raspberry Pi picamera library using the following Linux command:
sudo apt-get install python-picamera python3-picamera
Figure 16: Installing the picamera library
After several minutes, the installation process is completed.
Figure 17: The picamera library installed
With the picamera library installed onto your Raspberry Pi, the camera script can be written using Python.
As shown in Listing 1, the Pi Camera code is quite simple and short in scripting length.
Open the nano text editor by typing the Linux command sudo nano after the "$" prompt. Type the code shown in Listing 1 and save the file as sim_camera.py.
Place an object in front of your Pi Camera and type the Linux command sudo python sim_camera.py into the LXTerminal window. You should see the object's image on the monitor's screen briefly. Look in your Raspberry Pi's /home/pi directory for the picture under the "foo.jpg" filename.
The picture I've taken with my Pi Camera is shown in Figure 18:
Figure 18: The author's PIC microcontroller-SNAP LED flasher circuit taken with the Pi Camera
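As a possible extension to the script in Listing 1, the picamera library can also capture a whole series of images in a loop. The sketch below is not part of the original project; the filename pattern, five-second delay, and five-image count are illustrative assumptions.

# Hypothetical extension: capture five numbered images, one every five seconds
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1024, 768)
camera.start_preview()
sleep(2)  # camera warm-up time

# capture_continuous() saves an image and yields its filename on every loop pass
for i, filename in enumerate(camera.capture_continuous('image{counter:03d}.jpg')):
    print('captured', filename)
    if i >= 4:
        break
    sleep(5)

camera.stop_preview()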
To show the Pi Camera in action, I provided a small video clip below:
You now have a working Raspberry Pi camera. Good work! Next time, we'll build a BrickPi robot using a Raspberry Pi!
Listing 1. PiCamera Python Code

#***************************************
#* PiCamera                             *
#* Don Wilcher                          *
#* July 1, 2016                         *
#*                                      *
#* PiCamera                             *
#* will take a picture and store it in  *
#* your Python Home Directory file.     *
#*                                      *
#***************************************

# Include Python libraries
from time import sleep
from picamera import PiCamera

# Setup of Camera attributes
camera = PiCamera()
camera.resolution = (1024, 768)
camera.start_preview()

# Camera warm-up time
sleep(2)

# Capture image named 'foo.jpg'
camera.capture('foo.jpg')

# Picture has been taken
print('picture taken')
|
Scientists reveal link between Sahara and Amazon
Dust to, er, rainforest, actually
Scientists have conclusively demonstrated the extent of the link between the Sahara desert and the Amazon rainforests.
It might sound unlikely, but their work has shown that the Amazon rainforest depends on dust from one tiny area of the Sahara desert to restock its soil with nutrients and minerals. Analysis of images from NASA's MODIS satellite has revealed the Bodélé, a region of the Sahara not far from Lake Chad, as the source of more than half the material that fertilises the rainforest.
The Bodélé depression was already known as one of the largest sources of dust in the world, but the scientists involved in the research say no one had any idea of the scale of the region's importance to the Amazon. It transpires that if the Bodélé was not there, the Amazon would be a mere wet desert.
Dr Ilan Koren, lead author of the paper, said: "Until now no one had any idea how much dust [The Bodélé] emits and what portion arrives in the Amazon. Using satellite data, we have calculated that it provides on average more than 0.7 million tons of dust on each day that it is actively emitting dust."
The dust is swept into the atmosphere by the surface winds in the Sahara. The Bodélé region loses most of its dust during the spring and winter months, unlike the rest of the Sahara, because of its unique geography.
The Bodélé depression is located downwind (in the winter, when the Harmattan winds blow) of a huge crater-like valley between the Tibesti and Ennedi mountains. This crater narrows to a cone-shaped pass which focuses the winds, and they speed up towards the Bodélé. This is how the region, which is just half a per cent of the size of the Amazon, can produce as much dust as it does.
Dr Koren explains the process: "In the early morning on an emission day the winds speed up to the critical velocity for lifting and transporting dust when they reach the Bodélé.
"By using data from two satellites that take images of the same areas three hours apart, we can estimate the wind speed and calculate the size of the 'dust parcels' that are produced at the Bodélé. We are then able to track the progress of the parcel the next day after it has left the Bodélé and watch it progress across the desert."
The research team use the MODIS satellite to watch the dust, and the MISR instrument, which only covers a small area, to find out more about the quantity of dust in each parcel.
The work has prompted more questions, however. The team wants to know how long the Bodélé depression has been 'sending' dust to the Amazon, and how long it will continue to do so.
The research is reported in the first edition of the Institute of Physics open-access journal, Environmental Research Letters. ® |
The two spacecraft on NASA's Gravity Recovery and Interior Laboratory… (NASA / JPL-Caltech / LMSS)
It has been described as a cosmic ballet — two spaceships in a delicate, silent dance 230,000 miles from Earth, correcting their course in tandem with air thrusts softer than a human breath, their instruments so fine-tuned they can detect a shift in gravity that pulls them no farther than the width of a strand of hair.
As soon as Thursday, NASA expects to launch its Gravity Recovery and Interior Laboratory, or GRAIL, from Cape Canaveral, Fla. Shortly after launch, two spacecraft will peel away from NASA's rocket for a meandering journey to the moon. GRAIL-A and GRAIL-B are scheduled to arrive on New Year's Eve and New Year's Day, respectively, then spend three months making 12 polar orbits of the moon each day.
The result will be a comprehensive map of the moon's gravitational field, data that will help scientists calculate the composition of its crust, mantle and core — adding to their understanding of the evolution of the rocky planets, including our own.
It's the method used to collect that data, however, that could mark the onset of a revolution in space, even in the search for life across the galaxy. GRAIL will mark the first time that scientists use a technique known as "precision formation flying" — studying the same object using multiple, coordinated spacecraft that can speak directly to one another, at times bypassing scientists altogether — beyond Earth's orbit.
The technology was long viewed as science fiction, even among those who spend their days dreaming of theoretical advances in space exploration.
But standing on the shoulders of GRAIL, scientists now envision a day when they are freed of the practical constraints of having to stuff every bit of machinery aboard a single rocket. Multiple spaceships could take off separately, then join forces to create unified technology "platforms" that enable scientists to study space in once-unthinkable depth and detail.
"People think of it as the Blue Angels. This is more like blue whales — very big spacecraft moving very slowly and deterministically and settling into a precision formation that they will maintain for years," said Daniel P. Scharf, a senior engineer at the Jet Propulsion Laboratory in La Cañada Flintridge, which is managing the $496-million GRAIL mission. "It's not zooming around — we're crystallizing the formation."
For instance, 30 synchronized spacecraft equipped with telescopes could fly together in a formation as wide as the distance from Honolulu to Houston — creating, effectively, a single, massive telescope that could peer into unexplored pockets of space.
In recent years there has been an explosion of planet discoveries in the Milky Way, including dozens that are considered Earth-like "candidates" — close enough to their star, as we are to the sun, to harbor life, at least in theory.
"Now you can take pictures of Earth-like planets to the point where you can see continents, weather systems, maybe even a forest," Scharf said. "We can look for bioindicators: methane, ozone, water in the atmosphere of these planets."
The same technology, give or take, could be used in missions that would represent significant leaps — to map the currents in the underground oceans of one of Jupiter's moons, for example, or to chart the changes in snowfall on Mars with the turn of its seasons.
One recent academic paper said the technology could yield so much progress that it would be akin to recording a football game with video cameras rather than still photographs.
After they learn more about teaching spacecraft to work together autonomously, scientists even hope to send up "swarms" of iPod-size spaceships, flying in formation. That technology is expected to yield contributions in communication — they could one day replace satellites — and to measure complex radar and radio frequencies.
And if a meteor or a piece of space debris were to crash through the swarm? No problem — the spacecraft would be so small and so cheap that the swarm would be flanked by idle replacements, which would then be told by the other spacecraft where to go to fill in the gaps.
The technology is critical to the GRAIL mission because of an unusual quirk in our corner of the solar system: The moon spins at the same rate it orbits Earth.
The moon is "synchronously locked," which means in lay terms "that we only see one side of the moon," said JPL's Sami Asmar, GRAIL's deputy project scientist. That's why there are significant unanswered questions about the moon — half of it, anyway — even though 12 humans have walked on it and many spacecraft have visited, including three that are in orbit today. |
Major differences between hurricanes and tornadoes are their formation method, location, appearance, wind speed and method of inflicting damage. Major similarities are that they form during storm conditions, are very powerful and destructive, have watches and warnings issued through weather services, and have a set season every year.
Hurricanes form over warm water in the ocean and are fueled by the tropics' warm, moist air. Tornadoes form above land through cool polar air masses meeting warm air masses. Hurricanes appear as very large, rain-pouring wind storms that revolve around a central “eye.” Tornadoes appear as rapidly rotating columns of wind that have made contact with the ground.
A high wind speed for a hurricane is 160 miles per hour, while a high wind speed for a tornado is 300 miles per hour. Hurricanes cause damage with strong winds and torrential rains, while tornadoes inflict damage with extremely strong winds. Hurricanes move slowly and cause damage over an enormous area. Tornadoes move more quickly, often erratically, and cause concentrated damage in smaller areas than hurricanes.
Both hurricanes and tornadoes form during storm conditions that involve warm air. Both hurricanes and tornadoes have storm watches and warnings issued when the conditions are likely to produce or have produced a storm, but because of the unpredictable nature of tornadoes, there is much less warning time for people to find shelter. |
Jake Slobe | February 1, 2017
Mountain regions of the world are under direct threat from human-induced climate change which could radically alter their fragile habitats, warn an international team of researchers.
The international study, which spanned seven major mountain regions of the world, revealed that decreasing elevation – descending a mountainside to warmer levels – consistently increased the availability of nitrogen from the soil for plant growth, meaning that future climate warming could disrupt the way that fragile mountain ecosystems function.
The researchers also found that the balance of nitrogen to phosphorus availability in plant leaves was very similar across the seven regions at high elevations, but diverged greatly across regions at lower elevations. This means that as temperatures become warmer with climate change, the crucial balance between these nutrients that sustain plant growth could be radically altered in higher mountain areas.
They also found that increasing temperature and its consequences for plant nutrition were linked to other changes in the soil, including amounts of organic matter and the make-up of the soil microbial community.
These changes were partly independent of any effect of the alpine tree line, meaning that effects of warming on ecosystem properties will occur irrespective of whatever shifts occur in the migration of trees up-slope due to higher temperatures.
Rather than use short-term experiments, the research team used gradients of elevation in each mountain region spanning both above and below the alpine tree line. |
Reading 1: Flour Milling
Flour mills began to be established in the town of Minneapolis in the mid-1850s. Powered by St. Anthony Falls, the mills were supplied with rapidly increasing crops of wheat grown by new settlers in western and southern Minnesota and the Dakotas. Railroads began linking Minneapolis to the west in the late 1860s. The number of Minneapolis flour mills grew rapidly.
A few things kept the Minneapolis mills from competing successfully with flour mills in other parts of the country, however. When flour was made from the hard spring wheat of the Northern Plains using conventional milling techniques, it was discolored and speckled with particles of husk or bran, and it did not keep well. In addition, conventional mill stones destroyed much of the most nutritious part of the wheat kernel.
In the 1860s and 1870s, the millers solved these problems. They developed a process that made it possible to separate the nourishing "middlings" layer of the wheat kernel, process it, and return it to the flour. A second innovation replaced conventional millstones with large chilled porcelain or iron rollers that ran at a lower speed. This prevented discoloration due to heat and minimized the crushed husk and bran that speckled the flour. By the late 1870s Minneapolis flour was recognized as the best in the nation, and it quickly replaced winter-wheat flour in both national and international markets. The mills located on the west bank of the Mississippi made that area the nation's leading flour center.
The Pillsbury Company completed its gigantic A Mill on the east side of the river in 1880. Containing two identical units, it had a capacity of 4000 barrels of flour a day when it opened. By 1905 the mill had tripled its output. Its owners claimed that it was the largest flour mill in the world. Over the years numerous buildings were added to the complex, including a grain storage elevator built in 1910 and linked to the mill by conveyors, another elevator and annex built between 1914 and 1916, and a cleaning house and nine-story warehouse built in 1917. The Pillsbury A Mill is the only mill still operating in the St. Anthony Falls milling district.
Tremendous consolidation took place within the flour industry between 1880 and 1900, as numerous mergers occurred. In 1876, 17 firms operated 20 mills; in 1890, four large corporations produced almost all of the flour made in Minnesota. By the early 1900s, three corporations based in Minneapolis controlled 97 percent of the nation's flour production. They were Washburn-Crosby Company, which became General Mills; Pillsbury-Washburn Flour Mills Company, which became Pillsbury Flour Mills Company; and Northwestern Consolidated Milling Company, which became the Standard Milling Company. This Minneapolis "Flour Trust" dominated the national flour market until the 1930s.
As consolidation took place, the number of operating mills stabilized at about two dozen, but auxiliary buildings multiplied rapidly. Warehouses, grain elevators, boiler rooms, engine houses, packing facilities, and railroad tracks crowded the land along the river. Over the years, the labor of many men constructed canals, mills, and support buildings. Others unloaded newly arrived grain onto conveyor belts that carried it to the millers, who put it through the rollers and processed it into the final product. Still other workers packed the flour, first into barrels and later into bags. Under brand names like Gold Medal and Pillsbury's Best, the newly packaged flour found its way to markets all over the world.
Questions for Reading 1
1. What initially kept the Minneapolis mills from competing successfully? How did the millers solve these problems?
2. How did the Pillsbury A Mill set the standard for the flour milling industry?
3. Why do you think Minneapolis came to dominate flour milling in the United States?
4. What three companies ended up with a near monopoly of the American flour industry? Have you ever seen or used any of their brands of flour?
Reading 1 was compiled and adapted from Jeffrey Hess and Camille Kudzia, "St. Anthony Falls Waterpower Area; St. Anthony Falls Historic District" (Hennepin County, Minnesota) National Register of Historic Places Registration Form, Washington, D.C.: U.S. Department of the Interior, National Park Service, 1991; and Lucile M. Kane, The Falls of St. Anthony: The Waterfall that Built Minneapolis (St. Paul, MN: Minnesota Historical Society, 1966). |
Mechanics is a vast and difficult subject. It is virtually impossible to provide a thorough introduction in a couple of sections. Here, the purpose instead is to overview some of the main concepts and to provide some models that may be used with the planning algorithms in Chapter 14. The presentation in this section and in Section 13.4 should hopefully stimulate some further studies in mechanics (see the suggested literature at the end of the chapter). On the other hand, if you are only interested in using the differential models, then you can safely skip their derivations. Just keep in mind that all differential models produced in this section end with the form ẋ = f(x, u), which is ready to use in planning algorithms.
There are two important points to keep in mind while studying mechanics:
Several formulations of mechanics arrive at the same differential constraints, but from different mathematical reasoning. The remainder of this chapter overviews three schools of thought, each of which is more elegant and modern than the one before. The easiest to understand is Newton-Euler mechanics, which follows from Newton's famous laws of physics and is covered in this section. Lagrangian mechanics is covered in Section 13.4.1 and arrives at the differential constraints using very general principles of optimization on a space of functions (i.e., calculus of variations). Hamiltonian mechanics, covered in Section 13.4.4, defines a higher dimensional state space on which the differential constraints can once again be obtained by optimization. |
Expanding cement, also known as expansive cement, is a relative of portland cement that contains materials that increase in volume as they set. It's usually used with concrete mixes for situations where the shrinkage of conventional cement is undesirable or where the cement or concrete mix needs to create pressure on another part of a structure. Some products made with expanding cement can be used to seal small cracks and holes in concrete walls, such as foundation walls, and can help to prevent leaks in some applications.
Materials and Production
Expanding cement is made by using a portland cement base composed of kilned limestone, clay and gypsum. The limestone and clay are heated together to a temperature of around 2,600 degrees Fahrenheit, which transforms the material into dry pieces of cement. This material is ground with sulfoaluminate clinkers. These clinkers are made by kilning limestone, calcium sulphate and bauxite together at a temperature of about 2,300 degrees Fahrenheit. When exposed to water, sulfoaluminate expands in volume.
History and Development
According to the Encyclopedia Britannica, expanding cements were first invented in France during the mid-1940s. This cement included not just sulfoaluminate and portland cement but also blast furnace slag, which was added as a stabilizing agent. The result was the first successful expanding cement that worked reliably and remained stable over a long period. Another type of expanding cement, developed in Russia around the same time, uses portland cement, gypsum and alumina cement. Expansive cement ingredients have remained roughly the same since the product's development, though manufacturers have since improved the cement's predictability, strength and working time.
Uses and Benefits
Most portland cements shrink as they dry because the water used to activate them increases their volume. In some applications, this shrinkage can reduce the strength of the cement's bond to nearby objects and structures or create leaks. Expansive cements allow contractors to create large, continuous floor slabs without joints, and they work well to fill holes in foundations and to create self-stressed concrete that is stronger than conventional portland cement concrete. Prestressed concrete components for bridges and buildings are made using this material.
Misconceptions and Terminology
Expanding cement is sometimes referred to as hydraulic cement. However, any cement that reacts with water, including portland cement, is technically a hydraulic cement. All hydraulic cements, including expanding types, will cure under water, but some shrink considerably during the process. Choose an expanding-cement product that's listed as “expanding,” “sulfoaluminate” or “nonshrinking” to avoid this problem.
- Ask the Builder; "Hydraulic Cement"; Tim Carter
- "Preparation of Expanding Oil-Well Cements"; F. Agzamov et al.
- Encyclopaedia Britannica; "Expanding and Nonshrinking Cements"
- Concrete Construction; "Expansive Cement Concrete"
- Purdue University; "Expansive (Self-Stressing) Cements in Reinforced Concrete: Interim Report"; Hanume Gowda; 1980 |
Modern wind farms consist of an array of wind turbines each with a typical capacity of 1 to 8 MW. Each turbine consists of foundations, tower, nacelle, hub and rotor, drive train (gearbox and generator), electronics and controls. Such wind farms:
- are dependent on the wind which may not be blowing. Under these circumstances the load must be taken up by other power plant
- may be located onshore or offshore. Offshore locations involve expensive platforms and undersea cables but usually benefit from higher average wind speeds
- need to be located at sites where the average wind speed is high. Generally speaking, wind speeds are highest on hills and ridges and lowest in sheltered terrain. The order is typically as follows: hills and ridges > open sea > sea coast > open terrain > sheltered terrain
- produce low capacity factors
- experience seasonal fluctuations in wind speed which affect the cash flows
- are maintained twice a year
- performance degrades slowly over time
- undergo major maintenance every 20 years when the blades and machinery are replaced. The performance then returns to that of a new turbine
- are subject to straightforward taxation calculations but may receive subsidies.
How Promoter handles Wind Power Projects
Promoter assumes others have carried out the design and optimization of the layout and choice of turbine type.
The user chooses the calculation basis from one of the following options:
a) Promoter generated figures (in the early planning phases)
The user inserts the following information:
For the site, the user enters the altitude and temperature and the wind characteristics, in particular the average annual wind speed, the wind shear factor and the Weibull distribution shape factor.
For the chosen turbine, the user enters the turbine characteristics, in particular the power-speed curve.
Promoter first adjusts the average wind speed for wind shear and for the wind farm's height and temperature. It then calculates the capacity factor from the turbine manufacturer's power curve and the wind speed distribution given by the selected Weibull formula. For each element of the wind histogram, it multiplies the percent occurrence by the corresponding element on the turbine power curve, then adds these figures together to get the mean annual power production, as illustrated by the sketch below.
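To make the calculation concrete, the short Python sketch below reproduces the idea of multiplying each wind-speed bin's probability by the corresponding point on the power curve and summing the results. It is only an illustration of the method described above, not Promoter's actual code; the Weibull shape factor, mean wind speed, and toy power curve are invented example values.

# Illustrative capacity-factor calculation from a Weibull wind distribution
# and a made-up turbine power curve (all numbers are example values).
import math

def weibull_pdf(v, k, c):
    """Weibull probability density for wind speed v (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-(v / c) ** k)

def power_kw(v):
    """Toy power curve: cut-in 3 m/s, rated 2000 kW at 12 m/s, cut-out 25 m/s."""
    if v < 3 or v > 25:
        return 0.0
    if v >= 12:
        return 2000.0
    return 2000.0 * ((v - 3) / (12 - 3)) ** 3   # simple cubic ramp up to rated power

k = 2.0                                # Weibull shape factor
v_mean = 8.0                           # annual mean wind speed, m/s
c = v_mean / math.gamma(1 + 1 / k)     # Weibull scale factor derived from the mean

# Sum (probability of each 1 m/s wind-speed bin) x (power output at that speed)
mean_power = sum(weibull_pdf(v, k, c) * power_kw(v) for v in range(0, 31))
capacity_factor = mean_power / 2000.0

print(f"mean power ~ {mean_power:.0f} kW, capacity factor ~ {capacity_factor:.2f}")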
b) 3rd Party figures (once these are available)
The user inserts the calculated capacity factor supplied by the 3rd party
If required, it will repeat these calculations for each month of the year to produce a monthly power production figure.
If required, it will incorporate a cyclical element into the long term mean wind speeds at the chosen site.
Promoter will also calculate the mean annual production for different mean wind speeds between 6 and 14 m/sec.
Although it is an unusual requirement, the user can add additional wind turbine types and/or wind distributions to a single project.
The power efficiency curve comes from the turbine manufacturer. The capacity factor is calculated from this curve and the histogram of wind speeds. The two are displayed in the chart below.
The average wind speed at many locations displays seasonal variations. A cash flow model should take these into account.
Promoter takes into account seasonal variations and these can be clearly seen when displaying charts and reports on a quarterly basis.
The following chart displays the mean annual production for different mean wind speeds between 5 and 13 m/sec
Typical Project Cash Flows
The following diagram illustrates the cash flows on a wind farm project.
The project has a high capital cost but very low operating costs. The chart also illustrates the gradual decline in efficiency and the need for blade and generator replacement after 20 years. |
Do asteroids hit the Sun like they hit the planets and moons?
No asteroids have ever been observed to hit the Sun, but that doesn't mean that they don't! Asteroids are normally content to stay in the asteroid belt between Mars and Jupiter, but occasionally something nudges them out of their original orbits, and they come careening into the inner solar system. The "something" that changes the asteroid orbits is often thought to be the Yarkovsky effect (illustrated here). It is known that Jupiter has a strong effect on the asteroid belt. Jupiter's gravity interacts with the Belt to form the Kirkwood gaps. Orbits within in a Kirkwood gap are not stable, and any asteroid whose orbit wanders into such a region will eventually get pulled into a different orbit, which may take it into the inner solar system. Therefore the Kirkwood gaps have almost no asteroids. In addition to Jupiter's influence, occasional random impacts within the belt probably send asteroid pieces flying in toward the inner solar system.
Once they are on their way in toward the Sun, you might think that they should be guaranteed to hit the Sun, but that's not the case! It is actually difficult for something that is orbiting to fall all the way into the Sun. This is because of a property of orbiting objects called angular momentum. Angular momentum is a sort of measure of how much something is rotating around a central point. The reason that this is important is that one of the fundamental principles of physics is that angular momentum must be conserved. For something to fall into the Sun, it has to lose nearly all of its angular momentum somehow, so that it is falling straight at the Sun. If it is off just slightly, instead of falling in, the asteroid will just fall very close, and then slingshot back out far from the Sun. It is probably quite rare for an asteroid to lose all of its angular momentum and fall straight into the Sun. However, there might be quite a few that lose enough to get close to the Sun and vaporize.
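A rough back-of-envelope calculation, not part of the original answer, makes the point concrete: compare the speed change an asteroid would need to cancel its orbital motion and fall straight in with the change needed to escape the solar system instead. The 2.8 AU distance below is just an assumed typical asteroid-belt radius.

# Rough comparison: falling into the Sun versus escaping the solar system
import math

GM_SUN = 1.327e20          # gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11              # astronomical unit, m
r = 2.8 * AU               # assumed typical asteroid-belt distance

v_circ = math.sqrt(GM_SUN / r)           # circular orbital speed at r
dv_fall = v_circ                         # cancel (nearly) all angular momentum to fall straight in
dv_escape = (math.sqrt(2) - 1) * v_circ  # extra speed needed to escape the Sun instead

print(f"orbital speed    ~ {v_circ/1000:.1f} km/s")
print(f"stop and fall in ~ {dv_fall/1000:.1f} km/s of velocity change")
print(f"escape the Sun   ~ {dv_escape/1000:.1f} km/s of velocity change")

Running the numbers gives roughly 18 km/s to cancel the orbit versus about 7 km/s to escape, which is why it is actually "easier" for an object in the belt to leave the solar system than to drop into the Sun.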
As I mentioned, we have never seen an asteroid come close to the Sun and vaporize. That's because asteroids are small rocks or pieces of metal, and even when they are being vaporized they are hard to see. Comets, on the other hand, give off huge glowing plumes of gas when they get close to the Sun, making them very easy to spot. The SOHO satellite has detected more than 1100 comets known as "sun grazers." These are comets that get close enough to the Sun to glow very brightly and show up in the SOHO images. Some of them disintegrate while others survive the close call and sail back out to the outer solar system until their next orbit brings them back. Check the SOHO Comets website for more information. You can even help discover new sun-grazers by studying the data yourself!
This page was last updated by Sean Marshall, on July 18, 2015. |
by Silja Haapanen
The purpose of this study was to examine the possibility of using neutrinos for communications with military submarines. The basic design of this idea is based on the Navy's current submarine communications system, specifically the Extremely Low Frequency (ELF) Radio Communications; the main difference is the use of neutrinos in place of radio waves. The ELF Radio Communications program is a fixed, shore-based transmitter: a 222-km dipole antenna located in Wisconsin. Simple, one-way commands can be transmitted to submarines operating in a certain range of depths. The transmission frequency is 40-80 Hz; this is a great improvement over the earlier communications systems which used higher radio frequencies. The longer wavelengths can better penetrate the ocean, and there is an overlap between the operational depth of the submarines and the depth at which messages can be received. In other words, the submarines do not have to surface just to receive communications.
Neutrinos have many properties that would make them superior even to the extremely low radio frequencies. Because neutrinos are nearly unaffected by matter, a neutrino beam could traverse directly through the earth from the transmission site to the submarine. A directional beam would allow confidential information to be passed only to the intended recipient. Neutrino communications would also be totally jam-proof. As an additional benefit, a neutrino message could be received in the deepest of waters, leaving a submarine less vulnerable to enemy attacks.
The neutrino beam would be produced by an accelerator, preferably built at some well- protected location in the U.S..
The neutrinos would be produced via pion decay, similarly to the method employed by current accelerator beam neutrino experiments. Colliding a proton beam with a target produces positive pions, π+. Because the pions are electrically charged, they can be focused by using a magnetic field (Figure 1). The field is created by devices called magnetic horns. The horns are two coaxial barrels made out of electrically conductive material and use a large current in the order of 200 kA; they produce a toroidal magnetic field in the region between the barrels. The pions will be focused into a highly parallel beam. They will then decay to a muon and a muon neutrino, and the muon in turn decays into a positron, a neutrino and an antineutrino:

π+ → μ+ + νμ,   μ+ → e+ + νe + ν̄μ (1)
The end result is a highly directional beam of mostly muon neutrinos. The beam could be turned off and on rapidly; this could be used as a means of modulation. It would be fairly easy to produce a Morse code- type binary code.
Figure 1. The magnetic horn system of the K2K experiment.
The concept of a neutrino beam traversing through long distances of earth is already being implemented for a purpose quite different from underwater communications. Very long beamline neutrino experiments, such an K2K and MINOS, are using accelerator- created beams for neutrino research; specifically, to increase understanding of neutrino oscillation. A well- understood and controlled neutrino source, set at a fixed distance from a detector, will enable a more accurate measurement of neutrino oscillation parameters.
K2K is a Japanese experiment whose name stands for "KEK to Kamiokande", the locations of the accelerator and the detector, respectively. The distance between them is 250 kilometers. KEK is a 12 GeV proton synchrotron accelerator. The neutrino beam is produced by pion decay as described above; the pion decay pipe is 200 meters long, and the average energy of the resultant neutrinos is 1.3 GeV.
The Kamiokande detector is a water Cherenkov detector which was originally built for observing proton decay. It later showed that the atmospheric neutrino deficit problem can be explained by neutrino oscillations. Kamiokande has since taken on a new assignment as a detector for accelerator-created neutrinos. Its detection volume is 22 kilotons, or 22,000,000 kilograms.
Figure 2. The K2K experiment
Another interesting long baseline experiment is MINOS (Main Injector Neutrino Oscillation Search). Its 731-kilometer baseline runs from Fermilab to Soudan Mine in northern Minnesota. At MINOS, the pion decay region is 675 meters long. MINOS features an adjustable magnetic horn system, which can select 3 different neutrino energy ranges: 3, 6, and 15 GeV. A look at MINOS' 5.4 kiloton detector event rates gives an idea just how challenging it is to detect neutrinos: the charged current event rates are 10,000 events/year for the 15GeV neutrinos, 5,000/year for the 6GeV and a modest 700/year for the 3GeV neutrinos.
Detection of the messages would happen via Cherenkov radiation, and the detection medium would be the water around the submarine. The submarine would be equipped with an array of photomultiplier tubes. The phototubes would pick up the Cherenkov light that is produced when a neutrino collides with a nucleon in the target volume; a pair of electrons created in the collision travel through water at a speed faster than light speed in the medium.
The transparency of sea water peaks at the blue wavelength of visible light.
Cherenkov radiation can be detected to about 100 meters from the source.
The 'Ohio' class submarine, which is used to carry both ballistic and guided missiles, is 171 meters in length. A reasonable phototube array would be about 100 meters long, an extension to be dragged behind the submarine.
The detection volume would therefore be a cylinder with 100 m radius and 100 m length. This translates to π×10⁹ liters, or π×10⁶ tons, of sea water. Assuming that sea water mostly consists of H₂O, using the conversions 1 liter/kilogram and 18 grams/mole, the number of target nucleons in this volume is 1.9×10³⁵ nucleons.
Total number of events can be calculated from the equation
N = (n)(σ)(I) (2)
where n = number of target nucleons in the target volume
σ = beam particle cross-section
I = beam intensity (flux/area)
The beam divergence and the distance between the transmitter and the submarine will affect the intensity. The distance would, for practical use, be of the order of 1000 kilometers, or 10⁶ meters. The number used for the calculations in this study is 5000 kilometers.
The beam divergence is decided by the pion momentum. The pions will have longitudinal and transverse momentum components. The transverse component is always of the order 0.5GeV, regardless of the magnitude of the longitudinal component. The beam divergence angle is given by
tan(θ) = pt/pl (3)
where pt = transverse pion momentum
pl = longitudinal pion momentum
The larger the total momentum of the pions, the smaller the spread in the beam. For pions with a longitudinal momentum of 50 GeV, the beam, at a distance of 5000 kilometers from the source, has spread to a radius of about 158 kilometers. For longitudinal momentum of 100 GeV, the spreading radius would only be about 79 kilometers.
The neutrino cross-section increases linearly with energy, at a rate of 0.67×10⁻³⁸ cm²/GeV. A neutrino beam of 10 GeV therefore has a cross-section of 6.7×10⁻²⁹ cm².
The intensity has to be taken per area; for (2) to be dimensionally correct, the intensity is given in particles/(second·cm²).
The area is given by
A = πD²sin²θ (4)
where θ = beam divergence angle
D sinθ = radius of the incidence area.
For the case where the pions have 50 GeV longitudinal momentum, A = 3.1×10¹⁷ cm².
Inserting this value into (2), together with the known values of σ and n, gives
N = d(flux) (5)
where d = nσ/A, with the numerical value (1.9×10³⁵)(6.7×10⁻²⁹)(3.1×10⁻¹⁷) = 3.9×10⁻¹⁰.
From this one can easily see that to have the possibility of detecting even just one event per second, a flux of the order of 10¹⁰ neutrinos per second is needed!
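As a quick numerical check of equation (2), the short sketch below plugs in the figures quoted above (the target nucleon count, the assumed 10 GeV cross-section, and the beam spot area for 50 GeV pions at 5000 km). It is only an order-of-magnitude estimate; the intermediate constant comes out somewhat different from the in-text value of d, but the conclusion that a flux of order 10¹⁰ neutrinos per second is required stands.

# Order-of-magnitude check of N = n * sigma * I, using the figures quoted above.
n_nucleons = 1.9e35   # target nucleons in the pi x 10^6 ton detection volume
sigma_cm2 = 6.7e-29   # cross-section quoted above for a 10 GeV neutrino, cm^2
area_cm2 = 3.1e17     # beam spot area at 5000 km for 50 GeV pions, cm^2

d = n_nucleons * sigma_cm2 / area_cm2   # events per (neutrino/second) of beam flux
flux_for_one_event_per_s = 1.0 / d

print(f"d ~ {d:.1e} events per unit flux")
print(f"flux needed for ~1 event/s: {flux_for_one_event_per_s:.1e} neutrinos per second")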
Figure 3. The geometry of neutrino beam communications. The sizes are, of course, very exaggerated...
In order to point the neutrino beam towards the desired direction (i.e. the submarine), the pion beam could be deflected by using a magnetic field. The magnetic force on a charged particle in an external magnetic field is given by
F_B = qnAL(v × B₀) (6)
where q = charge of the particle
n = number of particles in the volume element
A = cross-sectional area of beam
L = length of magnetic field region
v = velocity of the particle
B₀ = magnetic field.
Or, more simply for one particle,
F_B = q(v × B₀) (7)
A charged particle moving in a magnetic field will follow a helical path. Newton's 2nd Law can then be written as
mv²/r = qvB₀ (8)
r = mv/qB₀ (9)
These are familiar cyclotron motion equations, where r = radius of curvature of the particle's path and m = the particle's mass.
While the pion is inside the region of the magnetic field, it will travel a helical arc length S.
Figure 4. A particle in a magnetic field
The purple line represents the path of the particle, S. The magenta line is the amount by which a particle gets deflected in the transverse direction. The particle is deflected by angle α; it can easily be seen that if we call the length of the magnetic field L, and the transverse displacement D,
From (8) and some trigonometry,
D = (mv/qB)(1 - cos α) (11)
The strength of the magnetic field is given by
B₀ = mv(1 - cos α)/q(D - L tan α) (12)
m = mass of pion = 139.5 MeV = 2.48315×10⁻²⁸ kg
q = charge of pion = 1.6022×10⁻¹⁹ C
L = length of magnetic field region = 100 m
If the pion velocity is of the order of (1/10)c, deflecting its path by 30 degrees over 100 meters would require 0.8×10⁻² T; in other words, 0.8 T per meter.
Neutrino communications may appear at first a tempting and interesting alternative to radio transmission. However, the very aspects which give the neutrino its benefits over electromagnetic waves, also cause the idea to fail. Neutrinos hardly interact with matter; although it would be beneficial for the military to be able to send communications to their submarines right through the earth and to the deepest places of the oceans, the difficulty of receiving the message is where the idea falls apart. Merely creating a beam with enough intensity to reliably transfer information is an impossible task. There are also a host of other problems. Bending the pion beam requires a magnetic field and only works for certain angles; aiming the beam in various directions would require the entire accelerator apparatus to be turned around in some manner. This would be very difficult, if not impossible. Smaller practical problems arise from the phototube array. Dragging a 100- meter extension behind a submarine would not only slow down the vessel and increase its energy consumption, but electrical activity from the phototubes would be highly visible to any enemy. Therefore, even if the phototube array was shielded only to be deployed when an incoming message is expected, it would not be much of an improvement from the times when a submarine had to surface in order to receive communications.
Regardless of the seeming appeal of the concept, neutrinos as a means for communications appears impossible in practice. It would take nothing less than some type of a technological revolution to make it reality; until then, neutrino communications will remain the stuff of science fiction.
Thanks to Professor Jay Hauser and Professor David Saltzberg of the Physics Department at the University of California, Los Angeles.
The Montessori Method and Modern Child
In approximately 400 words for each topic, summarize Dr Montessori’s approach and discuss how Montessori’s views on these topics are regarded in child development texts today.
(a) The Role of the Environment
(b) Children’s Diet and Exercise
(c) Nature in Education
(d) Education of the Senses
Dr Montessori also expresses the need for ‘Scientific Pedagogy’, i.e. using scientific methods (especially observation). In the conclusion to this assignment, you should address her theory of scientific pedagogy, and compare it to Vygotsky’s ‘Zone of Proximal Development’.
(a) The Role of the Environment
In Montessori philosophy there are three leading factors that make up the methodology: the environment, including all the materials; the directress; and the child. The prepared environment should be established upon one fundamental base: “the liberty of the pupils in their spontaneous manifestations” (Montessori, 2002). It is in freedom that a child reveals himself and uses his environment to grow. Socio-emotional development is also a major focus of Montessori’s philosophy. McDevitt & Ormrod agree, stating that “environmental influences are evident in the development of self-esteem and motivation” (McDevitt & Ormrod, 2012). The Montessori environment also allows freedom in many respects, including freedom of movement, as the children may move around the classroom as well as outside it. All materials are designed with a self-correcting control of error and are sized for the child.
All the material should be kept orderly, and furniture such as chairs and tables should be child-sized so children can move them. Child-size washstands, shelves and cupboards should also be provided for Practical Life Exercises. Nature is also a vital part of the Montessori environment, and a garden is highly recommended. Lessons about plants, insects, seasons and fresh food are essential. Montessori strongly believed that “the child must draw from nature the forces necessary to the development of the body and of the spirit” (Montessori, 2002). The Montessori outdoor environment is prepared just as carefully as the indoors. Outdoor areas require space for running, jumping, throwing, climbing, lying, sitting, balancing, watching, building, digging, playing with water and sand, and exploring. The basic concept behind Montessori’s educational work was that of providing children with a suitable environment in which to live and learn. Numerous theories of development have influenced educational practices during the 20th century (Aldridge, Kuby, & Strevy, 1992).
But most developmentalists agree that the environment is an important force in development. Vygotsky was the first proponent of the contextual view, but Urie Bronfenbrenner (1917-2005) is its best-known proponent today, with his ecological systems theory, rooted in the nature-versus-nurture question. Bronfenbrenner believed the development of a child was determined by the relationships among the environment, or environmental systems, around them. For Bronfenbrenner, “development is a complex interaction of the changing child within a changing ecological context” (Mossler, 2011).
Through careful observation of children all over the world, Dr Maria Montessori developed those guidelines for the preparation of the child’s environment. These stimulate the child’s ever growing need to perfect the skills necessary to life and to order the sensorial impressions he has gathered from the environment and put them to use daily.
(b)Children’s Diet and Exercise
Physical safety and a healthy diet are essential in raising healthy children. Children’s growth, behavior and development can be affected by their diet. A balanced diet will help children to remain healthy as well as to grow. When Montessori first opened the “Casa dei Bambini”, the “local standards of child hygiene were not prevalent in the home” (Montessori, 2002). Therefore, Montessori believed that at least a large part of the child’s diet could be entrusted to the school (Montessori, 2002) in order to protect the children’s development. Nowadays many nurseries and pre-schools provide food appropriate to each child’s age and development, including a wide variety of nutritious foods and following strict dietary guidelines.
Montessori also believed a diet for little children should be rich in fats and sugar (Montessori, 2002). Current research has shown that children’s nutrition plays a very big role in their development, their health, and their food choices later in life. Studies also show that children are being fed diets high in fat, sugar, and salt, and that mothers are confused as to what they should be feeding their children (Venter, C.C. & Harris, G.G., 2009). Diets high in fat and sugar have been linked with diabetes in small children, and contribute to the ongoing obesity problem. Parents should feed their children a healthy diet consisting of foods from all five food groups (vegetables and legumes/beans, fruit, grains and breads, lean meats, and dairy/eggs). Parents should introduce healthy foods in early childhood because doing so allows the child to develop healthy eating habits. “Adequate nutrients, supportive social relationships and exploration in the physical environment are essential to normal growth” (McDevitt & Ormrod, 2012). Being active is important too.
Walking, climbing, dancing, running, swimming and sports build strength into bones and muscles. Being active is also the natural way of balancing food intake: the more active children are, the more likely they are to grow healthily. Montessori, however, did not appreciate formal gymnastics for psychomotor development, stating “the guiding spirit in such gymnastics is coercion, and I feel that such exercises repress spontaneous movements and impose others in their place” (Montessori, 2002). She designed playground equipment based on observation of children’s play, and classroom furniture was all designed with the body proportions of each age range in mind. She also created or offered pieces of gymnasium apparatus such as climbing wires or frames, trampolines (created by Seguin), a low wooden platform for jumping, and rope ladders. The Montessori Method is one that supports the importance of play and movement, and it is currently being applied to children of varying cultures around the globe.
(c) Nature in Education
As our lives become more technologically advanced and driven, many children have very little access to a natural habitat in their neighborhood environment. Young children develop their sensory, cognitive and motor skills in relationship to the natural world. Maria Montessori had a profound respect for nature and believed that it should play a large part in the prepared environment, as children are naturally attracted to nature. “Montessori emphasized the importance of contact with nature for the developing child. Man still belongs to nature and, especially when he is a child” (Lillard, 2011). It is for this reason that all materials used in the environment should be of natural origin as far as possible and not synthetic or plastic. The child needs materials that represent the real world, bringing him into closer contact with reality and showing him the limits of nature and reality.
The care of plants and flowers in a small garden, and of animals such as rabbits and goldfish, is also recommended in the class for contact with and understanding of nature. There is also only one of each activity in the environment; this reflects the reality of nature, where the child cannot always have whatever he wants but must develop patience and respect for the materials and for the other children working around him. The Montessori outdoor environment should be designed to appeal to the child’s natural desire to explore the world around him. It should include natural elements such as water, rock, wood, sand, stone, grass and bark to facilitate further exploration of nature. A pond with a waterfall can also be made so that children learn about aquatic plants and animals and can study water pollution and water conservation. The key to the success of our outdoor environment is preparing it with purposeful, engaging activities that are hands-on, real and practical. We should structure the environment in such a way that children can make discoveries on their own.
“Children appear to have a natural curiosity about their world. Factors such as motivation and confidence in their own abilities, depend largely on experiences with the environment” (McDevitt & Ormrod, 2012). Following that trend, natural play spaces are becoming more common in childcare centers everywhere. A natural play space or playground is one with no manufactured play structures: it is all based on nature, using natural materials for the playground. These may include sand pits, water, vegetation, boulders or other rocks, textured pathways, and so on. Current research and books agree with Montessori that “direct exposure to nature is essential for healthy childhood development and for the physical and emotional health of children and adults” (Louv, 2008).
(d) Education of the Senses
A child’s journey in life begins right from the time that he is in his mother’s womb, increasing in size and developing his physical structures.
Once he is born and leaves the comfort of his mother’s womb, he must go through a period of reconstruction, developing in movement, speech and other areas. However, the child does not possess a fixed way of behaving, or any innate way of acting, thinking and exercising control, unlike animals, which are immediately able to walk or even run as soon as they are born. Instead he has patterns of mental power waiting to unfold. He gradually unfolds to exhibit the characteristics of his kind in movement, speech and action, guided by an inner guide. According to Maria Montessori, this is the real identity of the child, the real revelation. “It is necessary to begin the education of senses in the formative period, if we wish to perfect the sense development with the education which is to follow” (Montessori, 2002). “Infants can learn a lot about the world from their sensory and perceptual abilities” (McDevitt and Ormrod, 2012).
Sensorial comes from the words sense or senses, and Montessori believed that “the first of the child’s organs to begin functioning are his senses” (Montessori, 2012). Sensorial education helps develop a child’s intellect, and we can further it through education, building upon experiences and thought processes. The aim of the sensorial work is to help a child gain clear, conscious information and to be able to analyze it. “The development of the senses precedes that of the higher intellectual powers, and in the child between three and six years of age, it is in the formative period. We can then help the development of the senses during this very period, graduating and adapting the stimuli just as we ought, to aid the acquisition of speech, before it is completely developed. All the education of early childhood ought to be based on this principle – to aid the natural development of the child.” (Montessori, 2004). Children learn about the world by touching, tasting, smelling, seeing and hearing. Through his senses, the child studies and understands his environment.
I would like to conclude this assignment by discussing Montessori’s “theory of scientific pedagogy”, based on her own statement: “Truly there is an urgent need today of reforming the methods of instruction and education, and he who aims at such a renewal is struggling for the regeneration of mankind.” (Montessori, 2004). First, Vygotsky and Montessori have a lot in common. Both were trained as doctors, and both worked with children with special needs before they went on to develop their own views of children’s development and learning. Secondly, both were very acute observers of children; Montessori made observation a keystone of her method of education. Thirdly, social interaction between children and adults is a key part of learning for both. Montessori focused on the work of the teachers, based on scientific observations of the child’s development, constantly carried out and recorded by the teacher.
These observations are based on the liberty of children, made on the level of their concentration, the introduction to and mastery of each piece of material, the social development, physical health, etc. Teachers created and maintained a work cycle for them to use and followed up with these observations for individual children. This concept is related to an important principle of Vygotsky’s work, the Zone of Proximal Development. This is an important concept that relates to the difference between what a child can achieve independently and what a child can achieve with guidance and encouragement from a skilled partner (or teacher). Vygotsky sees the Zone of Proximal Development as the area where the most sensitive instruction or guidance should be given – allowing the child to develop skills they will then use on their own – developing higher mental functions. “Instruction is most effective when it is individually tailored to the children’s unique strengths and limitations.” (Bodrova &Leong, 2007).
Vygotsky also views interaction with peers as an effective way of developing skills and strategies. He suggests that teachers use cooperative learning exercises in which less competent children develop with help from more skillful peers, within the zone of proximal development. We can conclude, then, by underlining the importance of the concept of the “zone of proximal development” as a reminder to teachers of the limits of their knowledge of children, and as an admonition to be more observant and less directive concerning the learning activities of the child, based on “the fundamental principle of scientific pedagogy – the liberty of the pupil – such liberty as shall permit a development of individual, spontaneous manifestations of the child’s nature.” (Montessori, 2002).
Aldridge, Kuby, & Strevy (1992). Developing a Metatheory of Education.
Aldridge, J., & Goldman, R. L. (2007). Current Issues and Trends in Education.
Bodrova, E., & Leong, D. (2007). Tools of the Mind: The Vygotskian Approach to Early Childhood Education.
Lillard, P. P. (2011). Montessori: A Modern Approach.
Louv, R. (2008). Last Child in the Woods: Saving Our Children from Nature-Deficit Disorder.
McDevitt, T., & Ormrod, J. E. (2012). Child Development and Education.
Montessori, M. (2002). The Montessori Method.
Montessori, M. (2004). The Discovery of the Child.
Montessori, M. (2012). The Absorbent Mind.
Mossler, R. A. (2011). Child and Adolescent Development. Bridgepoint Education, Inc.
Venter, C. C., & Harris, G. G. (2009). The Development of Childhood Dietary Preferences and Their Implications for Later Adult Health. Nutrition Bulletin, 34(4), 391-394. doi: 10.1111/j.1467-3010.2009.01784.x
Understanding the Concept of Mind Mapping
Mind mapping, at its core, is a creative and logical means of note-taking that literally "maps out" your ideas. It provides a framework for organizing complex information in an easy-to-understand, visual format. With mind mapping, ideas are connected directly to the central concept, and other ideas branch out from there, creating a radiant structure that mirrors the brain's natural pattern of associative thinking.
Benefits of Creating a Mind Map
Creating a mind map isn't just about simplifying complex ideas; it also boosts organization and creativity. Here are some key benefits:
- Improved Organization: Mind maps break down large chunks of data into smaller, manageable segments. This hierarchical structure aids in organizing thoughts and improving clarity.
- Enhanced Creativity: Mind maps engage both analytical and artistic parts of the brain, thereby increasing creative thinking. They serve as a canvas for freely exploring ideas without constraints.
- Increased Comprehension: The visual nature of mind maps enhances understanding by displaying the relationships between different topics, facilitating better problem-solving and decision-making.
- Higher Efficiency: Mind maps streamline project planning and brainstorming sessions by capturing thoughts quickly and easily, thereby increasing productivity and efficiency.
Components of a Good Mind Map
Identifying the Essential Elements of a Mind Map
All effective mind maps share certain essential elements:
- Central Idea: This is the core idea or theme around which the entire map is built. It is usually placed in the center of the map to denote its significance.
- Branches: Branches extend from the central idea and represent sub-topics or related ideas.
- Keywords: These are used to denote important points or concepts within the branches.
- Colors and Images: Colors help differentiate and group ideas while images act as visual aids to enhance memory and comprehension.
- Associations: These are lines or arrows that show the relationship between different ideas or points.
Understanding the Structure and Flow of a Mind Map
A well-structured mind map will naturally flow from the central node to its branches, depicting relationships and hierarchy between different nodes. The visual nature of the mind map helps in understanding the flow of ideas intuitively.
Each branch should ideally start with a keyword that encapsulates an important concept or idea related to the main topic. From this keyword, related ideas or sub-topics should radiate outward, with color-coding used to highlight different branches or group related concepts together.
In conclusion, mind mapping is a powerful tool for visually organizing thoughts, ideas, and information. It engages both hemispheres of our brains, thereby improving creativity, comprehension, and memory retention.
How to Create a Mind Map: Step-by-Step Process
A mind map can be a powerful tool for synthesizing and organizing information. With a few simple steps, you can transform a confusing array of thoughts into a clear, visual diagram.
General Process of Creating a Mind Map
Identify Your Central Idea
Start by identifying the main concept or theme. This central idea will serve as the root of your mind map and everything else will branch out from it.
Add Main Branches
Draw lines branching out from the central idea. Each branch should represent a major subtopic related to your main idea. Label each branch with a keyword.
Add Sub-Branches
From each main branch, draw additional lines representing more specific points or ideas. Label each of these "sub-branches" with a keyword.
Incorporate Visual Elements
Use different colors, symbols, or images for different branches. These visual elements can help to differentiate ideas and improve memory recall.
Review and Refine Your Map
Look over your mind map to check for any errors or areas of improvement. Make necessary revisions until you are satisfied with the result.
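The same radial structure described above, one central idea with branches and sub-branches radiating outward, is just a tree. The sketch below is a minimal illustration in Python; the node class, method names and sample content are our own and are not tied to any particular mind-mapping tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One idea in the map: a keyword plus the branches radiating from it."""
    keyword: str
    children: List["Node"] = field(default_factory=list)

    def add(self, keyword: str) -> "Node":
        """Attach a new branch (or sub-branch) and return it."""
        child = Node(keyword)
        self.children.append(child)
        return child

def outline(node: Node, depth: int = 0) -> None:
    """Print the map as an indented outline, central idea first."""
    print("  " * depth + ("- " if depth else "") + node.keyword)
    for child in node.children:
        outline(child, depth + 1)

# Example: a tiny map for planning a blog post (illustrative content only)
root = Node("Blog post")
structure = root.add("Structure")
structure.add("Intro")
structure.add("Body")
structure.add("Conclusion")
root.add("Audience").add("Beginners")
root.add("Promotion").add("Newsletter")

outline(root)
```

Printing the tree as an indented outline mirrors how most tools let you switch between the radial map view and a linear outline view.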
How to Create a Mind Map Using Boardmix
Mind mapping is a powerful brainstorming tool that allows you to visually represent and connect complex ideas. While there are various ways to create a mind map, using a dedicated software like Boardmix can make the process easier and more efficient.
Boardmix is a versatile, user-friendly platform for creating dynamic, interactive mind maps. It's an intuitive digital tool designed to help users brainstorm ideas, visualize thought processes, and collaborate in real-time. With Boardmix, you can create comprehensive mind maps that include text, images, and colors. Its easy-to-use interface and powerful features make it a preferred choice for businesses, educators, and students alike.
Creating a mind map in Boardmix involves a few simple steps:
Step 1: Open Boardmix
Begin by logging into your Boardmix account. If you don't have one, create a new account.
Step 2: Start a New Board
Once you're logged in, click on "New board" on the dashboard, then you’ll see an infinite canvas, on which you can create anything you want.
Step 3: Identify Your Central Idea
Click on the mind map icon on the left panel, choose a style you like to add the central node. This node will represent the main idea or topic of your mind map. Type in your main concept.
Step 4: Add Branches to Your Mind Map
Next, add branches to your central idea. These branches will represent major subtopics related to your main idea. To add a branch, select the central node, then click on the plus symbol that appears beside it. Type in the subtopic on each branch.
Step 5: Add Sub-branches
Similarly, you can add sub-branches or child nodes that represent more specific points related to each branch. Select a branch and click the plus symbol to add a sub-branch.
Step 6: Customize Your Map
Boardmix allows you to customize your mind map to suit your preferences. You can use different colors, fonts, and images to differentiate between ideas. You can also reposition nodes and branches as needed.
Step 7: Share Your Mind Map
Boardmix also gives you the option to share your finished mind map with others via email or a shareable link. In this way, you can work together with other stakeholders and brainstorm together for better ideas.
Using Boardmix, you can create comprehensive mind maps that help organize information visually, improve recall, and generate new ideas. With its intuitive interface and robust features, mind mapping becomes an enjoyable and fruitful process.
Practical Examples and Applications of Mind Maps
Example 1. Mind Map of Leonardo da Vinci
Example 2. Mind Map of Western Music
Example 3. Mind Map of Oral Communication
Example 4. Mind Map of Time Management Strategy
Example 5. Mind Map of Career Goals
Tips for Taking Your Mind Maps to the Next Level
Mind mapping is a versatile tool that can unlock creativity, enhance memory, and streamline problem-solving. But like any tool, its effectiveness hinges on how well you wield it. Below are some tips that can help you take your mind maps to the next level:
- Start with a Central Image
Images are processed more efficiently by the brain than words. Start your mind map with a central image that represents the core idea: it is visually stimulating and sets the stage for more creative thinking.
- Use Colors Liberally
Colors not only make your mind map more engaging but also group information, enhance recall, and stimulate creative thinking. Assign different colors to different branches or topics for visual segregation.
- Branch Out Logically
The structure of your mind map should reflect the natural flow of the subject matter. Main branches should stem directly from the central idea, with sub-branches for specific details.
- Limit Each Branch to One Keyword or Idea
Cluttering a branch with multiple ideas or lengthy sentences can make your mind map confusing. Instead, distill each point down to a single keyword or phrase.
- Use Images and Symbols
Images and symbols can convey complex ideas quickly and effectively, saving space on your mind map and aiding recall.
- Prioritize and Sequence Information
Just like reading a book, a mind map also has its narrative. Arrange branches so they form a logical sequence or show priorities.
- Make Your Mind Map Interactive
If you're using a digital mind mapping tool, utilize features such as hyperlinks, attachments, or comments to add depth to your map.
- Collaborate with Others
Mind maps can be valuable collaborative tools. Working with others can add a fresh perspective, stimulate new ideas, and create a more comprehensive picture of the topic.
- Regularly Review and Update Your Map
A mind map is not a static entity - it should evolve as your understanding of the topic grows. Regularly review and update your map to ensure it remains relevant.
- Practice Regularly
Like any other skill, your proficiency at mind mapping will improve with regular practice. Try incorporating it into different aspects of your life - work projects, personal goals, learning new subjects, etc.
These tips can elevate the effectiveness of your mind maps and open up new avenues for creativity, problem-solving, and learning. Whether you create a mind map by yourself or use a tool like Boardmix, they can help a lot.
Properly set off by trained technicians, fireworks are safe and make a beautiful display against the evening sky. But careless use of fireworks by untrained people can lead to serious injury and property damage. For this reason, the general use of fireworks is often restricted by law.
The scientific name for fireworks is pyrotechnics, from Greek words meaning “fire arts.” The propelling and exploding force in fireworks comes from a combination of saltpeter, sulfur, and charcoal. The same substances, used in different relative quantities, also make up gunpowder. Historians believe that fireworks were invented before gunpowder, and that gunpowder came about as a result of experimenting with different quantities of the same substances in the mixture. Thus fireworks existed before guns, and the first firearms hurled flaming materials.
Fireworks were manufactured in Italy as early as 1540. By the 1600s they were widely used in England and France. Most of the varieties known today, such as display rockets, aerial bombs, pin wheels (or Catherine wheels), and fountains were used in this early period. For centuries the Chinese set off fireworks to celebrate their holidays. It was not until the middle of the 19th century that the United States adopted the custom of shooting off fireworks to celebrate Independence Day.
Nearly all fireworks have the same basic parts. The starting powder catches fire; the bursting powder causes the final explosion; and the quick match leads the spark of fire from one point to another. Resin, camphor, gum, and similar substances control the strength of the explosion. The brilliant colors of fireworks come from bright-burning metallic salts. Sodium salts give a deep yellow color; calcium, red; strontium, crimson; barium, green; and copper, green and blue. Magnesium and aluminum provide an electric-white effect. Chlorine compounds are used to intensify or brighten colors.
Fireworks that soar into the air get their power from expanding gases that rush out and push much as they do in a jet engine. The gases are produced by the rapid burning of the saltpeter-sulfur-charcoal mixture. (See also jet propulsion; rocket.) Pin wheels are made by coiling long paper tubes, which are lightly filled with a fast-burning mixture, around a frame that spins on its axis. Flowerpots use the principle of the Roman candle, but the pot stays on the ground.
Fireworks serve as the basis of many useful products. Railroad trains, trucks, and cross-country buses carry fusees, or red flares, which are placed behind stalled vehicles to avert collisions. Airplanes carry parachute flares to light up the ground area for forced landings at night. Rockets, Roman candles, and blue Bengal lights were long used as signals between vessels at sea and from ship to shore, and rockets still are used as signals of distress. In World War I, advancing infantry detachments sent information to the artillery in the rear by rocket signals. In World War II, rockets projected from airplanes, ground vehicles, and ships were used by the combatants fighting on both sides.
Unfortunately, the careless handling of fireworks causes many injuries every year and even occasional deaths. Property damage in the United States may exceed 1.5 million dollars annually. Losses due to fireworks have been reduced through organizations interested in fire prevention and human welfare. Such groups urge the adoption of laws that forbid or limit the sale of fireworks to retail purchasers. These laws usually permit the display of fireworks for special events under proper supervision.
In an ever-evolving educational landscape, where we seek to nurture young minds and support their emotional development, the concept of calming corners has emerged as a powerful tool. These dedicated spaces in classrooms are more than just a trend; they are essential for promoting emotional resilience and well-being in our students. Let’s delve into the importance of calming corners in classrooms and the transformative impact they can have.
Empowering Emotional Regulation
In today’s fast-paced world, students face a multitude of stressors, from academic pressures to personal challenges. Calming corners provide a safe haven for students to manage their emotions and find solace when needed. By teaching children to recognize and cope with their feelings, we empower them to navigate life’s ups and downs with grace and resilience.
Enhancing Focus and Learning
We all know that a calm mind is a receptive one. Calming corners help students regain focus and concentration, ensuring a more conducive learning environment. When a child can effectively self-regulate their emotions, they are better prepared to engage with lessons, absorb knowledge, and excel academically.
Conflict Resolution and Inclusivity
Calming corners serve as spaces where students can de-escalate emotional conflicts, learn to resolve disagreements, and practice empathy. Moreover, they promote inclusivity by accommodating students with diverse emotional needs, creating a classroom atmosphere that values and supports every individual.
Supporting Mental Health
In the wake of the challenges posed by the COVID-19 pandemic, the importance of mental health in education has become increasingly evident. Calming corners contribute to students’ overall well-being by offering a sanctuary for stress and anxiety management. By providing this support, we reduce the risk of mental health issues in our young learners.
Positive Behavioral Management
Calming corners can be instrumental in positive behavioral management. By reinforcing positive choices and providing an alternative to punitive measures, they help reduce disruptive behaviors and promote a culture of responsibility and self-awareness.
Life Skills for the Future
Teaching emotional regulation and self-soothing techniques in the classroom equips students with invaluable life skills. These skills extend far beyond the confines of the school and help them navigate the complexities of adulthood with a strong emotional foundation.
Implementing calming corners in our classrooms can create a compassionate and effective educational environment that nurtures not only young minds but also their hearts. Together, we can empower the next generation with the tools to thrive emotionally and academically.
Let’s make our classrooms places where emotional resilience and well-being are not just aspirations but integral parts of the learning process. Your commitment to this vision can shape the future of our students and their capacity to thrive in a rapidly changing world.
In this video, Yoga 4 Classrooms owner Sarah Kirk shares how to use calming corners at school and at home to help young people learn emotional regulation and emotion identification.
Discover The Extraordinary Power Of Song For Teaching Mathematics. See For Yourself That Songs Can Help Children Learn While Having Fun.
Unique resources to help secure the basics and develop mastery for children aged 3-11
Ideal for busy teachers - immediately integrates into your lessons
Engage children in active, creative and practical mathematics that bring learning to life
Equip children with deep mathematical understanding - even on the more challenging concepts
Support children with differing needs and expectations
Discover how other teachers benefit
Ready-to-use videos, songs, stories and activities for instant impact in your classroom
Number and Place Value
Conservation of number, number sense, arrangement styles, part/whole diagrams, counting on and back, partitioning, sequences, rounding, intervals across zero etc.
Addition and Subtraction
Number bonds, balance, addition and subtraction structures (e.g. comparison, partitioning), jotting and mental strategies, formal strategies, number lines, vocabulary, functions etc.
Multiplication and Division
Doubling, halving, times-tables up to 12×12, grouping, sharing, chunking, sequences, vocabulary, structures, functions, factors, primes, squares and cubes etc.
Fractions, Decimals and Percentages
Halves, quarters, counting, fractions, decimals and percentages, equivalences, fractions to 1, proper and improper fractions etc.
Measurement
Vocabulary, units, dates, times, events, money, clocks, perimeter, area etc.
Position and direction
Vocabulary, left & right, concepts, movement, compass points etc.
Shape and Geometry
2-D and 3-D shapes, patterns, polygons, symmetry, categorising quadrilaterals, angles etc.
Ratio and Proportion
Ratios, proportions, relative sizes and missing values, comparisons, percentages of amounts etc.
And even more…
Statistical diagrams, problem solving strategies, algebra, mental recall, celebrating mathematics etc.
So… what makes Number Fun different?
After 30 years of experience we know a thing or two about what really works in the classroom.
We’ve taken the common challenges teachers tell us they continue to face in teaching primary mathematics, and we have designed resources to help with all of them.
Any of these seem familiar?
See how Number Fun helps
Children do not retain mathematical facts and concepts - like simply remembering times-tables and their number bonds
Number Fun songs, coupled with powerful visualisations, provide a tried and tested approach to improved retention and comprehension (and yes, times-tables are in there)
Children are unable to apply mathematics to real-world scenarios
Dave's Number Fun songs are brought to life by an array of fun-filled characters and stories that relate mathematics to everyday life
Finding fun, interactive and novel ideas to make each lesson engaging and meaningful
Teachers consistently tell us that children love the interactive and fun-filled nature of the Number Fun songs
Managing the demands of the curriculum and spending hours searching for and preparing high quality resources
The Number Fun Portal comes with over 200 ready to use songs and supporting resources categorised by domain and age.
Time efficient ways of helping children learn and remember mathematical concepts and facts
Number Fun utilises the power of music, helping children to learn mathematics in a brain-friendly way. Children just can't get it out of their heads (as Kylie would say!)
When teachers build Number Fun into their lesson planning through the year, they frequently tell us how deeply satisfying it is to see the children enjoying and progressing in their learning.
How do I use Number Fun?
Number Fun is designed from the ground-up to fit into any lesson in any maths curriculum or scheme. It provides complementary content using songs and activities to deeply embed mathematical concepts learnt by children the world over.
When you’ve selected your resource you can get started instantly and then reinforce the learning points by tailoring the content to your needs.
First, get started straight out of the box
Later, personalise the content to reinforce learning
Here’s some sample resources for each age group that you can try out – and a video guide to walk you through the process.
Ready to join?
Why is Number Fun based around songs?
There is a significant body of research demonstrating that music and song aid learning. Allen & Wood (2012) state that for teachers, music is ‘one of the most powerful teaching tools at their disposal’.
Dave Godfrey, the founder of Number Fun, completed his master’s dissertation on the use of songs as a powerful tool for teaching and learning mathematics across the primary age-range.
Is there a parent version of Number Fun?
Yes. Check it out here.
I'm not musical - will Number Fun work for me?
Not a problem – the songs and videos are all set up and ready to roll, no musical expertise is necessary!
My school uses a scheme of work. Can I still use Number Fun?
Number Fun is not a scheme of work. It is a creative and powerful tool that can enhance the delivery of mathematics within any scheme of work. Each song has been written to support a specific objective from the mathematics curriculum – they sit independently of any one scheme or programme.
How do the songs fit into a standard mathematics lesson?
Number Fun songs can be used at any point in a mathematics lesson. Songs can be a great way of engaging children in learning and when introducing a topic. Number Fun songs can become the focus of exploration as children seek to understand the mathematical concepts embedded within them. Songs are a wonderfully non-threatening way of extending and assessing children’s learning. Songs can also be used at other points in the day to reinforce learning or to link to other areas of the curriculum.
What are the three different types of Number Fun songs?
In my master’s research I identified three distinct types of Number Fun songs:
1) Concept Songs – these help children learn a mathematical concept by making sense and finding meaning in the song. Typically these songs come in the form of a story.
2) Input/Output Songs – these songs provide a fun-filled and creative structure for practicing concepts. The teacher (or a child) provides an ‘input’ and the children answer with an ‘output’. These songs are inclusive and non-threatening as well as being highly adaptable.
3) Memory Songs – these songs are written to help children remember and recall mathematical facts, language and definitions. These work because of the Kylie Minogue Effect – ‘I just can’t get you out of my head’!
Some songs contain the characteristics of more than one type of song!
Can my school buy Number Fun?
Yes. We offer a special school-wide licence discounted from the regular classroom rate. If your SLT need some convincing and you buy a licence for your classroom, we’re more than happy to refund you when the school purchases.
Can I try Number Fun for free?
There is demo material on this page, and from time to time we do offer other trials – but you can get started on a monthly membership for £3.60, and cancel when you like so it’s low risk to check it out.
My school's internet is not great - will Number Fun work for me?
We aim to deliver high quality video so recommend you have 5 Mb/s bandwidth available to stream the songs and animation. This is readily provided by a 4G signal and most broadband connections.
Does Number Fun cover the curriculum?
Number Fun has resources aligned to each of the curriculum domains. There is not a song covering every single objective but there are over 200 covering all the core concepts and the majority of key objectives. More material is being added every month.
Does Number Fun provide lesson plans?
Number Fun is a complementary resource; we’re not looking to take over your lessons. Our aim is to provide fun and engaging content to help children remember the facts and deepen their mathematical skills. The Number Fun songs, games and activities can support you in delivering engaging and innovative lessons.
More from other teachers
The Number Fun portal is so easy to use that even the least tech savvy staff will have no problems navigating around it.
Number Fun = Children loving their maths = Children learning maths.
Rachel – Early Years Lead, York
A fantastic resource to bring mathematics to life – catchy songs to support the learning of the maths curriculum, backed up with a range of resources to support teaching for conceptual understanding !
Alison – Lecturer in Primary Mathematics, Leicester
The new video presentations are incredibly helpful to maximise the impact of these amazing songs. I have been using Number Fun songs with children for over a decade. The new portal has taken the whole product to a new level.
Michael – Headteacher, York
Results from a new study published in the Journal of Applied Ecology indicate that dim light pollution may have detrimental effects on insect populations and may explain part of the ongoing, large-scale insect declines around the world.
During the study, investigators raised the offspring of moths from urban and rural populations from North- and Mid-European countries and treated them with and without dim light at night. The researchers assessed the induction of diapause, a dormant state that is critical for survival through the winter.
Light treatment affected diapause overall, but more so in Mid- than in North-European populations. In fact, no Mid-European moths entered diapause when exposed to artificial light at night. The impact of light treatment occurred in both urban and rural populations, and there was a lack of urban adaptation in response to light pollution.
“For mitigating the adverse effects of human activities on insects, our results are promising in the sense that this is a factor that can be fairly easily tackled,” said corresponding author Thomas Merckx, PhD, of Vrije Universiteit Brussel, in Belgium. “We show that moths living in both urban and rural settings are sensitive to even dim levels of light pollution. Thus, decreasing light pollution should be a key priority in protecting insects and in safeguarding the ecosystem services they provide us with.”
URL upon publication: https://onlinelibrary.wiley.com/doi/10.1111/1365-2664.14373
About the Journal
The Journal of Applied Ecology publishes novel, broad-reaching and high-impact papers on the interface between ecological science and the management of biological resources. The journal includes all major themes in applied ecology, such as conservation biology, global change, environmental pollution, wildlife and habitat management, land use and management, aquatic resources, restoration ecology, and the management of pests, weeds and disease.
Journal of Applied Ecology
Dim light pollution prevents diapause induction in urban and rural moths
Faced with uncertainty, birds called brood parasites literally put their eggs in more baskets, researchers report.
Brood parasites are birds that are known to lay their eggs in other birds’ nests. Cowbirds and cuckoos are among the most famous examples of this group.
“When brood parasites face increased ecological risks—for example, greater climatic uncertainty in their environment, or greater uncertainty with regards to the availability or behavior of their hosts—they turn to bet-hedging,” says Carlos Botero, assistant professor of biology at Washington University in St. Louis and senior author of a new study in Nature Communications.
“In other words, when it is difficult to predict the ideal host, parasites literally lay their few precious eggs in more than one basket,” he says. “This means increasing not just the number of different host species they use, but also expanding the diversity of taxonomic families that they choose as hosts.”
A birder himself, Botero says he is fascinated by things animals do that fall outside the boundaries of what some think of as “typical”—like brood parasitism.
“Parasite mothers can’t really do much about the behaviors that their hosts will display as surrogate parents,” Botero says. “With bet-hedging in the choice of hosts, parasites are at least able to increase the chances that one—or a few—of the surrogate parents they choose will end up behaving in the optimal way.”
Botero and his colleagues observed a pattern they considered striking.
The researchers aggregated environmental, parasite, and host species data associated with 84 species of obligate avian brood parasites from 19 genera and five different bird families. Their list covered approximately 86% of all known parasitic bird species.
For all of these birds, host behavior is critical when it comes to countering environmental threats. Even small differences in the nest architecture, habitat selection, breeding timing, or incubation behavior of the chosen surrogate parents can have life or death implications for young parasitic chicks.
A brood parasite’s properly “hedged” portfolio must include a reasonable diversity of host types to ensure that at least some reproductive success is achieved—no matter what environmental conditions they experience in any given year.
But bet-hedging does come at a cost, the researchers say.
“A bet-hedging strategy involves making some or sometimes even many ‘wrong’ choices,” Botero says. “For example, for years in which the behaviors, timing, and nest type of a given host clearly work better than those of other species, it would be clearly ideal to stick with that option and avoid wasting eggs on others.”
The problem is, parasitic birds that live in variable and unpredictable environments can’t know at the onset which option will work best that year.
“Parasitic mothers that diversify their egg-laying choices may not contribute as many offspring to any given generation as they would have if they had chosen the best host type that year,” Botero says. “But, over time, they will end up contributing a much larger total number of offspring to future generations by fledging some offspring every year.
“It is this long-term vision that allows bet-hedging lineages to prevail and to steer the course of evolution so that in the end, everyone in their species bet-hedges.”
Additional coauthors are from the University of Illinois Urbana-Champaign and Columbia University.
A surprising part of ants' evolutionary success is the amazing sense of smell that lets them recognize, communicate and cooperate with one another.
Ants live in complex colonies, sometimes referred to as nests, that are home to a wide range of social interactions. Here, one or more queens are responsible for all the reproduction within that colony. The vast majority of colony members are female workers – sisters that never mate or reproduce and live only to serve the group.
Ants need to defend their colony, seek food and take care of offspring. To accomplish these tasks some ant species domesticate other insects, while others create agricultural systems, harvesting leaves from which they grow edible fungal gardens. Successfully coordinating all these intricate tasks requires reliable and secure communication among nestmates.
We are biologists who study the remarkable sensory abilities of ants. Our recent work shows how their societies depend on the exchange of reliable information which, if disrupted, spells doom for their colonies.
Human communication relies primarily on verbal and visual cues. We usually identify our friends by the sound of their voice, the appearance of their face or the clothes they wear. Ants, however, rely primarily on their acute sense of smell.
An exterior shell, known as an exoskeleton, encases an ant’s body. This greasy coat carries a unique scent that varies from individual to individual and gives each ant a unique odor signature that other ants can detect. This odor signature can communicate important information.
The queen, for example, will smell slightly different from a worker, and thus receive special treatment within the colony. Importantly, ants from different colonies will smell slightly different from one another. The detection and decoding of these differences is vital for colony defense and can trigger aggressive turf wars between colonies when ants catch a whiff of intruders.
For ants and other insects, receiving chemical information begins when an odor enters the small hairs located along their antennae. These hairs are hollow and contain special receptors, called chemosensory neurons, that sort and send the chemical information to the ant’s brain.
Odors, such as those given off from an ant’s greasy coat, act like chemical “keys.” Ants can smell these odor keys only if they are inserted into the correct set of chemosensory neuron “locks.” A neuronal lock remains shut to any odors except its particular key. When the correct key binds to the correct neuronal lock, though, the receptor sends a complex message to the brain. The ant’s brain is able to decode this sensory information to make decisions that ultimately lead to cooperation between nestmates – or battles between non-nestmates.
Changing the Locks
To better understand how ants detect and communicate information, we use laboratory tools such as precisely targeted drugs and genetic engineering to manipulate their sense of smell. We are especially interested in what happens when an ant’s sense of smell goes wrong.
For example, when we prevent an odor “key” from opening a chemosensory “lock,” it prevents the chemical information from reaching the brain. This would be like plugging your nose or standing in a completely dark room – no scents or sights would register. We can also open all the “locks” at the same time, which floods the neurons with too many messages. Both of these scenarios dramatically compromise an ant’s ability to detect and receive accurate information.
When we messed with ants’ sense of smell – whether shutting down or flooding their odor receptors – we found they no longer attacked non-nestmates. Instead, they became less aggressive. In the absence of clear information, ants exercised restraint and opted to accept rather than attack their fellow ant. Put another way, ants ask questions first and shoot later.
We believe this social restraint is hard-wired and gives ants an evolutionary advantage. When you live in a colony with tens of thousands of sisters, a simple case of mistaken identity or miscommunication could lead to deadly infighting and societal chaos, which is potentially very costly.
Ants whose sense of smell is disrupted not only fail to recognize and attack foes; they also stop cooperating with their friends. Without nurses to take care of the young or foragers to collect food, the eggs dry up and the queen goes hungry.
We discovered that without an accurate means of communicating and receiving chemical information, ant societies collapse and the colony quickly dies. Miscommunication or the lack of accurate information affects other highly social animals, including humans, as well. For ants, it all depends on their sense of smell.
There isn't a habitat or landscape that climate change hasn't already affected. From the boreal forests of Canada to the arid deserts of Arizona, species are adapting to shifts both subtle and substantial. Audubon's new climate report Survival by Degrees: 389 Birds Species on the Brink looks at the species within 12 different habitat types to gauge their level of risk if temperatures continue to climb. The report's findings are a wake-up call: As many as 389 out of 604 North American species face unlivable climate conditions if global temperatures hit 3 degrees Celsius above preindustrial levels.
And yet there is also hope in the report. By acting now, we can potentially help 290 bird species by keeping global temperatures below 1.5 degrees of warming. You can read more about the habitat groups, warming scenarios, and what species in your state are most at risk here, but for a quick preview of the report, here are five birds facing potential range loss—and how you can help them.
Tiny flames of orange amid dark spruce trees, Blackburnian Warblers are among the most colorful birds nesting in the boreal ecosystem. More than two dozen species of warblers breed in this vast northern forest. All move south in fall, but Blackburnians migrate farther than most, spending the winter mainly on the high, forested slopes of the Andes in South America. Some individuals make an annual round trip of more than 9,000 miles. During their spring and autumn journeys, Blackburnian Warblers pop up in woodlands and urban parks, delighting fresh-eyed and seasoned birders alike.
Outlook: Warming is likely to diminish the spruce forests favored by breeding Blackburnians—especially near its southern limits in the northeastern states and southern Canada. The warblers’ wintering range in the Andes is under threat, too, as loggers, miners, and farmers clear trees and contribute to habitat degradation. Because of their epic migration, these birds are also highly dependent on good stopover spots, which are becoming tougher to find as development continues to encroach.
Actions: As the Blackburnian’s breeding range creeps farther north from its wintering grounds, its average migration will become even longer. That makes it even more essential to protect and create stopover habitat, by planting native trees in communities along the birds’ path, for example. Campaigns to reduce building collisions and window strikes will also help warblers and other migrants. Several cities, such as Houston and Toledo, have recently adopted initiatives to turn off lights in tall city buildings on peak flight nights.
Before the first cool days of fall, flocks of Sanderlings are back on the beach, scooting along the sand’s edge as they chase the waves, snapping up tiny prey left behind by the receding water. These small pale sandpipers have come a long way from the high Arctic tundra, where they nest for a short spell in early summer. And many have a long way to go. Their winter range stretches from Alaska and northern Europe to the southernmost beaches of South America, Africa, Australia, and New Zealand.
Outlook: As the planet warms, much of Sanderlings’ coastal habitat is predicted to be engulfed by rising seas. Sanderlings typically gather on sloping sands, and while some beaches may migrate farther inland, that transition will occur more slowly than the birds need. Their Arctic breeding grounds, meanwhile, will experience severe warming, which could cause new plant growth to crowd the bare, open sites the birds prefer and diminish nearby surface water, rendering the land unsuitable.
Actions: Advocating for natural buffers rather than seawalls will make coasts more resilient to storms and sea-level rise, and beach stewards can provide Sanderlings with crucial protection from human disturbance while the birds are at the shore. Wiser marine-wildlife management would also give species a cushion: For example, Sanderlings feed on horseshoe crab eggs during spring migration, so protecting the arthropods may produce benefits up the food chain.
They’re the closest living relatives of the Great Auk, but unlike their extinct cousins, Razorbills can fly. This allows them to reach secure nesting sites on rocky islands around the North Atlantic, from Maine north to Labrador and Greenland, and in Iceland and northwestern Europe. Small flocks gather on the ocean surface over shoals and upwellings, slipping underwater to pursue schools of fish like herring, sandeels, and capelin. Then, like puffins, they wing back to feed their young with multiple fish lined up in their bills. In winter Razorbills disperse and some move south to the Middle Atlantic states, concentrated in waters over the continental shelf.
Outlook: Because their nests are so well hidden, it takes serious effort to census Razorbills. Gains in some areas, such as the Gulf of Maine, have been offset by losses elsewhere. Iceland, which hosts more than half the world’s breeding Razorbills, has seen a sharp decline since about 2005. That drop coincided with a crash in sandeels that was caused, some evidence suggests, by rising sea surface temperatures. So climate-driven changes to the food chain loom as a threat for these auks and many other birds that dwell in cold seas.
Actions: Taking steps to prevent deadly oil spills, including halting offshore drilling and tightening safety rules for tankers and pipelines, would help seabirds such as Razorbills. Because the birds become entangled in fishing nets as they swim and dive, keeping commercial fishing activities away from nesting colonies would also reduce accidental deaths. The Forage Fish Conservation Act, introduced in Congress this past spring, would help ensure they have food to catch by promoting better management of marine prey.
The rich, fluting songs of Wood Thrushes, which John James Audubon described as “the delightful music of this harbinger of day,” were once among the most familiar summer sounds in the eastern United States. These songbirds have become less common in recent decades, but they can still be found along streams and rivers, in deep forest interiors, and in some shady suburbs and large city parks, hunting for insects or feasting on small fruits. In fall the species migrates to grounds in southern Mexico and northern Central America, where solitary birds defend their understory territories.
Given their three-inch, feather-light bodies, Rufous Hummingbirds make an astounding journey, winging from wintering grounds in southern Mexico to nest as far north as Alaska. The tiny aerialists take an elliptical route that dovetails neatly with seasonal change: northwest through blooming deserts in early spring, southeast through lush mountain meadows in late summer. Hundreds of Rufous Hummingbirds, and other western hummingbird species, have also begun wintering in the southeastern United States, especially in lush urban gardens near cities like Baton Rouge.
Outlook: Rufous Hummingbirds seem able to adapt to a variety of habitats, including mature forest, patchy second-growth scrub, and even suburbs and city parks, as long as flowers are available. But the birds’ limits are not fully understood, and a warming climate may reduce their overall breeding range. What’s more, if shifts in temperature and precipitation trigger earlier blooms, they’ll throw off the birds’ precisely timed travels.
Actions: In keeping with their diminutive size, hummingbirds can be supported by habitat decisions on the smallest scale. Even a tiny yard or garden can be turned into a haven, providing these feathered gems with critical resources for survival. The ingredients should include a good mix of native plants that flower in different seasons, trees and shrubs for cover, and supplementary sugar-water feeders, especially in chilly weather. It’s also important to avoid use of pesticides, as hummingbirds eat many small insects as well.
This story originally ran in the Fall 2019 issue as “In Focus.”
If a mixture of methane and chlorine is exposed to a flame, it explodes - producing carbon and hydrogen chloride. This is not a very useful reaction! The reaction we are going to explore is a more gentle one between methane and chlorine in the presence of ultraviolet light - typically sunlight. This is a good example of a photochemical reaction - a reaction brought about by light.
\[ CH_4 + Cl_2 \rightarrow CH_3Cl + HCl\]
The organic product is chloromethane. One of the hydrogen atoms in the methane has been replaced by a chlorine atom, so this is a substitution reaction. However, the reaction does not stop there, and all the hydrogens in the methane can in turn be replaced by chlorine atoms. Multiple substitution is dealt with on a separate page, and you will find a link to that at the bottom of this page.
The mechanism involves a chain reaction. During a chain reaction, for every reactive species you start off with, a new one is generated at the end - and this keeps the process going. The over-all process is known as free radical substitution, or as a free radical chain reaction.
- Chain initiation: The chain is initiated (started) by UV light breaking a chlorine molecule into free radicals.
\[ Cl_2 \rightarrow 2Cl^{\bullet} \]
- Chain propagation reactions : These are the reactions which keep the chain going.
\[ CH_4 + Cl^{\bullet} \rightarrow CH_3^{\bullet} + HCl \]
\[ CH_3^{\bullet} + Cl_2 \rightarrow CH_3Cl + Cl^{\bullet} \]
- Chain termination reactions: These are reactions which remove free radicals from the system without replacing them by new ones.
\[ CH_3^{\bullet} + Cl^{\bullet} \rightarrow CH_3Cl \]
\[ CH_3^{\bullet} + CH_3^{\bullet} \rightarrow CH_3CH_3 \]
Jim Clark (Chemguide.co.uk) |
Years ago I had an uncle who taught himself calligraphy. He spent many hours practicing various letter forms and styles. He wrote the same sentence repeatedly: The quick brown fox jumps over the lazy dog.
I later learned that this sentence is a pangram. A pangram is a sentence that contains every letter of the alphabet, 26 letters for the English alphabet. Creating pangrams is a great creative activity, and elementary teachers often teach lessons on pangrams. Beyond elementary school, many have tried to develop the shortest pangram that makes sense.
These are some common pangrams:
- The five boxing wizards jump quickly. (31 characters)
- Pack my box with five dozen liquor jugs. (32 characters)
- Quick brown foxes jump over the lazy dog. (33 characters)
- Watch “Jeopardy!”, Alex Trebek’s fun TV quiz game. (37 characters)
These are ones I created – or at least I have not seen these elsewhere:
- Tranquil zephyr winds move JX’s flock to the bog. (40 characters)
- Very quietly, zebras chewed noxious packages of jam. (43 characters)
- Five naked girls came quickly with a box of zebra PJs. (43 characters)
Use for Pangrams
I recently started practicing a letter form similar in appearance to architectural lettering. The instructor in the videos I watched recommended writing sentences rather than filling a page with repeats of the same letter or of the alphabet. Every day, like my uncle, I write out the pangrams above a couple of times, once on a worksheet with small squares, and once on a lined template that I developed.
This is a recent practice using Procreate. I made some errors, including writing “bottles” instead of “jugs” on the second pangram:
The great thing about pangrams is that they can be used for any lettering practice. Calligraphy, comic book lettering, architectural lettering, or basic handwriting can be improved through practice. Additionally, pangrams are useful when testing fonts and computer keyboards since each letter is included.
While trying to come up with additional pangrams, I decided to create a worksheet in Google Sheets that calculates the length of a pangram and also indicates which letters of the alphabet have been used.
Using this worksheet I came up with a couple longer pangrams. Counting letters only, both of these are 43 characters in length. Neither of these are particularly memorable.
- Lower my pack before I go quiz the jovial ex-students.
- A zebra and fox jacked up quiet cows, giving lambs hay.
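For anyone who would rather script the same check than use a spreadsheet, here is a small illustrative sketch in C (not the worksheet itself); it counts the letters in a candidate sentence and lists any letters of the alphabet that are missing:

#include <ctype.h>
#include <stdio.h>

int main(void) {
    const char *candidate = "Lower my pack before I go quiz the jovial ex-students.";
    int seen[26] = {0};
    int letter_count = 0;

    /* tally each alphabetic character, ignoring case and punctuation */
    for (const char *p = candidate; *p; p++) {
        if (isalpha((unsigned char)*p)) {
            seen[tolower((unsigned char)*p) - 'a']++;
            letter_count++;
        }
    }

    printf("letters: %d\nmissing:", letter_count);
    int missing = 0;
    for (int i = 0; i < 26; i++) {
        if (!seen[i]) { printf(" %c", 'a' + i); missing = 1; }
    }
    printf(missing ? "\n" : " none - it's a pangram!\n");
    return 0;
}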
You can check out the worksheet at the link below:
If you need to practice lettering, definitely use pangrams. If you need a good creative exercise, try creating your own. |
Vitamin D is essential in the body for increasing intestinal absorption of calcium, iron, magnesium, phosphate, and zinc. The most important compounds in this group for humans are vitamin D3 (also known as cholecalciferol) and vitamin D2 (ergocalciferol).
There are few foods which contain vitamin D. Synthesis of vitamin D (specifically cholecalciferol) in the skin is the main natural source of the vitamin. Vitamin D is made in the skin from cholesterol during a chemical reaction which depends on sunlight (specifically UVB radiation).
The importance of Vitamin D in children:
Vitamin D is an important nutrient that works with calcium to help build bones of children and keep them strong. It also prevents several health problems like heart diseases, diabetes, osteoporosis, and thinning of bones.
Our bodies naturally produce vitamin D when we are outside in the sun. Besides getting vitamin D through sunlight, children can get it through certain foods. Children get enough vitamin D naturally from being in the open air during daily activities, such as walking, biking, or playing sports. Getting a sufficient amount of vitamin D is important for the normal growth and development of bones and teeth, and for resistance against certain diseases.
Deficiency of Vitamin D
A deficiency of vitamin D, combined with insufficient sun exposure, causes osteomalacia (or rickets when it occurs in children), a softening of the bones. Worldwide, this is now a rare disease.
Vitamin D deficiency is nevertheless a widespread problem in the elderly and is also common in children and adults. Low blood calcifediol (25-hydroxy-vitamin D) typically results from avoiding the sun. Vitamin D deficiency also causes impaired bone mineralization and bone damage, which lead to bone-softening diseases including rickets and osteomalacia.
Symptoms of vitamin D deficiency in children include:
- Bone pain or tenderness
- Dental deformities
- Impaired growth
- Muscle cramp
- Short stature |
We have already discussed examples of position functions in the previous section. We now turn our attention to velocity and acceleration functions in order to understand the role that these quantities play in describing the motion of objects. We will find that position, velocity, and acceleration are all tightly interconnected notions.
Velocity in One Dimension
In one dimension, velocity is almost exactly the same as what we normally call speed. The speed of an object (relative to some fixed reference frame) is a measure of "how fast" the object is going--and coincides precisely with the idea of speed that we normally use in reference to a moving vehicle. Velocity in one dimension, however, takes into account one additional piece of information that speed does not: the direction of the moving object. Once a coordinate axis has been chosen for a particular problem, the velocity v of an object moving at a speed s will either be v = s, if the object is moving in the positive direction, or v = -s, if the object is moving in the opposite (negative) direction.
More explicitly, the velocity of an object is its change in position per unit time, and is hence usually given in units such as m/s (meters per second) or km/hr (kilometers per hour). The velocity function, v(t), of an object will give the object's velocity at each instant in time--just as the speedometer of a car allows the driver to see how fast he is going. The value of the function v at a particular time t0 is also known as the instantaneous velocity of the object at time t = t0, although the word "instantaneous" here is a bit redundant and is usually used only to emphasize the distinction between the velocity of an object at a particular instant and its "average velocity" over a longer time interval. (Those familiar with elementary calculus will recognize the velocity function as the time derivative of the position function.)
Average Velocity and Instantaneous Velocity
Now that we have a better grasp of what velocity is, we can more precisely define its relationship to position.
We begin by writing down the formula for average velocity. The average velocity of an object with position function x(t) over the time interval (t0, t1) is given by:
v_avg = [x(t1) - x(t0)]/(t1 - t0)
As the time intervals get smaller and smaller in the equation for average velocity, we approach the instantaneous velocity of an object. The formula we arrive at for the velocity of an object with position function x(t) at a particular instant of time t is thus:
v(t) = lim as Δt→0 of [x(t + Δt) - x(t)]/Δt = dx/dt
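As a quick worked example (the numbers are illustrative, not from the original text): a car that is at x = 20 m when t = 1 s and at x = 60 m when t = 3 s has an average velocity of (60 - 20)/(3 - 1) = 20 m/s over that interval. If instead the position function is x(t) = t^2 (meters, with t in seconds), the instantaneous velocity is v(t) = 2t, so v = 6 m/s at t = 3 s.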
Deuterated compounds – in which a drug molecule’s carbon-hydrogen bond is replaced with a carbon deuterium bond to extend the drug’s half-life – continue to show promise in potentially boosting the bioavailability and safety of some drugs.
The deuterated compound market has attracted many new companies looking to develop and patent deuterated versions of various existing, non-deuterated therapeutic compounds—known as the “Deuterium Switch.”
What is a Deuterated Drug?
A deuterated drug is a small molecule with medicinal activity. It is made by replacing one or more of the hydrogen atoms contained in the drug molecule with deuterium – a hydrogen isotope whose nucleus contains one neutron and one proton. As deuterium and hydrogen have nearly the same physical properties, deuterium substitution is the smallest structural change that can be made to a molecule.
To Deuterate or Not to Deuterate – That’s the Regulatory Question
Developers of deuterium switch compounds must show significant clinical benefits over existing non-deuterated versions to justify why they should replace existing or less expensive therapies. However, such a switch can:
- Take advantage of the clinical knowledge concerning the non-deuterated version of the compound
- Benefit from new patent protections
- Result in improved therapies and patient outcomes.
Did you know that most large pharmaceutical companies today also claim deuterated versions of new molecules in their patent applications?
Benefits of Deuterated Versions of Drugs
Deuterated versions of existing drugs can benefit from improved pharmacokinetic or toxicological properties. Because of the kinetic isotope effect, which is the change in the rate of a chemical reaction when one of the atoms in the reactants is substituted with one of its isotopes, drugs that contain deuterium may have significantly lower metabolism rates. Because the C-D bond is stronger than the C-H bond, it is more resistant to chemical or enzymatic cleavage, and the difficulty of breaking the bond can decrease the rate of metabolism. Lower metabolism rates give deuterated drugs a longer half-life, so they take longer to be eliminated from the body. This reduced metabolism can extend a drug’s desired effects, diminish its undesirable effects, and allow less frequent dosing. The replacement may also lower toxicity by reducing toxic metabolite formation.
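As a rough back-of-the-envelope illustration (the numbers are hypothetical, not from any cited study): if a drug is cleared mainly by a first-order metabolic step with rate constant k, its metabolic half-life is t1/2 = ln 2 / k, so a deuterium substitution that cuts the rate of that step in half would roughly double the half-life.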
A major potential advantage of deuterated compounds is the possibility of faster, more efficient, less costly clinical trials, because of the extensive testing the non-deuterated versions have previously undergone. The main reasons compounds fail during clinical trials are lack of efficacy, poor pharmacokinetics or toxicity. With deuterated drugs, efficacy is not in question – allowing the research to focus on pharmacokinetics and toxicity. Deuterated versions of drugs might also be able to obtain FDA approval via a 505(b)(2) NDA filing, a faster, less expensive route.
Manufacturing Deuterium Exchanged APIs
With our expertise in deuteration technology, Neuland Labs uses a synthetic approach where deuterium-enriched material is combined with the drug to produce deuterated drugs. Another approach, called an exchange approach, uses a catalyst to produce a deuterated molecule.
The most popular process for sourcing deuterium for drugs is extracting D2O from regular water via the Girdler sulfide (also known as the Geib-Spevack) process, which uses a temperature difference and hydrogen sulfide to enrich deuterium in water by up to 20%.
Deuterated Molecules Advance in Clinical Trials
While Deuteration has been around literally for decades, I mentioned in an article last year at PharmTech (Pharma APIs: It’s Still a Small World) that most deuterium chemistry efforts are currently in the pre-formulation stage.
Those deuterated compounds that have advanced are generally performing well in clinical trials. In July, a deuterated drug reached Phase III testing for the first time, in a study to treat Huntington’s disease. Known as deutetrabenazine, the drug was found to reduce the disease symptoms and the frequency of administration, and it is currently being considered for approval by the FDA.
Another investigational new drug targeting nonalcoholic steatohepatitis and adrenomyeloneuropathy (a deuterium-stabilized [R]-enantiomer of pioglitazone) recently completed FDA review and appears headed toward a Phase 1 study.
Growing Opportunity for Deuterated Drugs
The current market value of companies specializing in this technology suggests that the value of “deuterium switching” could be more than $1 billion, and that the greatest discoveries in the field have yet to occur.
3 methods to deal with outliers
In both statistics and machine learning, outlier detection is important for building an accurate model to get good results. Here three methods are discussed to detect outliers or anomalous data instances.
An outlier is a data point that is distant from other similar points. Outliers may be due to variability in the measurement or may indicate experimental errors. If possible, outliers should be excluded from the data set. However, detecting those anomalous instances can be very difficult, and is not always possible.
Machine learning algorithms are very sensitive to the range and distribution of attribute values. Outliers can spoil and mislead the training process, resulting in longer training times, less accurate models, and ultimately poorer results.
In this article, we are going to talk about three different methods of dealing with outliers:
- Univariate method: This method looks for data points with extreme values on one variable.
- Multivariate method: Here we look for unusual combinations on all the variables.
- Minkowski error: This method reduces the contribution of potential outliers in the training process.
To illustrate these methods, we will use a data set obtained from the following function.
y = sin(π·x)
Once we have our data set, we replace two y values for other ones that are far from our function. The next graph depicts this data set.
The points A=(-0.5,-1.5) and B=(0.5,0.5) are outliers. Point A is outside the range defined by the y data, while Point B is inside that range. As we will see, this makes them different in nature, and we will need different methods to detect and treat them.
1. Univariate method
One of the simplest methods for detecting outliers is the use of box plots. A box plot is a graphical display for describing the distribution of the data. Box plots use the median and the lower and upper quartiles.
Tukey’s method defines outliers as those values of the data set that fall far from the central point, the median. The maximum distance from the center of the data that is allowed is called the cleaning parameter. If the cleaning parameter is very large, the test becomes less sensitive to outliers; if it is too small, many values will be detected as outliers.
The following chart shows the box plot for the variable y. The minimum of the variable is -1.5, the first quartile is -0.707, the second quartile or median is 0, the third quartile is 0.588 and the maximum is 0.988.
As we can see, the minimum is far away from the first quartile and the median. If we set the cleaning parameter to 0.6, the Tukey’s method will detect Point A as an outlier, and clean it from the data set.
Plotting again the box plot for that variable, we can notice that the outlier has been removed. As a consequence, the distribution of the data is now much better. Now, the minimum of y is -0.9858, the first quartile is -0.588, the second quartile or median is 0.078, the third quartile is 0.707 and the maximum is 0.988.
However, this univariate method has not detected Point B, and therefore we are not finished.
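As a rough sketch of how a univariate check like this can be automated, the snippet below (in C, purely illustrative and not the article’s actual implementation, which uses a distance-from-the-median cleaning parameter) applies the common box-plot rule: values lying more than a chosen multiple of the interquartile range beyond the quartiles are flagged. The data values and the fence multiplier are made up for the example.

#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* linearly interpolated quantile of a sorted array */
static double quantile(const double *sorted, int n, double q) {
    double pos = q * (n - 1);
    int lo = (int)pos;
    double frac = pos - lo;
    return (lo + 1 < n) ? sorted[lo] * (1.0 - frac) + sorted[lo + 1] * frac
                        : sorted[lo];
}

int main(void) {
    double y[] = {2.1, 2.4, 2.5, 2.7, 2.9, 3.0, 3.2, 9.8};   /* 9.8 is the obvious outlier */
    int n = sizeof y / sizeof y[0];
    double fence_multiplier = 1.5;   /* plays the role of the cleaning parameter */

    double *sorted = malloc(n * sizeof *sorted);
    for (int i = 0; i < n; i++) sorted[i] = y[i];
    qsort(sorted, n, sizeof *sorted, cmp_double);

    double q1 = quantile(sorted, n, 0.25);
    double q3 = quantile(sorted, n, 0.75);
    double iqr = q3 - q1;

    for (int i = 0; i < n; i++)
        if (y[i] < q1 - fence_multiplier * iqr || y[i] > q3 + fence_multiplier * iqr)
            printf("outlier: y[%d] = %g\n", i, y[i]);

    free(sorted);
    return 0;
}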
2. Multivariate method
Outliers do not need to be extreme values. Therefore, as we have seen with Point B, the univariate method does not always work well. The multivariate method tries to solve that by building a model using all the data available, and then cleaning those instances with errors above a given value.
In this case, we have trained a neural network using all the available data (except Point A, which was excluded by the univariate method). Once we have our predictive model, we perform a linear regression analysis in order to obtain the next graph. The predicted values are plotted versus the actual ones as squares. The coloured line indicates the best linear fit. The grey line would indicate a perfect fit.
As we can see, there is a point that falls too far from the model. This point is spoiling the model, so we can think that it is another outlier.
To find that point quantitatively, we can calculate the maximum errors between the outputs from the model and the targets in the data. The following table lists the 5 instances with maximum errors.
We can notice that instance 11 stands out for having a large error in comparison with the others (0.430 versus 0.069,…). If we look at the linear regression graph, we can see that this instance matches the point that is far away from the model.
If we set the maximum allowed error at 20%, this method identifies Point B as an outlier and cleans it from the data set. We can see that by performing the linear regression analysis again.
There are no more outliers in our data set so the generalization capabilities of our model will improve notably.
3. Minkowski error
Now, we are going to talk about a different method for dealing with outliers. Unlike the univariate and multivariate methods, it doesn't detect and clean the outliers. Instead, it reduces the impact that outliers have on the model.
The Minkowski error is a loss index that is more insensitive to outliers than the standard sum squared error. The sum squared error raises each instance error to the square, which gives outliers too large a contribution to the total error. The Minkowski error solves that by raising each instance error to a power smaller than 2, for instance 1.5. This reduces the contribution of outliers to the total error. For instance, if an outlier has an error of 10, the squared error for that instance will be 100, while the Minkowski error will be 31.62.
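The arithmetic in that last sentence can be checked with a tiny, purely illustrative snippet:

#include <math.h>
#include <stdio.h>

int main(void) {
    double instance_error = 10.0;
    double squared   = pow(instance_error, 2.0);   /* 100.0  */
    double minkowski = pow(instance_error, 1.5);   /* ~31.62 */
    printf("squared: %.2f, Minkowski (1.5): %.2f\n", squared, minkowski);
    return 0;
}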
To illustrate this method, we are going to build two different neural network models from our data set containing two outliers (A and B). The architecture selected for this network is 1:24:1. The first model will be trained with the sum squared error, and the second one with the Minkowski error.
The model trained with sum squared error is plotted in the next figure. As we can see, two outliers are spoiling the model.
Now, we are going to train the same neural network with the Minkowski error. The resulting model is depicted next. As we can see, the Minkowski error has made the training process more insensitive to outliers than the sum squared error.
As a result, Minkowski error has improved the quality of our model notably.
We have seen that outliers are one of the main problems when building a predictive model. Indeed, they cause data scientists to achieve poorer results than they otherwise could. To solve that, we need effective methods to deal with those spurious points and remove them.
In this article, we have seen 3 different methods for dealing with outliers: the univariate method, the multivariate method and the Minkowski error. These methods are complementary and, if our data set has many and difficult outliers, we might need to try them all.
Original. Reposted with permission.
The temperature of an object or fluid is that property which determines the direction of the flow of heat from that body or fluid to an adjacent body or fluid with which it is in contact. Thus, heat flows from a body or fluid of higher temperature to a body or fluid of lower temperature. Temperature is one of the main parameters of state which defines the thermal state of the system. The temperature of all parts of the system in thermodynamic equilibrium is the same. Based on the molecular-kinetic approach, the temperature of a system characterizes the intensity of thermal motion of atoms, molecules and other particles forming the system.
For instance, for a system described by the laws of classical statistical physics the mean kinetic energy of thermal motion of particles is directly proportional to the absolute temperature of the system. In this regard we can say that the temperature characterizes the thermal motions within a body.
In thermodynamics the reciprocal of the derivative of the entropy S of a body with respect to its energy E is called the absolute temperature T:
1/T = ∂S/∂E    (1)
Temperature, like entropy, is a purely statistical quantity and makes sense only for macroscopic bodies. According to the second law of thermodynamics, energy is transferred from bodies with higher temperature to bodies with lower temperature. The absolute temperature is always positive, T > 0. The least absolute temperature possible is absolute zero. At absolute zero, the translatory and rotary motion of atoms and molecules comes to an end, and they are in a state of so-called "zero vibrations" rather than in a state of rest. By Nernst's theorem, the entropy of any body becomes zero at the absolute zero of temperature. Absolute zero is unattainable. The entropy S is a dimensionless quantity, and from Eq. (1) it follows that temperature has the dimensions of energy and can be measured in Joules. The conversion factor between Joules and Kelvins (K), called Boltzmann's constant k, is equal to k = 1.38 × 10^-23 J/K.
Actually, the temperature is usually measured in arbitrary units, degrees (Celsius degree, °C, Fahrenheit, °F, Réaumur, °R) or in "Kelvins" whose value is determined by the corresponding temperature scales.
Temperature scales are systems of sequentially numbered values corresponding to various temperatures. The temperature can be determined by measuring any quantity dependent on it and it is convenient to measure a physical property of a certain, so-called thermometric substance (for instance, the volume or pressure of a gas, the resistance of a conductor). To realize a temperature scale, we must select its origin and the dimension of the temperature unit (degree). For this purpose, we usually use two reference points — temperatures of transition of a substance from one agregate state to another. Such temperature scales are called "practical." The first practical temperature scale was suggested by Fahrenheit in 1724, in which one of the reference points was the temperature of a human body, accepted by Fahrenheit to be equal to 96 degrees (°F), the second, the temperature of ice melting, equal to 32 degrees (°F). A liquid mercury thermometer served as an interpolation device.
More accurate practical temperature scales were suggested by Celsius and Réaumur. In these scales the temperatures of melting of ice and boiling of water at atmosphere pressure were used as reference points. The temperature interval between these points in Celsius’ scale (°C) was divided by 100, and in Réaumur’s scale (°R) by 80 equal parts. In Fahrenheit's temperature scale, this temperature interval is equal to 180 (°F). In the absolute Kelvin (K) and Rankine (R) temperature scales, where the origin of scale is the absolute zero, the temperature interval is equal to 100 and 180 temperature units, respectively. The principle of constructing temperature scales suggested by Fahrenheit (the reference points and interpolation device) is used in the international temperature scales. For instance, the international temperature scale of 1927, ITS-27, was realized using two points (0°C and 100°C), the unit of temperature is the degree Celsius (°C); in the ITS-90, one point, the temperature of the triple point of water, 273.16 K, was used; the unit of temperature is the Kelvin (K). The main interpolation device is a platinum resistance thermometer.
The so-called thermodynamic temperature scale, which is independent of the particular properties of a thermometric substance, can be realized on the basis of the second law of thermodynamics, by determining the ratio of temperatures from the ratio of temperatures in Carnot’s cycle. In practice, to construct such a scale, relations are used which, whilst not contradicting the second law of thermodynamics, relate the thermodynamic temperature to some additive physical quantity which can be measured accurately enough. The most widespread are:
the gas thermometer, based on the gas law
p·Vm = RT    (2)
where R is the universal gas constant, and p, Vm, T are the pressure, molar volume and temperature of a working substance in an ideal gas state;
the acoustic thermometer, based on measuring the velocity of sound, C, in a gas
C = (γRT/μ)^(1/2)    (3)
where γ = cp/cv is the specific heat ratio (for an ideal gas γ = const), and μ is the molecular weight of the working substance;
the radiation thermometer, based on measuring the total energy of heat radiation E(T) emitted by a blackbody at temperature T
E(T) = σT^4    (4)
where σ is the Stefan-Boltzmann constant;
the thermal noise thermometer, based on measuring the mean-square voltage noise generated across a resistance R (ohm) at a temperature T (Nyquist's equation)
<V^2> = 4kTRΔf    (5)
where Δf is the bandwidth (Hz).
Presently the most exact values of thermodynamic temperature over a wide range can be obtained using the gas thermometer; however, near 4 K and above 200 K the noise and radiation thermometers approach the gas thermometer in accuracy. In 1848, W. Thomson (Lord Kelvin) proved that the temperatures determined from Carnot's cycle and by the gas thermometer are identical and represent the thermodynamic or absolute temperature. With the dimension of the temperature unit assumed by Celsius, Thomson determined the value of the temperature of ice melting on the new scale to be 273.15 K.
Since 1960, the unit of thermodynamic temperature (K) has been defined as 1/273.16 of the temperature of the triple point of water, 273.16 K. When using a gas thermometer at constant volume (Vm = const), relation (2) for determining an unknown temperature Tx takes the form
Tx = T0·(px/p0)
where T0 = 273.16 K, p0 is the pressure of the working substance at T0, and px is its pressure at the temperature Tx. In the ITS-90 the basic unit of temperature T90 is the Kelvin (K). Temperatures in °C (t90) are defined as
t90/°C = T90/K − 273.15
Hydrogen Ions and Acidity
Terms in this set (18)
self-ionization
a term describing the reaction in which two water molecules react to produce ions
neutral solution
an aqueous solution in which the concentrations of hydrogen and hydroxide ions are equal; it has a pH of 7.0
What indicates that the self-ionization equation is an equilibrium reaction?
The double arrow
equilibrium constant expression
a ratio of the concentrations of products to the concentration of reactants of a reaction
What is the product of K_eq and the concentration of water?
The ion product constant for water, K_w
How is the unknown concentration solved?
By substituting the given concentration into the equation and solving.
10^(-x)/10^(-y) = 10^[(-x)-(-y)] = 10^(-x+y)
What is the product of the hydrogen-ion concentration and the hydroxide-ion concentration?
1.0 * 10^(-14)
ion-product constant for water (K_w)
the product of the concentrations of hydrogen ions and hydroxide ions in water; 1 * 10^(-14) at 25C
acidic solution
any solution in which the hydrogen-ion concentration is greater than the hydroxide-ion concentration; the [H+] of an acidic solution is greater than 1 × 10^(-7) M
basic solution
any solution in which the hydroxide-ion concentration is greater than the hydrogen-ion concentration; the [H+] of a basic solution is less than 1 × 10^(-7) M
pH
a number used to denote the hydrogen-ion concentration, or acidity, of a solution; it is the negative logarithm of the hydrogen-ion concentration of the solution
What is the pH equation?
pH = -log[H+]
When is a solution acidic? Basic?
When the [H+] is greater than 1 × 10^(-7) M and the pH is less than 7, the solution is acidic. When the [H+] is less than 1 × 10^(-7) M and the pH is greater than 7, the solution is basic.
How do you calculate the pOH of a solution?
pOH = -log[OH]
When is a solution basic and acidic in pOH?
Less than 7, basic. Greater than 7, acidic.
How do you find the pOH when the pH is known?
pH + pOH = 14
How do you find the precise [H+] of a solution?
the 10^x button.
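A quick worked example (not part of the original card set): a solution with pH 4.60 has [H+] = 10^(-4.60) ≈ 2.5 × 10^(-5) M, which is exactly the calculation the 10^x button performs.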
What is an indicator?
An indicator is a valuable tool for measuring pH because its acid from and base form have different colors in solution
Graphic Communication in all its forms is vital to society. It is a means of getting information across visually using graphics. Graphic communication comes in many forms and appears in various aspects of life, including education, industry and commerce.
This course is designed to increase your awareness of how graphics are used, and to teach you about the technology used to create them. You will create 2D, 3D and pictorial graphics that have visual impact or that transmit information, digitally and on paper.
The skills you learn in this course are useful in many career areas including Architecture, Surveying, Engineering or Design and Marketing.
To see what career areas this subject could lead to and the routes to get there, download and view these career pathways:
Entry is at the discretion of the school or college, but you would normally be expected to have achieved:
The course consists of two areas of study.
2D graphic communication
3D and pictorial graphic communication
The course assessment has two components totalling 140 marks:
For the assignment component, you will be asked to produce a piece of graphical work in response to a brief.
Both the question paper and the assignment are set and externally marked by the Scottish Qualifications Authority (SQA).
The grade awarded is based on the total marks achieved across course assessment.
The course assessment is graded A-D.
If you complete the course successfully, it may lead to:
Further study, training or employment in:
Your school will give your parents an Options or Choices information booklet, which has detailed information on the curriculum and the individual subjects or courses you can study. It will also invite them along to a Parents’ Information Evening.
They can also read the information leaflet(s): |
Mosquito season is in full swing this time of year. Where on people’s properties are mosquitos likely to breed? What can people do to reduce the amount of mosquitos?
Along with a number of interrelated issues, effective pest management programs must begin with a site inspection. The inspection is focused on two things: (1) accurate identification of the pest(s), and (2) identifying pest resources, that is, how the environment is contributing to the pests' survival.
With accurate identification of the pest, its behavior and biology become apparent. For instance: what is the pest's preferred food source? Where does the pest typically deposit eggs? What is the life cycle of the pest? Armed with this knowledge, the pest management professional begins to strategize a management protocol.
In terms of mosquitoes, the female typically needs a blood meal to produce viable eggs. The female mosquito deposits eggs in, on or near water. Part of the mosquitoes’ early life stages requires an aquatic component.
There are more than 2,500 different species of mosquitoes in the world, 150 of which occur in the U.S. and only small fraction of which actually transmit disease. Mosquitoes go through four stages in their life cycle – egg, larva, pupa, and adult. Eggs can be laid either one at a time or in rafts and float on the surface of the water.
Culex and Culiseta species stick their eggs together in rafts of 200 or more, which looks like a speck of soot floating on the water, about 1/4-inch long and 1/8-inch wide. Anopheles and Aedes species do not make rafts, but lay their eggs separately. Aedes lay their eggs on damp soil that will be flooded by water. Most eggs hatch into larvae within 48 hours.
Larvae live in the water and come to the surface to breathe. They feed on microorganisms and organic matter in the water. They molt four times, growing larger after each molting, and changing into pupae after the fourth molt when they are about 1/2-inch long. The pupal stage is a resting, non-feeding stage. This is when the mosquito turns into an adult. It takes about two days for the adult to fully develop, split the pupal skin and emerge. Adults rest on the surface of the water to allow their body parts to harden and wings to dry.
The complete life cycle can take as little as four days or as long as one month, depending on the temperature. Only adult female mosquitoes bite animals and require blood meals; males feed on the nectar of flowers.
So what then do mosquitoes need? Why are they finding your backyard so darn attractive? They need suitable aquatic breeding habitats in order to complete their life cycle i.e.; they need water.
Your first step in managing mosquitoes should be to remove any and all potential breeding areas – anyplace that water collects – from your yard. This will provide long-term control over mosquito populations and also controls populations before they mature and have a chance to reproduce, transfer disease, and annoy.
Mechanical (non-chemical) actions you can take to reduce mosquito populations in your yard and home.
- Maintain window screens and doors, closing all opened doors.
- Remove or regularly drain all water-retaining objects, such as tin cans, pet dishes, and buckets, holes in trees, clogged gutters and down spouts, old tires, birdbaths, trash can lids, and shallow fishless ponds.
- Stock permanent water pools, such as ornamental ponds, with fish that eat mosquito larvae.
- Check for standing water in plastic or canvas tarps used to cover pool and boats.
- Arrange tarps to drain water and turn canoes and small boats upside down for storage.
- Fix dripping outside water faucets.
- Enhance the drainage of flood canals, irrigation ditches and fields; keep street gutters and catch basins free of debris and flowing properly; and enhance drainage or create permanent deep pools in marshes.
- Remove or treat sewage leaks and lagoons, which provide excellent breeding conditions for certain species.
Pesticides, when applied by properly trained, certified pest management professionals, can further mitigate mosquito populations.
Conventional pesticides and organic / green pesticides are effective to varying degrees. For individuals who prefer conventional materials, it is recommended that a professional applicator be retained for the service. The same can be said for using green materials. A professional applicator has the knowledge and experience needed to properly provide the treatment(s).
Clearly, whether you use a conventional pesticide or an organic / green material, you MUST also use the mechanical protocols. The combination of non-chemical and chemical management techniques, typically result in a better outcome overall.
Michael Deutsch MS, BCE
Arrow Exterminating Company, Inc. |
The deepest cave in the world known as of 1992 is the Reseau Jean Bernard in France.
Its total depth, not just the length of its shaft, is 5,256 feet, almost a mile straight down.
The longest system of underground passageways in the world is the Mammoth Cave system in Kentucky.
The numbers change with new exploration, but as of June 1991, the sum of all known passages was 340 miles.
The deepest cave known in the United States is a new discovery, Lechuguilla Cave, in Carlsbad Caverns National Park, New Mexico.
Its depth is 1,565 feet.
A large part of the United States is underlain by cavernous limestone aquifers.
The Mammoth Cave National Park system was basically formed by the solution of limestone. As water falls on the surface of the earth, it mixes with organic matter and forms a weak solution of carbonic acid, like weak Coca-Cola.
Given sufficient time, perhaps millions of years, the solution feeds into the cavities of the limestone and gradually creates a cave, a little like tooth decay.
Amateur exploration of caves is an important source of knowledge about them and can be safe if undertaken by properly trained and supervised people.
For example, from the standpoint of structural stability, caves are relatively much safer than mines. |
A herbarium is a collection of preserved plants stored, catalogued and arranged systematically for study by professional taxonomists (scientists who name plants), botanists and amateurs alike.
The practice of pressing plants between sheets of paper and drying them has been used for around 500 years. Thanks to this simple technique, most of the characteristics of living plants are visible on the dried plant. The few that are not (e.g. flower colour, scent, height of a tree, vegetation type, etc.) are put on the label by the collector.
A working reference collection
The specimens that are stored in the herbarium are a working reference collection used in the identification of plants, the writing of Floras (a description of all the plants in a country or region), monographs (a description of plants within a plant group, such as a family) and the study of plant evolutionary relationships. The most important specimens are called 'types'. This is a specimen chosen by the author of a new species as a reference point for a particular species.
A herbarium is like a library or vast catalogue and each plant specimen has its own unique information - where it was found, when it flowers and what it looks like. They can also be used to provide samples of DNA for research, as DNA remains intact for many years. It is usually used for evolutionary studies and is routinely extracted from herbarium specimens.
Preserved and mounted
Individual plants or parts of plants are preserved in various ways. The most common method for preserving plants in a herbarium is to press and dry them, and then mount them onto stiff card using glue, tape and stitching.
For some groups, such as lichens, fungi, mosses and some algae, the specimens are placed loose in paper envelopes (capsules) which are then attached to the stiff sheets of card.
More delicate or fleshy material is often placed in jars of alcohol, which maintains its structure.
Large specimens of fruit, seed and wood are held separately, often placed in protective boxes to prevent damage. An additional collection of microscope slides contains large numbers of diatom collections.
More recently, a collection of silica-dried material has been incorporated into the herbarium. These specimens are used primarily for DNA extraction, enabling researchers to study the molecular properties of species.
Specimens are continually being exchanged and sent out on loan to other herbaria. |
Interrupts seem to have some confusion and ignorance associated with them. I hope to remove some of this and replace it with understanding so one can use interrupts with confidence.
What is an interrupt?
Interrupts were devised as a means of signalling the processor that a hardware device wanted attention. Without interrupts, the software would have to regularly ask the device(s) if they were ready and then service them if they were. Between this 'polling', you could do some real work. The problem was that some devices need fast attention, so if you didn't poll the device fast enough, data would be lost. Enter the 'interrupt'. An interrupt is effectively a subroutine call activated by a hardware signal. The processor at its simplest fetches an instruction then executes it. Repeat ad infinitum. With interrupts, the processor checks to see if the interrupt signal is active. If it is active, it will execute the steps needed to call the interrupt service routine (ISR). Otherwise, the processor will test for an interrupt, fetch the next instruction, execute it, and repeat.
With the AVR, we have many sources of interrupts - timer, external, uart etc. Each source has a 'vector' associated with it. This is simply the address of the ISR to be called when the requisite interrupt is activated. We also have enable flags and interrupt active flags for each of the interrupt sources. This means we can enable/disable each interrupt source as we see fit. The AVR also has a global interrupt enable flag that is set/reset with the SEI (enable interrupts) and CLI (disable interrupts) instructions.
Multiple Interrupt at one time
Since we have multiple interrupt sources that we can enable, what happens if a number of them are active at one time? Priority. The AVR has a fixed priority scheme to sort out which source it will service. Lower priority sources will have to wait their turn. Many of the devices have an interrupt active flag associated with them. In the case of the timers, each source of interrupt has a flag that gets set when a timer event occurs. This flag stays set until the AVR actually calls the ISR for that interrupt source. So with this, an event won't get missed, but the response may be delayed. This is a good reason to make all your ISRs lean and mean. Get the job done and get out - no sitting in loops or wasting time.
In a nutshell, the AVR will scan for all active interrupt sources and the highest priority wins; it disables further interrupts by clearing the I bit in the status register, fetches the vector (address of the ISR) for that source, and then calls that ISR. Simple. When you do a RETI instruction (in 'C' the compiler does this for you at the end of an ISR), the I bit is set so other interrupts can be serviced.
The down side of interrupts
So we've had an interrupt and the AVR has called our ISR, what next. Because we've interrupted other code and the ISR may change registers etc, we need to save the SREG (status register) and any other registers we modify. This is called 'preserving the processor state'. Once we've done this, we can alter the registers as we please. Once our ISR has completed, we need to restore the saved registers and do a RETI (return from interrupt instruction). In 'C' the compiler does this for us.
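As a minimal sketch (assuming the avr-gcc / avr-libc toolchain, where the ISR() macro and vector names such as TIMER0_OVF_vect come from <avr/interrupt.h>), an ISR written in C looks like this; the compiler generates the SREG/register save and restore and the closing RETI:

#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t tick;          /* shared with main code, so declared volatile */

ISR(TIMER0_OVF_vect)            /* runs on each Timer0 overflow interrupt */
{
    tick++;                     /* keep it lean: do the minimum and get out */
}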
So far so good.
For most programs, we need to share variables between the ISR and the rest of our code. This is where problems can arise. We need to share our variables carefully, otherwise an ISR might occur at any time and change something when we're not looking.
Certain operations need to be done without interruption - 'atomically' which means 'as one' or 'indivisable'. This needs to happen when:
1. Reading/writing variables that are larger than one byte. The AVR, being an 8-bit CPU, reads and writes in 8-bit chunks, so to read or write an int variable (in C) we need two accesses: one for the low byte and one for the high byte. This takes a few processor instructions. What happens if an interrupt occurs between these two operations and the ISR modifies the variable we're accessing? We get half the unmodified variable and the other half modified. Of course this occurs randomly, and most likely very infrequently, as both the interrupt and the variable access have to coincide. This causes the worst kinds of bugs - ones that are very hard to track down. So how do we get around this problem? Disable interrupts; if reading a shared variable, copy it into another variable; if writing, perform the write; then re-enable interrupts. Any interrupts that occur whilst we have them disabled will be processed after we re-enable them. In 'C' the code might look like this:
volatile unsigned int shared_var;

// in main code (cli() and sei() come from <avr/interrupt.h>):
cli();                               // disable interrupts
copy_of_shared_var = shared_var;     // copy the multi-byte variable atomically
sei();                               // re-enable interrupts
// use copy_of_shared_var
Note that we have declared the variable 'volatile'. Optimising compilers try to keep frequently used variables inside the AVR registers for performance. The problem is that if an interrupt comes along and modifies the variable that is stored in RAM, the next time the main code reads the variable (the copy the compiler has stashed in a register), we're not reading the current value of that variable. Declaring a variable 'volatile' tells the compiler not to stash a copy away in the registers but to read/write the variable to/from memory each time it is accessed.
Note that the above applies also to arrays and collections of data that you expect to be cohesive.
2. When testing and modifying a variable. A common operation that needs to be protected is a 'test and set'. If the value is 0, we set the variable to a non-zero value. Here we have two operations that might get interrupted and cause an unwanted condition. Again, we must temporarily disable interrupts, do the test and set, and then re-enable interrupts.
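A rough sketch of a protected test-and-set might look like this (flag_taken and try_take_flag are hypothetical names used for illustration, not from the original article; cli() and sei() again come from <avr/interrupt.h>):

volatile unsigned char flag_taken;

unsigned char try_take_flag(void)
{
    unsigned char got_it = 0;
    cli();                      /* enter the critical section */
    if (flag_taken == 0) {      /* test ...                   */
        flag_taken = 1;         /* ... and set, atomically    */
        got_it = 1;
    }
    sei();                      /* leave the critical section */
    return got_it;
}

If the routine might be called with interrupts already disabled, saving SREG before cli() and restoring it afterwards would be the safer pattern.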
So, we need to remember that interrupts can happen at any time and ISRs can modify shared variables whilst the main code is performing operations on them. The regions of code that must not be interrupted are called 'critical sections'. At the assembler level, it is obvious to the programmer that there are a number of instructions involved in a particular operation, whereas in 'C' and other higher level languages it is not so obvious that one line of code may generate a number of assembler instructions.
The basic rules are:
1/ Understand the pitfalls of shared variables
2/ minimise the use of shared variables
3/ Implement 'critical sections' where necessary
This article highlights what can go wrong when ignoring the above:
The term RTOS seems to have some magic associated with it. It stands for 'Real Time Operating System'. So what is 'real time'? Historically it means that the operating system will activate a task within a given time. The actual time varies, but is usually taken to be 'as fast as possible' - so a faster CPU will respond faster in most instances. The 'operating system' in many cases (especially with the AVR) is simply a task switcher, not something like Windows(c).
There are two major type of task switching:
Co-operative is the simplest - each of your tasks must perform its duty then return. Then the next task runs etc. As such there is only one 'thread' of execution.
Each task must 'co-operate' otherwise other tasks won't get a chance of running.
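A minimal sketch of this arrangement is shown below (the task names are hypothetical): each task does a small slice of work and returns, so the main loop itself acts as the scheduler and there is only one thread of execution.

static void poll_keys_task(void)      { /* read the inputs, return quickly   */ }
static void update_display_task(void) { /* refresh one line, return          */ }
static void comms_task(void)          { /* handle one byte or packet, return */ }

int main(void)
{
    for (;;) {                /* each task co-operates by never blocking */
        poll_keys_task();
        update_display_task();
        comms_task();
    }
}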
Pre-emptive is a little more complex. Pre-emptive means that a running task may be interrupted and suspended at any point in its execution. This is done with interrupts. The operating system keeps a stack for each of the tasks and, when a task swap is needed, it switches to the required stack and restores the previous processor state. The main downside of this is that, due to the number of stacks, it consumes an amount of memory which may be in short supply in a small system like the AVR.
Usually a timer is set up to give a regular interrupt that calls the operating system scheduler. The scheduler code determines what task should run next. Interrupts from other sources may also call the scheduler. The design of the scheduler determines whether the system is 'real time' or not. Windows and Linux are examples of pre-emptive operating systems, but they are not real-time, as certain tasks may continue to run until they relinquish control back to the scheduler.
There may be some conjecture as to my description of real time as the term has been misused so much that its meaning has been lost in the mire. I base my description on the historical use of the term.
With a pre-emptive operating system, real time or not, we again have the problem of sharing variables. Many operating systems give you system calls to take care of this - flags, queues etc. Use these where possible, as the operating system takes care of any sharing issues (well, at least it should!). Sometimes we still need to share variables between tasks, so the issue with atomicity is the same - use critical sections to ensure your operations are atomic. With a co-operative system, variable sharing isn't a problem (except with ISRs) as each task runs to completion.
There's a question that pops up on the forums a lot about how to create a 'software interrupt'. Some processors have this facility built in but the AVRs don't. There's a variety of ways of making this happen - setting up a port pin for external interrupts and toggling that pin in software is a popular one. What the person really wants, though, is a task switcher. In this instance, the person should evaluate the design of the code to eliminate this requirement or go to a pre-emptive OS like FreeRTOS to provide a more general solution to the problem. Personally, I avoid using a pre-emptive task switcher on AVR class projects as most applications don't need it. Clever use of a timer interrupt and careful design can yield a solution using a co-operative method. Using a co-operative method avoids a lot of potential pitfalls associated with pre-emptive strategies. Call it 'problem avoidance'. By avoiding methods that might introduce tricky problems in debugging, you have a better chance of writing defect-free code - which should be a 'good thing'.
Switches and Interrupts
It seems like a perfect marriage, switches activating a pin change or external interrupt - but there are dangers lurking. These are:
1/ Switch bounce - mechanical switches 'bounce' as in giving multiple on/off actions. This usually happens in the range of 5 - 50mS. So hooked up to an interrupt, each press of the switch can give you a random amount of interrupts for each press.
2/ Picking up EMI. Say we had a switch connected by 10M of wire to our AVR with an interrupt. Without protection the wire may pick up unwanted radiation from a mobile phone or other unit. This would give us a burst of interrupts in fast succession - so fast that our poor AVR might be hard pressed to keep up. The net result is our application on the AVR doesn't work as we would like. Even without a long length of wire, we still have the potential to pick up unwanted interference.
Put simply, we have little control over these interrupts, which may introduce some unreliability into our unit. So how do we avoid this?
Use a timer interrupt to read the switch input(s) and apply a debounce algorithm. Even if the input is not a mechanical switch, it always pays to apply some form of filtering to remove transient spikes etc from affecting the rest of our code.
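A rough sketch of such timer-driven debouncing is shown below (the pin, port and threshold values are illustrative assumptions, not from the original article). Called from a periodic timer ISR every few milliseconds, it only accepts a new switch state after the raw reading has been stable for several consecutive samples:

#define DEBOUNCE_COUNT 5

volatile unsigned char switch_state;        /* debounced state, shared with main code */

void poll_switch(void)                      /* call this from the timer ISR */
{
    static unsigned char counter;
    static unsigned char last_raw;
    unsigned char raw = (PIND & (1 << PD2)) ? 1 : 0;   /* example: switch on PD2 */

    if (raw == last_raw) {
        if (counter < DEBOUNCE_COUNT && ++counter == DEBOUNCE_COUNT)
            switch_state = raw;             /* stable long enough: accept it */
    } else {
        counter = 0;                        /* input changed: restart the count */
        last_raw = raw;
    }
}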
If you want the switch to power up the AVR, use an interrupt here, but disable it once you've powered up then use a timer interrupt to read the switch.
Where possible, minimise or eliminate the use of external interrupts. I'm not saying 'never do it', but if you have to do it, be aware of the potential problems and take steps to minimise them.
If you've got this far, hopefully you've been able to understand my waffle. If something is not clear or you need clarification, drop a message and I'll try to correct or explain it.
Uganda boasts a number of wetlands that have been listed as Wetlands of International Importance under the Ramsar convention. All these sites are recognized by BirdLife International as Important Bird Areas as well as providing a vital habitat for other threatened plants and animals.
Two of these wetlands are found within Uganda’s national parks:
Lake Mburo-Nakivali Wetland System, Lake Mburo National Park
This unique habitat lies at the convergence of two biological zones, giving it very high biodiversity. It supports globally threatened species of birds such as the Papyrus Yellow Warbler and Shoebill, and two of the endangered cichlid fish species which have become extinct in the main lakes. It is the only area in Uganda in which the impala is found.
Murchison Falls-Albert Delta Wetland System, Murchison Falls National Park
The site stretches from the top of Murchison Falls to the delta at its confluence with Lake Albert. The delta forms a shallow area that is important for water birds, especially the shoebill, pelicans, darters and various heron species. It is also an important spawning and breeding ground for Lake Albert fisheries, containing indigenous fish species, and it forms a feeding and watering refuge for wildlife during dry seasons.
Other Ramsar sites:
- Lake Bisina Wetland System
- Lake Nakuwa Wetland System
- Lake Opeta Wetland System
Lake Victoria Region:
- Sango Bay-Musambwa Island-Kagera Wetland System (SAMUKA)
- Nabajjuzi Wetland System
- Lutembe Bay Wetland System
- Mabamba Bay Wetland System
Baby puffins starving off the coast of Maine because herring have moved to colder waters, sea turtles in Brazil hatching mostly females because of warmer sands — there’s no question that climate change threatens the continued existence of thousands of species, including that forlorn-looking polar bear on his shrinking iceberg.
But there’s now also mounting evidence that certain animal species actually exert an enormous amount of control over the seemingly cruel and temperamental climate that determines their fate.
Off the coast of the Aleutian Islands, for example, researchers at the University of California, Santa Cruz have found that sea otters essentially control one of the ocean's largest carbon sinks — kelp forests. That's because sea otters are the main predators of sea urchins, which, if released from the pressure of predation, quickly multiply and decimate kelp forests down to bare ocean floor. While a large sea otter population is best for keeping urchins at bay, even a few sea otters can help limit the damage to the kelp by forcing urchins to spend most of their time hiding among the rocks instead of brazenly marching across the kelp beds eating everything in sight.
Kelp beds protected by otters absorb 12 times more carbon dioxide than those thinned out by urchins, according to the research.
Around the world, ecosystems which keep carbon dioxide locked up and out of the atmosphere are getting more and more attention. Until recently, Ecuador was successfully raising funds from other nations to keep Ecuadorian forests intact, and the massive amounts of carbon they hold, in the ground. Likewise, the United Nation’s REDD program — Reducing Emissions from Deforestation and Forest Degradation — is devoted to finding economic mechanisms to help incentivize developing countries to protect their carbon rich forests. Despite all this, however, very little attention is given to the role of animals in the carbon cycle.
The researchers at UCSC used the current price of carbon on the European carbon market to estimate the value of sea otters in their study site in terms of the quantity of kelp that the otters protect. They found the carbon sequestered by the living kelp was worth somewhere between $205-$400 million.
While current carbon markets clearly aren’t evolved enough to pay for otter conservation, the hope is that one day there will be structures available to place the appropriate value on sea otters, which, at present, not only give us their cuteness for free, but also their services as guardians of the rain forests of the deep. |
Boreal peat sequesters a lot of carbon, keeping it from contributing to global warming. However, the common belief is that peat will release this huge amount of carbon as current climate warming continues, considerably worsening the situation. But according to a recent press release from the University of South Carolina, boreal peat may be holding onto its carbon after all.
A core sample from Canadian peat representing 7,500 years of climate was used for the analysis. The time period included both the Medieval Climate Anomaly and the Holocene Thermal Maximum, snapshots in time where the global temperature was 2 degrees Celsius greater than normal, similar to emerging climate change conditions today. The sample showed a significant increase in carbon release during dry spells or conditions where the peat had longer exposure to oxygen, but no significant increase in carbon release occurred under higher than normal temperatures.
Researchers felt it was not yet time to declare boreal peat’s carbon store as harmless to the currently warming environment, however, as significantly drier or more oxygenated conditions could emerge that would cause a large carbon release. |
Andrew Jackson (March 15, 1767 – June 8, 1845) was an American soldier and statesman who served as the seventh President of the United States from 1829 to 1837 and was the founder of the Democratic Party. Before being elected to the presidency, Jackson served in Congress and gained fame as a general in the United States Army. As president, Jackson sought to advance the rights of the “common man” against a “corrupt aristocracy” and to preserve the Union.
He became a practicing lawyer in Tennessee and in 1791 he married Rachel Donelson Robards. Jackson served briefly in the U.S. House of Representatives and the U.S. Senate. Upon returning to Tennessee, he was appointed a justice on the Tennessee Supreme Court, serving from 1798 until 1804. In 1801, Jackson was appointed colonel in the Tennessee militia, and was elected its commander the following year. He led Tennessee militia and U.S. Army regulars during the Creek War of 1813–1814, winning a major victory at the Battle of Horseshoe Bend. The subsequent Treaty of Fort Jackson required the Creek surrender of vast lands in present-day Alabama and Georgia. Jackson won a decisive victory in the War of 1812 over the British army at the Battle of New Orleans, making him a national hero. Following the conclusion of the War of 1812, Jackson led U.S. forces in the First Seminole War, which helped produce the Adams–Onís Treaty of 1819 and the transfer of Florida from Spain to the United States. Following the ratification of the treaty, Jackson briefly served as Florida’s first territorial governor before winning election as a U.S. Senator from Tennessee.
Jackson was a candidate for president in 1824 but, lacking a majority of electoral votes, lost the election in the House of Representatives to John Quincy Adams. In reaction to a “corrupt bargain” between opponents Adams and Henry Clay, Jackson’s supporters founded the Democratic Party. He ran again for president in 1828 against Adams and won in a landslide. As president, Jackson faced a threat of secession by South Carolina over the “Tariff of Abominations” enacted under Adams. The Nullification Crisis was defused when the tariff was amended and Jackson threatened the use of military force if South Carolina attempted to secede. Congress, led by Clay, attempted to reauthorize the Second Bank of the United States; Jackson regarded the Bank as a corrupt institution and vetoed the renewal of its charter. After a lengthy struggle, Jackson and the congressional Democrats thoroughly dismantled the Bank. In 1835, Jackson became the only president to completely pay off the national debt, fulfilling a longtime goal.
In foreign affairs, Jackson’s administration concluded a “most favored nation” treaty with Great Britain, settled U.S. claims of damages by France from the Napoleonic Wars, and recognized the Republic of Texas. His presidency marked the beginning of the ascendancy of the “spoils system” in American politics. In 1830, Jackson signed the Indian Removal Act, which relocated most members of the Native American tribes in the South to Indian Territory (now Oklahoma). The relocation process dispossessed the Indians and resulted in widespread death and sickness. In his retirement, Jackson remained active in Democratic Party politics, supporting the presidencies of Martin Van Buren and James K. Polk.
Jackson was widely revered in the United States, but his reputation has declined since the mid-20th century, largely due to his role in Indian removal and support for slavery. Surveys of historians and scholars have ranked Jackson between 6th and 18th most successful among United States presidents.
Song during mid interstitial:
“C-Funk” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License |
Fancy a cup of cosmic tea? This one isn't as calming as the ones on Earth. In a galaxy hosting a structure nicknamed the "Teacup," a galactic storm is raging.
The source of the cosmic squall is a supermassive black hole buried at the center of the galaxy, officially known as SDSS 1430+1339. As matter in the central regions of the galaxy is pulled toward the black hole, it is energized by the strong gravity and magnetic fields near the black hole. The infalling material produces more radiation than all the stars in the host galaxy. This kind of actively growing black hole is known as a quasar.
Located about 1.1 billion light years from Earth, the Teacup's host galaxy was originally discovered in visible light images by citizen scientists in 2007 as part of the Galaxy Zoo project, using data from the Sloan Digital Sky Survey. Since then, professional astronomers using space-based telescopes have gathered clues about the history of this galaxy with an eye toward forecasting how stormy it will be in the future. This new composite image contains X-ray data from Chandra (blue) along with an optical view from NASA's Hubble Space Telescope (red and green).
The "handle" of the Teacup is a ring of optical and X-ray light surrounding a giant bubble. This handle-shaped feature, which is located about 30,000 light-years from the supermassive black hole, was likely formed by one or more eruptions powered by the black hole. Radio emission — shown in a separate composite image with the optical data — also outlines this bubble, and a bubble about the same size on the other side of the black hole.
Previously, optical telescope observations showed that atoms in the handle of the Teacup were ionized, that is, these particles became charged when some of their electrons were stripped off, presumably by the quasar's strong radiation in the past. The amount of radiation required to ionize the atoms was compared with that inferred from optical observations of the quasar. This comparison suggested that the quasar's radiation production had diminished by a factor of somewhere between 50 and 600 over the last 40,000 to 100,000 years. This inferred sharp decline led researchers to conclude that the quasar in the Teacup was fading or dying.
New data from Chandra and ESA's XMM-Newton mission are giving astronomers an improved understanding of the history of this galactic storm. The X-ray spectra (that is, the amount of X-rays over a range of energies) show that the quasar is heavily obscured by gas. This implies that the quasar is producing much more ionizing radiation than indicated by the estimates based on the optical data alone, and that rumors of the quasar's death may have been exaggerated. Instead the quasar has dimmed by only a factor of 25 or less over the past 100,000 years.
The Chandra data also show evidence for hotter gas within the bubble, which may imply that a wind of material is blowing away from the black hole. Such a wind, which was driven by radiation from the quasar, may have created the bubbles found in the Teacup.
Astronomers have previously observed bubbles of various sizes in elliptical galaxies, galaxy groups, and galaxy clusters that were generated by narrow jets of particles traveling near the speed of light that shoot away from the supermassive black holes. In these systems, the energy of the jets, rather than radiation, dominates the power output of the black holes.
In these jet-driven systems, astronomers have found that the power required to generate the bubbles is proportional to their X-ray brightness. Surprisingly, the radiation-driven Teacup quasar follows this pattern. This suggests radiation-dominated quasar systems and their jet-dominated cousins can have similar effects on their galactic surroundings.
A study describing these results was published in the March 20, 2018 issue of The Astrophysical Journal Letters and is available online. The authors are George Lansbury from the University of Cambridge in Cambridge, UK; Miranda E. Jarvis from the Max-Planck Institut für Astrophysik in Garching, Germany; Chris M. Harrison from the European Southern Observatory in Garching, Germany; David M. Alexander from Durham University in Durham, UK; Agnese Del Moro from the Max-Planck-Institut für Extraterrestrische Physik in Garching, Germany; Alastair Edge from Durham University in Durham, UK; James R. Mullaney from The University of Sheffield in Sheffield, UK and Alasdair Thomson from the University of Manchester, Manchester, UK.
NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations. |
Tank blanketing, also known as tank padding, is the injection of a gas into the vapor space of a liquid storage tank. Its purpose is to maintain a layer of gas above the liquid and prevent atmospheric gases from entering the tank. Air in the tank contains oxygen and moisture, along with whatever else may be in the surrounding air. The type of gas used for blanketing depends on the process, environmental constraints, and the gas available. Smaller tanks may be blanketed with nitrogen; tanks used in the oil industry are blanketed with natural gas when it is available.
Blanketing is used to keep oxygen out of the vapor space, where it would cause corrosion of the tank. Corrosion could lead to contamination of the process or failure of the tank. Additionally, by keeping an inert-gas or natural-gas blanket on the process, conditions will not be right for combustion should there be a spark.
Tank padding works by setting an inlet valve to open when the pressure in the tank's vapor space drops below a preset value. This indicates that the liquid level is dropping and a vacuum is forming. The valve remains open until the set pressure is reached, at which point it closes. To capture gas that is displaced by rising liquid, a vapor recovery unit must be installed.
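As a purely illustrative sketch (real blanketing systems normally use a self-contained mechanical pilot regulator rather than software), the valve behaviour described above amounts to simple hysteresis control. The setpoint names and values below are hypothetical.

```c
/*
 * Illustrative only: hypothetical setpoints for blanketing-valve logic
 * (open on falling vapor-space pressure, close once the blanket pressure
 * is restored). Real installations use a mechanical regulator.
 */
#include <stdbool.h>

#define OPEN_BELOW_KPA  0.25   /* hypothetical low-pressure setpoint (gauge) */
#define CLOSE_AT_KPA    0.50   /* hypothetical normal blanket pressure */

static bool valve_open = false;

/* Call periodically with the measured vapor-space pressure. */
bool blanket_control_step(double vapor_space_kpa)
{
    if (!valve_open && vapor_space_kpa < OPEN_BELOW_KPA)
        valve_open = true;      /* liquid level falling: admit blanketing gas */
    else if (valve_open && vapor_space_kpa >= CLOSE_AT_KPA)
        valve_open = false;     /* set pressure met: stop gas flow */
    return valve_open;
}
```

In a real installation these pressures come from the tank and regulator design rather than from code. |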
The term fast Fourier transform (FFT) refers to an efficient implementation of the discrete Fourier transform (DFT) for highly composite transform lengths N. When computing the DFT as a set of N inner products of length N each, the computational complexity is O(N^2). When N is an integer power of 2, a Cooley-Tukey FFT algorithm delivers complexity O(N lg N), where lg N denotes the log-base-2 of N, and O(x) means ``on the order of x''. Such FFT algorithms were evidently first used by Gauss in 1805 and rediscovered in the 1960s by Cooley and Tukey.
In this appendix, a brief introduction is given for various FFT algorithms. A tutorial review (1990) is given in the references, and there are some excellent FFT ``home pages'' on the web.
Mixed-Radix Cooley-Tukey FFT
Decimation in Time
The DFT is defined by

X(k) = sum_{n=0}^{N-1} x(n) e^(-j 2 pi n k / N),   k = 0, 1, ..., N-1.

When N is even, the DFT summation can be split into sums over the odd and even indexes of the input signal:

X(k) = sum_{m=0}^{N/2-1} x(2m) e^(-j 2 pi (2m) k / N) + sum_{m=0}^{N/2-1} x(2m+1) e^(-j 2 pi (2m+1) k / N)
     = X_e(k) + e^(-j 2 pi k / N) X_o(k),      (A.1)

where X_e(k) and X_o(k) denote the length-N/2 DFTs of the even- and odd-indexed samples from x. Thus, the length-N DFT is computable using two length-N/2 DFTs. The complex factors e^(-j 2 pi k / N) are called twiddle factors. The splitting into sums over even and odd time indexes is called decimation in time. (For decimation in frequency, the inverse DFT of the spectrum is split into sums over even and odd bin numbers k.)
Radix 2 FFT
When N is a power of 2, say N = 2^K where K is an integer, then the above DIT decomposition can be performed recursively until each DFT is of length 2. A length-2 DFT requires no multiplies. The overall result is called a radix-2 FFT. A different radix-2 FFT is derived by performing decimation in frequency instead.
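The decomposition maps almost directly onto code. The following is a minimal recursive radix-2 decimation-in-time FFT sketch in C, intended only as an illustration of the structure (two half-length DFTs combined with twiddle factors); it assumes the length is a power of 2 and makes no attempt at the optimizations used by production libraries such as FFTW.

```c
/* Minimal recursive radix-2 decimation-in-time FFT sketch (illustrative). */
#include <complex.h>
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static void fft(const double complex *x, double complex *X, int n, int stride)
{
    if (n == 1) {                 /* length-1 DFT: X(0) = x(0), no multiplies */
        X[0] = x[0];
        return;
    }
    /* Two half-length DFTs over the even- and odd-indexed samples. */
    fft(x,          X,         n / 2, 2 * stride);
    fft(x + stride, X + n / 2, n / 2, 2 * stride);

    /* Combine with the twiddle factors W_n^k = exp(-j*2*pi*k/n). */
    for (int k = 0; k < n / 2; k++) {
        double complex w = cexp(-I * 2.0 * M_PI * k / n);
        double complex e = X[k];               /* even-indexed half        */
        double complex o = w * X[k + n / 2];   /* odd-indexed half, twiddled */
        X[k]         = e + o;
        X[k + n / 2] = e - o;
    }
}

int main(void)
{
    double complex x[8] = {1, 1, 1, 1, 0, 0, 0, 0};   /* simple test signal */
    double complex X[8];
    fft(x, X, 8, 1);
    for (int k = 0; k < 8; k++)
        printf("X(%d) = %6.3f %+6.3fj\n", k, creal(X[k]), cimag(X[k]));
    return 0;
}
```

The recursion bottoms out at length-1 transforms; optimized libraries instead use an in-place, iterative formulation with bit-reversed ordering, but the arithmetic is the same.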
A split-radix FFT is theoretically more efficient than a pure radix-2 algorithm [73,31] because it minimizes real arithmetic operations. The term ``split radix'' refers to a DIT decomposition that combines portions of one radix-2 and two radix-4 FFTs. On modern general-purpose processors, however, computation time is often not minimized by minimizing the arithmetic operation count (see §A.7 below).
Radix 2 FFT Complexity is N Log N
Putting together the length-N DFT from the length-2 DFTs in a radix-2 FFT, the only multiplies needed are those used to combine two small DFTs to make a DFT twice as long, as in Eq.(A.1). Since approximately N (complex) multiplies are needed for each stage of the DIT decomposition, and there are only lg N stages of DIT (where lg N denotes the log-base-2 of N), we see that the total number of multiplies for a length-N DFT is reduced from O(N^2) to O(N lg N), where O(x) means ``on the order of x''. More precisely, a complexity of O(N lg N) means that given any implementation of a length-N radix-2 FFT, there exist a constant K and an integer M such that the computational complexity C(N) satisfies C(N) <= K * N * lg N for all N > M.
Fixed-Point FFTs and NFFTs
Recall (e.g., from Eq.(6.1)) that the inverse DFT requires a division by N that the forward DFT does not. In fixed-point arithmetic (Appendix G), and when N is a power of 2, dividing by N may be carried out by right-shifting lg N places in the binary word. Fixed-point implementations of the inverse Fast Fourier Transform (FFT) (Appendix A) typically right-shift one place after each butterfly stage. However, superior overall numerical performance may be obtained by right-shifting after every other butterfly stage, which corresponds to dividing both the forward and inverse FFT by the square root of N (i.e., dividing by sqrt(N) is implemented by half as many right shifts as dividing by N). Thus, in fixed-point, numerical performance can be improved, no extra work is required, and the normalization work (right-shifting) is spread equally between the forward and reverse transform, instead of concentrating all right-shifts in the inverse transform. The NDFT is therefore quite attractive for fixed-point implementations.
Because signal amplitude can grow by a factor of 2 from one butterfly stage to the next, an extra guard bit is needed for each pair of subsequent NDFT butterfly stages. Also note that if the DFT length N is not a power of 4, the number of right-shifts in the forward and reverse transform must differ by one (because lg N is odd instead of even).
Prime Factor Algorithm (PFA)
By the prime factorization theorem, every positive integer N can be uniquely factored into a product of prime numbers p_i raised to integer powers m_i:

N = p_1^m_1 * p_2^m_2 * ... * p_K^m_K.
Rader's FFT Algorithm for Prime Lengths
Like Rader's FFT, Bluestein's FFT algorithm (also known as the chirp z-transform algorithm) can be used to compute prime-length DFTs in O(N lg N) operations [24, pp. 213-215]. However, unlike Rader's FFT, Bluestein's algorithm is not restricted to prime lengths, and it can compute other kinds of transforms, as discussed further below.
Beginning with the DFT
where the ranges of the indexes given are those actually required by the convolution sum above. Beyond these required minimum ranges, the sequences may be extended by zeros. As a result, we may implement this convolution (which is cyclic for N even and ``negacyclic'' for N odd) using zero-padding and a larger cyclic convolution, as mentioned in §7.2.4. In particular, the larger cyclic-convolution size may be chosen to be a power of 2, and need only be large enough to hold the required convolution. Within this larger cyclic convolution, the negative indexes wrap around to the end of the buffer in the usual way.
Note that the sequence above consists of the original data sequence multiplied by a signal that can be interpreted as a sampled complex sinusoid whose instantaneous normalized radian frequency increases linearly with time. Such signals are called chirp signals. For this reason, Bluestein's algorithm is also called the chirp z-transform algorithm.
In summary, Bluestein's FFT algorithm provides O(N lg N) complexity for any positive integer DFT length N whatsoever, even when N is prime.
Other adaptations of the Bluestein FFT algorithm can be used to compute a contiguous subset of DFT frequency samples (any uniformly spaced set of samples along the unit circle) with O(N lg N) complexity. It can similarly compute samples of the z transform along a sampled spiral of the form z_k = A W^k, where A and W are arbitrary complex numbers, again with O(N lg N) complexity.
Fast Transforms in Audio DSP
Since most audio signal processing applications benefit from zero padding (see §8.1), in which case we can always choose the FFT length to be a power of 2, there is almost never a need in practice for more ``exotic'' FFT algorithms than the basic ``pruned'' power-of-2 algorithms. (Here ``pruned'' means elimination of all unnecessary operations, such as when the input signal is real [74,21].)
An exception is when processing exactly periodic signals where the period is known to be an exact integer number of samples in length. In such a case, the DFT of one period of the waveform can be interpreted as a Fourier series (§B.3) of the periodic waveform, and unlike virtually all other practical spectrum analysis scenarios, spectral interpolation is not needed (or wanted). In the exactly periodic case, the spectrum is truly zero between adjacent harmonic frequencies, and the DFT of one period provides spectral samples only at the harmonic frequencies.
Adaptive FFT software (see §A.7 below) will choose the fastest algorithm available for any desired DFT length. Due to modern processor architectures, execution time is not normally minimized by minimizing arithmetic complexity.
There are alternatives to the Cooley-Tukey FFT which can serve the same or related purposes and which can have advantages in certain situations. Examples include the fast discrete cosine transform (DCT), the discrete Hartley transform, and the number theoretic transform.
The DCT, used extensively in image coding, is described in §A.6.1 below. The Hartley transform, optimized for processing real signals, does not appear to have any advantages over a ``pruned real-only FFT''. The number theoretic transform has special applicability for large-scale, high-precision calculations (§A.6.2 below).
The Discrete Cosine Transform (DCT)
In image coding (such as MPEG and JPEG) and many audio coding algorithms (MPEG), the discrete cosine transform (DCT) is used because of its nearly optimal asymptotic theoretical coding gain. For 1D signals, one of several DCT definitions (the one called DCT-II) is given by Eq.(A.2), whose kernel is cos[pi (n + 1/2) k / N].
For real signals, the real part of the DFT is a kind of DCT:

re{X(k)} = sum_{n=0}^{N-1} x(n) cos(2 pi n k / N).
In practice, the DCT is normally implemented using the same basic efficiency techniques as in FFT algorithms. In Matlab and Octave (Octave-Forge), the functions dct and dct2 are available for the 1D and 2D cases, respectively.
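As an illustration of the definition only (not of the FFT-based approach used in practice), here is a naive direct-form DCT-II sketch in C. It uses the common unnormalized convention with the kernel cos(pi (n + 1/2) k / N); the overall scale factor of Eq.(A.2) is not reproduced here, so treat the output scaling as illustrative.

```c
/* Naive O(N^2) DCT-II sketch, illustrative only (unnormalized convention). */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8

int main(void)
{
    double x[N] = {1, 2, 3, 4, 4, 3, 2, 1};   /* arbitrary test signal */
    double X[N];

    for (int k = 0; k < N; k++) {
        double acc = 0.0;
        for (int n = 0; n < N; n++)
            acc += x[n] * cos(M_PI * (n + 0.5) * k / N);   /* DCT-II kernel */
        X[k] = acc;
    }

    for (int k = 0; k < N; k++)
        printf("X(%d) = %9.4f\n", k, X[k]);
    return 0;
}
```

A library routine such as Octave's dct computes the same transform (up to its own normalization) in O(N lg N) time by reusing FFT machinery.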
Exercise: Using Euler's identity, expand the cosine in the DCT defined by Eq.(A.2) above into a sum of complex sinusoids, and show that the DCT can be rewritten as the sum of two phase-modulated DFTs.
The number theoretic transform is based on generalizing the Nth primitive root of unity (see §3.12) to a ``quotient ring'' instead of the usual field of complex numbers. Let W_N denote a primitive Nth root of unity. We have been using W_N = exp(-j 2 pi / N) in the field of complex numbers, and it of course satisfies W_N^N = 1, making it a root of unity; it also has the property that W_N^k visits all of the ``DFT frequency points'' on the unit circle in the complex plane as k goes from 0 to N-1.
In a number theoretic transform, W_N is an integer that satisfies W_N^N = 1 modulo some prime integer p; that is, it is a primitive Nth root of unity in the quotient ring of integers modulo p.
When the number of elements N in the transform is composite, a ``fast number theoretic transform'' may be constructed in the same manner as a fast Fourier transform is constructed from the DFT, or as the prime factor algorithm (or Winograd transform) is constructed for products of small mutually prime factors.
Unlike the DFT, the number theoretic transform does not transform to a meaningful ``frequency domain''. However, it has analogous theorems, such as the convolution theorem, enabling it to be used for fast convolutions and correlations like the various FFT algorithms.
An interesting feature of the number theory transform is that all computations are exact (integer multiplication and addition modulo a prime integer). There is no round-off error. This feature has been used to do fast convolutions to multiply extremely large numbers, such as are needed when computing to millions of digits of precision.
For those of us in signal processing research, the built-in fft function in Matlab (or Octave) is what we use almost all the time. It is adaptive in that it will choose the best algorithm available for the desired transform size.
For C or C++ applications, there are several highly optimized FFT variants in the FFTW package (``Fastest Fourier Transform in the West'') . FFTW is free for non-commercial or free-software applications under the terms of the GNU General Public License.
For embedded DSP applications (software running on special purpose DSP chips), consult your vendor's software libraries and support website for FFT algorithms written in optimized assembly language for your DSP hardware platform. Nearly all DSP chip vendors supply free FFT software (and other signal processing utilities) as a way of promoting their hardware.
|
by Susan Verner
Try These 7 Perfect Activities for Practicing the Present Perfect
What are some of the things your students have already accomplished at this time in their lives? Ask your students to share two or three things they have done that they are most proud of, and have them do it in front of the class. Allow the rest of the class to ask questions of each classmate after the presentation. Encourage your students to use the adverb ‘already’ in their presentations.
Have you ever?
Have each student write five sentences stating something he or she did in the past at a specific time. These sentences should be written in the simple past and include the time of the event. For example, a student might write ‘I walked my dog yesterday’. Then have students exchange papers and rewrite those sentences using the Present Perfect and the adverb ‘before’. They should also omit the time marker in the rewritten sentences. For example, ‘Hyun has walked his dog before’.
How many times since
How often do your students do daily activities like brushing their teeth, changing their clothes and eating a meal? Review with your class how to use the adverb ‘since’ and then ask them how many times they have done daily activities since yesterday, last week, last month and last year.
What do your students want to do that they have not done yet? Review with your class the proper use of the adverb ‘yet’ and then ask them to share with a partner three things they have not done yet that they would like to do.
This game gets your students moving while practicing the negative use of the present perfect. Arrange chairs facing into a circle for all but one of your students. That student stands in the middle and announces something he or she has never done, using the present perfect. Anyone in the circle who has done that activity must get out of his or her seat and race to find a new seat. The person in the middle tries to sit in one of the empty seats as well. The person left standing after everyone else is sitting takes the next turn in the middle of the circle.
As a class, brainstorm every activity you have done or would like to do. You may want to explain the term ‘bucket list’ and encourage your students to think about what they would include on theirs. Then, let your students take turns asking if their classmates have done each of these activities. They should start with the phrase ‘have you ever’ and answer the questions with the present perfect. Encourage your students to share any surprising answers with the class after their discussion time is complete.
Since or For?
Since and for are often used with the present perfect to express a length of time a person has done a particular activity. Use ‘since’ when offering a specific time and ‘for’ for an amount of time. After reviewing this with your students, have groups of three or four practice using ‘since’ and ‘for’ with the present perfect.
|
Switch Energy, American Geosciences Institute
Video length is 2:15 min. Learn more about Teaching Climate Literacy and Energy Awareness »
See how this Video supports the Next Generation Science Standards »
Middle School: 2 Disciplinary Core Ideas
High School: 1 Disciplinary Core Idea
About Teaching Climate Literacy
7.3 Environmental quality.
4.1 Humans transfer and transform energy.
4.7 Different sources of energy have different benefits and drawbacks.
5.4 Economic factors.
5.6 Environmental factors.
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness.
Read what our review team had to say about this resource below, or learn more about how CLEAN reviews teaching materials.
Teaching Tips
- The educator can engage students by assigning student groups to research the pros and cons of natural gas issues (fracking, greenhouse gas content, comparison to coal) and having students share their findings with the class.
About the Science
- The video merely introduces the pros and cons of natural gas and is not intended to present deep scientific concepts. It is supplemented by video interviews with energy experts across industry and government, representing researchers/scientists, business owners/CEOs, and government officials.
- Pros and cons of fracking are introduced in the video.
- Passed initial science review - expert science review pending.
About the Pedagogy
- This resource does not come with a teacher’s guide. It could be used as additional information in an energy unit.
- The group of interviewees lack diversity; all the additional interviews are with white men.
Related URLs: These related sites were noted by our reviewers but have not been reviewed by CLEAN: http://www.switchenergyproject.com
Next Generation Science Standards supported by this Video:
Disciplinary Core Ideas: 2
MS-ESS3.A1:Humans depend on Earth’s land, ocean, atmosphere, and biosphere for many different resources. Minerals, fresh water, and biosphere resources are limited, and many are not renewable or replaceable over human lifetimes. These resources are distributed unevenly around the planet as a result of past geologic processes.
MS-ESS3.D1:Human activities, such as the release of greenhouse gases from burning fossil fuels, are major factors in the current rise in Earth’s mean surface temperature (global warming). Reducing the level of climate change and reducing human vulnerability to whatever climate changes do occur depend on the understanding of climate science, engineering capabilities, and other kinds of knowledge, such as understanding of human behavior and on applying that knowledge wisely in decisions and activities.
Disciplinary Core Ideas: 1
HS-ESS3.A2:All forms of energy production and other resource extraction have associated economic, social, environmental, and geopolitical costs and risks as well as benefits. New technologies and social regulations can change the balance of these factors. |
Plant food, also known as plant fertilizer, gives plants and soils a boost of nutrients to improve the overall health of the plant. Encourage root growth, produce more blooms and fruits, and help plants fight disease with plant food. Our selection of plant food covers many nutrients your garden may need, from potassium to iron, allowing you to select the nutrients your garden needs most.
Plants get the majority of their nutrients from the soil. Sometimes, the soil is deficient in some nutrients, or has an overabundance of another. Each imbalance affects plant growth, yield, aesthetics, and overall health. Plants have primary and secondary nutrient needs, as well as micronutrient needs, and it’s important to know what nutrients your plants need and why, so you are better able to diagnose plant problems and give your plants the necessary nutrients. Here is a list of important plant nutrients and how they affect your plants.
Nitrogen: Nitrogen is a primary plant nutrient. It is a major factor in growth, and the specific nitrogen need varies from plant to plant. Nitrogen is always in flux in the soil, so even a soil test doesn't give a fully accurate picture of your soil's nitrogen content; looking at plant health is a better gauge of nitrogen levels.
Phosphorus: Another primary nutrient, phosphorus (measured in phosphate in plant food products) promotes plant growth and root formation, and is essential for flowering and fruiting. Deficiency symptoms can include sparse leaf growth, darkened leaf color, and dull surfaces.
Potassium: The last of the primary nutrients, it is found as potash in plant products. It helps in photosynthesis, aids in disease resistance, and is important in fruit quality and quantity. Deficiencies can cause yellowing, scorching, and crinkling of leaves, and too much potassium can cause a salt imbalance in the soil.
Secondary Nutrients: Each secondary nutrient aids plants in a specific way, and deficiencies affect plants individually. Calcium aids in plant vigor and proper growth, Magnesium is an essential part of chlorophyll and photosynthesis, and Sulfur aids in protein production and is crucial for crop yield and quality.
Micronutrients: There are several micronutrients: boron, chlorine, copper, iron, manganese, molybdenum, and zinc. They are called "micro" because they are needed in very small amounts. Even though they're needed in much smaller amounts than other plant nutrients, they are just as important to plant health. Micronutrients aid in the formation of cells, enzymes, sugars, and plant structures, and are crucial to many plant processes.
Plant Nutrient Deficiencies
Nutrient deficiencies can manifest themselves in many ways, and many look similar or occur at the same time. One deficiency can cause another, and all plants will adapt to changes in soil nutrients differently. You may be able to tell your plant is struggling in its current soil by the following symptoms:
Purple or red coloring
These symptoms can indicate nutrient deficiency, but you will have to do a bit more work to decode just what may be lacking in your soil. A soil test can give you the big picture of what your soil contains and what it needs.
Choosing Plant Food
There are an endless array of choices when it comes to plant food. While all aim to add nutrients or other benefits to the soil, each has a different goal. Some aid in overall soil health, like Neptune’s Harvest Organic Crab Shell which helps get rid of fungus and nematodes. Other products work for specific plants, like Fertilome Rose Food, and deliver plant-specific nutrients directly to the soil. Specific plant ailments can also be cured with plant food, like with Bonide Rot-Stop Tomato Blossom End Rot food. You can even choose specific plant nutrients to add directly to the soil for a certain effect, like Bonide Triple Super Phosphate for strong roots and hearty plant health. We carry any type of plant food you might need to create a healthy environment for your plants, giving them all the nutrients they need. |
Language Arts Department
Language Arts Literacy is the foundation of all learning. To succeed in learning within any area, students must know and be able to use the basic elements of Language Arts Literacy. These fundamentals will interact with and contribute to the students' abilities to think critically, formulate meaning, and solve problems.
It is the intent of our school's administration and instructional staff to prepare our students to effectively use language as the primary tool of thought and communication through their studies. Additionally, the students will be able to understand literature as the verbal expression of the human imagination and as a transmitter of culture and values. They will engage in fluent, appropriate speech and writing. Learning experiences and the utilization of technology will enable them to analyze, classify, compare, formulate hypotheses, make inferences, draw conclusions, and solve problems both rationally and intuitively.
Through the implementation of this curriculum, which adheres to the New Jersey Learning Standards, students will:
1. Actively read, listen to, view, and interact with materials that are diverse in content and form.
2. Compose texts that are varying in content and form for different audiences and purposes.
3. Speak and actively listen for a variety of situations.
4. View, understand, and use non-textual visual information and representations for critical comparison, analysis, and evaluation.
5. Understand and apply the integration of reading, writing, listening, speaking and viewing to support comprehension, and effective communications.
6. Acquire the habits of inquiry necessary to become thinkers and learners.
Through the pursuit of this vision, students will develop a love for the written word. In preparing students for the future, students will read for learning and pleasure, write for meaning, and communicate for purpose and understanding. |
A list of numbers (like the column of test scores in your spreadsheet) isn’t very useful in itself. Sure, you might be able to pick out some patterns just from scanning the numbers, but that isn’t a very efficient or accurate way to understand what’s going on, and it’s certainly tricky to compare a general pattern to other lists of numbers.
As a teacher, you might want to know how your students (or specific groups of students) scored on a specific homework assignment overall, or how their grades are changing over time. Alternatively, you might want to compare how individual students are performing relative to one another on the whole.
In any of these cases, we’re looking for an “average.” There are three main ways that people find the average of a list of numbers: mean, median, and mode.
- The mean is the most commonly used measure of “average”; in fact, the two terms are often used interchangeably. It is calculated as the sum of all the numbers in the list divided by the length of the list. For example, to find the mean of the list “5, 3, 1, 4”:
- First, sum up all values in the list: 5 + 3 + 1 + 4 = 13.
- Next, count the number of values in the list: 4.
- Finally, divide the result of step 1 (sum) by the result of step 2 (count): 13 / 4 = 3.25.
- The median is simply the middle number in a sorted list. If there are two middle numbers, the median is halfway between those two. For example, to find the median of the list “5, 3, 1, 4” (same as before):
- First, sort the list: 1, 3, 4, 5.
- Next, identify the middle number or numbers. Since the example list contains an even number of values, there are two numbers in the middle: 3 and 4. (If we had only a single middle number, we could just stop here.)
- Since there are two middle values, calculate their mean: (3 + 4) / 2 = 7 / 2 = 3.5.
- The mode is simply the most commonly occurring number in the list. This measurement is far less useful than mean and median, so we won’t spend any more time on it.
If you want a wonderful, more rigorous explanation of these three along with some practice exercises, check out the masterful Sal Khan’s lessons on “Measures of Central Tendency”.
But how do I know which one to use?
I mentioned that the mean is the most commonly used way to calculate averages, so why not always use that? In general, that’s not a bad idea, as the mean does a nice job of taking into account the magnitude (or “largeness”) of values. And the mean gives you a very useful result if your data set looks like a bell curve (which it normally does—no pun intended).
Of course, data is often messy, and so-called “outliers” that sit outside the pack can greatly skew the result of a mean calculation, giving you a less intuitive or useful result. But the median is far less likely to be affected. For example, recall our earlier example of the list “5, 3, 1, 4”. The mean and median we calculated were 3.25 and 3.5, respectively—values that are pretty close to one another.
But consider a list of four numbers where one number is far off from the other three, for example “5, 3, 1, 100”. In this case:
- The mean is sum / count, or (5 + 3 + 1 + 100) / 4 = 27.25.
- The median is the mean of the middle two numbers from the sorted list (3 and 5 in the list “1, 3, 5, 100”), or (3 + 5) / 2 = 4.
Unlike the previous example, there’s a pretty big difference between the mean and median in this case. And you’ll see a similar pattern in general when your data set includes outliers. So as a general rule:
Use median instead of mean when the list of values includes a few numbers that are much larger or much smaller than most of the others.
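If you like to see the arithmetic spelled out in code rather than in a spreadsheet, here is a small sketch in C of the same two calculations, run on the outlier example above. The function names are just for illustration; any language or spreadsheet would do.

```c
/* Mean and median of a list, matching the worked example {5, 3, 1, 100}. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

static double mean(const double *v, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += v[i];
    return sum / (double)n;                 /* sum divided by count */
}

static double median(const double *v, size_t n)
{
    double *tmp = malloc(n * sizeof *tmp);  /* sort a copy, not the original */
    if (tmp == NULL)
        return 0.0;
    memcpy(tmp, v, n * sizeof *tmp);
    qsort(tmp, n, sizeof *tmp, cmp_double);               /* step 1: sort   */
    double m = (n % 2) ? tmp[n / 2]                        /* odd count      */
                       : (tmp[n / 2 - 1] + tmp[n / 2]) / 2.0;  /* even count */
    free(tmp);
    return m;
}

int main(void)
{
    double scores[] = {5, 3, 1, 100};
    size_t n = sizeof scores / sizeof scores[0];
    printf("mean   = %.2f\n", mean(scores, n));    /* 27.25 */
    printf("median = %.2f\n", median(scores, n));  /* 4.00  */
    return 0;
}
```

The outlier (100) pulls the mean far above most of the values while barely moving the median, which is exactly the behavior the rule above is about.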
Exercise: Mean or median?
Click this link to see two groups of student data. Determine whether calculating the mean or median test score would be appropriate for the two data sets. To let you practice without being tempted with the correct response, we’ll start off the next lesson with an answer key.
- Mean and median are the best ways we have to describe the middle of our data.
- Use mean most of the time, but median when your data includes outliers.
- Impress your friends by saying "mean" instead of "average" from now on!
As with any summary statistic, mean, median, and mode don’t tell you everything about a list of numbers. But they do give you a compact way to summarize a list of numbers. We’ll take a look at doing these calculations in the spreadsheet in the next lesson. |
How White Parents Can Use Media to Raise Anti-Racist Kids
Parents of Black and brown kids know that instilling their kids with a sense of racial identity and talking about how racism will inevitably affect their lives -- and possibly even their safety -- are essential life lessons. Parents of White kids, on the other hand, often don't feel the same pressure. But as racist violence continues to erupt, discussing race, racism, and the history of racial oppression in the United States and the world is just as essential for White families. These are not easy conversations to have, but movies, TV, and books -- as well as other media and tech -- can be powerful tools to help you get started. Remember, media makes a big impression on kids. But the messages you send -- from the media you choose, to the conversations you initiate -- are what kids will hold in their hearts and minds long after the final credits.
Here are 10 ideas for how to use media to start and continue conversations about race and racism with your kids. This list is not exhaustive, so if you have other ideas, please add them to the comments.
Diversify your bookshelf
If you grew up reading Anne of Green Gables and Little House on the Prairie, you can still share these classics with your kids. But don't stop there: Look for stories featuring and written by people of color. Here are some places to start:
Point out racism in movies, TV, and games
It can be easy to let stereotypes fly by when watching the minstrel-show crows in Dumbo or exaggerated accents in The Goonies. But by pointing out when something is racist, you're helping your kid develop critical-thinking skills. These skills will allow conversations about race and stereotypes to deepen as kids get older.
Watch hard stuff
As kids get older, expose them to the harsh realities of racism throughout history and through the current day. That doesn't mean nonstop cable news replaying gruesome details of violence but carefully chosen films like The 13th or McFarland, USA. You can also watch footage of protests to kick off conversations about anger, fear, oppression, and power. Be explicit about racism and discrimination being hurtful, damaging, and wrong.
Seek out media created by people of color
As you choose your family movie night pick or browse online for books, specifically look for authors and directors of color. Aim for stories that include people of color in lead roles and as fully developed characters. With older kids, take an audit of how many movies or books you've recently watched or read that were created by people of color. Discuss the reasons for any imbalance and the importance of a variety of perspectives.
Broaden your own perspectives
Follow and read Black and brown voices and media outlets. Use what you learn to inform conversations with your kids. Some places to start (and by no means a complete list):
- Ava DuVernay (Twitter)
- Brittany Packnett Cunningham
- Franchesca Ramsey
- The Grio
- Ibram X. Kendi
- Indian Country Today
- Ricefeed (Instagram)
- The Root
Discuss hate speech and harassment online
Ask kids if they've seen racist language in YouTube videos or comments. For social media-using kids, talk about racist memes. Ask them to show you examples and aim to develop empathy without shaming them. Help them understand how following or sharing racist accounts helps spread hate. Brainstorm ways they can safely and responsibly speak out against racist imagery and messages online. Adapt this lesson on countering hate speech for your conversations.
Understand the online landscape
Read this account of a White mom parenting through her son's exposure to online white supremacy. And read the son's perspective. Learn more about places where White racist groups congregate and how they recruit, and keep discussions open and honest with kids who socialize on sites like Discord and Reddit.
Explore the power of tech tools
Use recent examples of how phones, video recordings, and editing tools affect our understanding of race and racism. Discuss how the release of video evidence can spur action, like in the case of Ahmaud Arbery. Explore together how photos and videos can both reveal truth and hide it -- especially when context is edited out.
Build news literacy
Besides sharing news articles from different perspectives with your kids, use opportunities like protests in Minneapolis to discuss how news is presented. What kinds of stories get the most attention? How are language and images used differently to depict people and incidents depending on the news outlet, the people involved, and the topic? Look at news coverage of incidents where White people commit acts of violence and compare to when people of color do.
Teach your kid to be an ally
Learn about how White people can support people of color by being allies and then integrate these ideas into your conversations and actions with your kids. Talk through scenarios your kid might encounter online and discuss (and model) when it might be best to just listen, to call someone out, to amplify someone's voice, to share resources, etc. Share mistakes you've made around talking about race and racism -- in person or online -- with your kids so they know it's OK to not be perfect or have all the answers.
|
New Math Book for Teachers by Mike Flynn
MLP Director Mike Flynn recently published a math book for teachers titled Beyond Answers: Exploring Mathematical Practices with Young Children. Order your copy today!
“Anyone curious about children’s mathematical creativity and the potential for math exploration in the classroom will enjoy reading this accessible book.” –Deborah Schifter
The Standards for Mathematical Practice are written in clear, concise language. Even so, to interpret them and visualize what they mean for your teaching practice isn’t always easy. In this practical, easy-to-read book, Mike Flynn provides teachers with a clear and deep sense of these standards and shares ideas on how best to implement them in K–2 classrooms.
Each chapter is dedicated to a different practice. Using examples from his own teaching and vignettes from many other K–2 teachers, Mike does the following:
- Invites you to break the cycle of teaching math procedurally
- Demonstrates what it means for children to understand—not just do—math
- Explores what it looks like when young children embrace the important behaviors espoused by the practices
The book’s extensive collection of stories from K–2 classrooms provides readers with glimpses of classroom dialogue, teacher reflections, and examples of student work. Focus questions at the beginning of each vignette help you analyze the examples and encourage further reflection.
Beyond Answers is a wonderful resource that can be used by individual teachers, study groups, professional development staff, and in math methods courses. |
Medical Definition of Deformity, cauliflower-ear
Deformity, cauliflower-ear: Destruction of the underlying cartilage framework of the outer ear (pinna), usually caused by either infection or trauma, resulting in a thickening of the ear. Classically, blood collects (hematoma) between the ear cartilage and the skin. There is a marked thickening of the entire ear, which may be so extensive that the shape of the ear becomes unrecognizable. The ear is said to look like a piece of cauliflower. It is typically seen in wrestlers and boxers who have had repeated trauma to the ear.
When trauma causes a blood clot under the skin of the ear, the clot disrupts the connection of the skin to the ear cartilage. The cartilage has no other blood supply except the overlying skin so, if the skin is separated from the cartilage, the cartilage is deprived of nutrients and dies and the ear cartilage shrivels up to form the classic cauliflower ear.
The treatment of the hematoma (the blood clot) is to drain it through an incision in the ear and apply a compressive dressing to sandwich the two sides of the skin against the cartilage.
When treated promptly and aggressively, the development of cauliflower ear deformity is unlikely. Delay in diagnosis and treatment leads to more difficulty in managing this problem and may leave greater ear deformity.
Source: MedTerms™ Medical Dictionary
Last Editorial Review: 6/9/2016
|
Growing in dense tufts, the desert grass (Stipagrostis plumosa) has many erect culms (the hollow, jointed stems of a grass or sedge) encased in woolly sheaths. The ligules between the leaf blade and the sheath have a fringe of hairs, and the leaf blades are curled, coming to a sharp point at the tip (2)(3). The inflorescence of the desert grass is a specialised, leafless branch system borne along the main stem. The flowers are known as 'spikelets' and are greatly reduced, surrounded by two scale-like bracts (4).
The desert grass generally flowers between February and July (7). The spikelets have both male and female reproductive structures, and the florets (the small, reduced flowers) open for just a few hours when mature to allow wind pollination(4). The desert grass has specialised roots enclosed in a ‘rhizosheath’, where the root hairs and sand grains form a casing around the roots which is held together by a sticky, glue-like mucilage. The rhizosheath structure allows the desert grass to absorb water much more efficiently from the surrounding environment, and also promotes the growth of nitrogen-fixing bacteria, which produce nitrogen compounds that can be used by the plant (9). The desert grass plays an important ecological role in arid environments by stabilising the sandy substrates through the accumulation of drifting sand around the tussocks (6)(9).
There are no known threats to the desert grass; however, throughout the United Arab Emirates large plots of land have been developed to cater for the rapidly expanding human population, with significant negative impacts on much of the native vegetation (9).
Inflorescence: The reproductive shoot of a plant, which bears a group or cluster of flowers.
Ligule: In grasses and sedges, an outgrowth from the inner junction of a grass leaf sheath and blade, often membranous, sometimes a fringe of hairs. In other plants, may refer to any strap-like appendage.
Pollination: The transfer of pollen grains from the stamen (male part of a flower) to the stigma (female part of a flower) of a flowering plant. This usually leads to fertilisation, the development of seeds and, eventually, a new plant.
|
Emily Dickinson - Biography
Emily Elizabeth Dickinson was born 10 December, 1830, in Amherst, Mass., into a severely religious, puritanical family that had lived in New England for eight generations. She was educated at Amherst Academy and at Mount Holyoke Female Seminary, South Hadley, Mass. According to traditional accounts, Dickinson was a high-spirited and active young woman, but after suffering a romantic disappointment she withdrew from society and lived thereafter as a recluse. Virtually her only contact with her friends was through her whimsical and epigrammatic letters.
Throughout the remainder of her life Dickinson wrote poetry of a profoundly original nature. The first contemporary literary figure to become aware of her existence as a poet was clergyman and author Thomas Higginson. Although Higginson recognised her genius and became her lifelong correspondent and literary mentor, he advised her not to publish her work because of its violation of literary convention. Her other literary friend, the novelist Helen Jackson, however, tried unsuccessfully to persuade her to publish a collection of her poetry. After Dickinson's death, nearly 2,000 poems, many only fragments, were found among her papers. From this mass of material Higginson and Mabel Todd edited the first published selection of her works, Poems, which enjoyed great popular success. Todd never spoke to Dickinson, but glimpsed her once through a doorway, flitting by in white, the only colour Dickinson wore in her later years.
Dickinson's poems, compressed into brief stanza forms, are most frequently written in a few different combinations of iambic tetrameter and trimeter lines. She employed simple rhyme schemes and varied the effects of these schemes by partial rhyming, a device common among many 20th-century poets. Her language is simple, but she draws remarkable connotations from many common words, sometimes with almost pedantic exactness. Her imagery and metaphors were drawn both from an acute observation of nature and from an imagination often as playful in thought and witty in expression as that of the English metaphysical poets of the 17th century.
The combination of universal themes, expressed with vivid personal feeling and familiar verse forms gives Dickinson's lyrics a mystical directness comparable to that found in the work of the British poet William Blake. Her published works include Poems: Second Series, Poems: Third Series, The Single Hound, and Letters of Emily Dickinson. Emily Dickinson died 15 May, 1886.
Contributed by Gifford, Katya |
This ancient Egyptian trinket may not look like much, but it hides a very interesting story. Researchers have found that the 5,000-year-old iron bead is actually made from a meteorite.
Archaeologists have found iron objects in ancient Egypt, dating them to 2-3 millennia BC. But the earliest evidence of smelting only appeared much later after that, so how could they obtain these objects?
The result, published on 20 May in Meteoritics & Planetary Science, not only details this spectacular object, but also explains how ancient Egyptians obtained iron millennia before the earliest evidence of iron smelting in the region, solving the long standing archaeological mystery. It could also suggest (though that’s still debatable) that they regarded meteorites highly as they developed their religion.
“The sky was very important to the ancient Egyptians,” says Joyce Tyldesley, an Egyptologist at the University of Manchester, UK, and a co-author of the paper. “Something that falls from the sky is going to be considered as a gift from the gods.”
Using microscopy and computed tomography, Diane Johnson, a meteorite scientist at the Open University in Milton Keynes, UK, and her colleagues analyzed the object. Microscopy alone showed that it has a nickel content of over 30%, which by itself suggests that it came from a meteorite. Backing up this result, the team observed that the metal had a distinctive crystalline structure called a Widmanstätten pattern. Widmanstätten patterns, also called Thomson structures, are unique figures of long nickel-iron crystals found only in meteorites.
But they took things one step further – using computed tomography (CT scan), they found that the object was created by hammering a fragment of iron from the meteorite into a thin plate, then bending it into a tube. They then re-created a 3D model of the object.
So what does this mean for Egyptian culture as a whole? The object is dated to 3,300 BC, while the first signs of smelting appear almost three millennia later, around 600 BC. It is known that back then, iron was associated with royalty and even divinity. So where do meteorites stand? Some archaeologists believe Egyptians thought of them as fragments from the gods, descending from the sky as gifts. But was this technique common, or was it nothing more than an accident?
Johnson says that she would love to check other iron artefacts, but it remains to be seen if museums will actually allow her to do so – hopefully, they will.
Reference: Nature doi:10.1038/nature.2013.13091 |
The American Civil War changed the United States in multiple ways. African Americans were emancipated and became citizens of the United States. Thus began a new phase of the freedom struggle for equal rights and integration into American society. There was also another journey, to rediscover the African roots of formerly enslaved people. African Americans fought in the Civil War to gain their freedom; the national crisis was the best chance to end slavery once and for all. Dr. John Henrik Clarke explains the impact of the Civil War and its aftermath. Immigration and slavery were both expanding in the West. The Mexican War had added new territories, which caused more political factionalism. Bloody Kansas also revealed class issues in the US: some poor whites in the state did not want to see slavery in Kansas, because it would not benefit them economically. The North did not want slave labor competing with immigrant labor, while the planter aristocracy wanted to maintain complete economic domination of the South. It was only a matter of time before the country tore itself apart. Simultaneously, the enslaved were escaping slavery by means of the Underground Railroad. After the war, African Americans faced segregation, terrorism, and disenfranchisement, and combating these new challenges required multiple methods of organization. |
Radial engines are used in aircraft, where a propeller connected to the power-delivering shaft produces thrust. The basic mechanism is as follows:
Steam Engine Principle
The steam engine, once used in locomotives, was based on the reciprocating principle shown below.
Maltese Cross Mechanism
This type of mechanism is used in clocks to drive the movement of the second hand.
Manual Transmission Mechanism
This mechanism, also called a “stick shift,” is used in cars to change gears manually.
Constant Velocity Joint
This mechanism is used in front-wheel-drive cars.
Torpedo-Boat Destroyer System
This system is used to destroy enemy fleets in naval military operations.
Also called the Wankel engine, this type of internal combustion engine has a unique design that converts pressure into rotating motion instead of using reciprocating pistons. |
Ancestral Puebloans: The Southwest American Indians
– Ancestral Puebloans: The Southwest American Indians “Man corn”, warfare and atlatls were not the only interesting aspects of the Anasazi culture. The history and lifestyles of the Ancestral Puebloans may have contributed to their mysterious disappearance. Their societies were more complex than most humans realize. The Anasazi, or to be politically correct, the Ancestral Puebloans, traveled to the Southwest from Mexico around 100 A.D. (Southwest Indian Relief Council, 2001). The word “Anasazi” originated from the Navajo word that translates to “ancestral enemies.” The name was changed from Anasazi to Ancestral Puebloans so that their descendants today do not take offense at the history of the…
Flourishing North American Cultures
– 2000 years before Europeans began to arrive in the New World, the last era of the pre-Columbian development began. North American cultures such as the Mississippian culture, the Hopewell Tradition, and the Hohokam culture experienced growth and environmental adaptation throughout this era. Major contributions and innovations of Native Americans have developed and been passed on through generations of ancestors. Originating in 700 A.D., the Mississippian culture expanded through the Mississippi Valley and out into the southeastern states of Alabama, Georgia and Florida….
The Pikes Peak Gold Rush and Civil and Indian Wars
– … The propaganda and promotional books printed for the gold rush stated that there were very few problems between the emigrants and the Natives. But after a couple of years, the Indians started realizing that the miners were hunting all their game and using all their resources. They were starting to starve, and in order to survive they would have to raid the homes and the wagons that were coming into Colorado. The raids started to cause fear in the travelers and settlers, which led to the Army’s organized attacks on the natives, including the Sand Creek Massacre, where about 500 Cheyenne and Arapaho men, women, and children were killed…. |
The study of humans has long relied on bones to reveal human DNA. The problem is that those bones are hard to come by. As the Atlantic points out, scientists have only a finger bone and two teeth belonging to the Denisovans, cousins of Neanderthals. It's no wonder then that a Harvard geneticist refers to a new technique of recovering human DNA without bones, described in a study published in Science Thursday, as a "real revolution in technology," per the New York Times. German researchers took dirt samples at seven cave sites in Europe and Asia where Neanderthals or Denisovans once lived. Four returned Neanderthal DNA, and one of the four sites contained Denisovan DNA, per a release, which notes many of the sediment samples were taken from archaeological layers or sites that hadn't previously yielded bones.
"It's a bit like discovering that you can extract gold dust from the air," as one geneticist puts it. Researchers had previously taken animal DNA from sediment, but this study describes the first successful effort involving human DNA. It involved collecting samples at sites where human bones or tools had been found and using molecules that recognize mammalian mitochondrial DNA to "fish out" the material, which sticks to minerals in sediment. The implications are huge, say scientists, and one gives this potential use: probing the soil along the routes early Americans took to get here via Alaska. Study co-author Svante Pääbo sees a future in which the technique becomes "routine." (Perhaps the technique could be of use in proving or disproving this highly contested claim.) |
ClassAction materials were created with the following ideas in mind:
- Astronomy is an extremely visual subject. Astronomy instruction typically involves images, diagrams, tables of data, and occasionally animations. There is no reason that interactive classroom materials should be any different. Thus, the majority of our questions incorporate these same visual sources of information used in astronomy instruction.
- Students learn through feedback and iteration. Most questions can be transformed by the instructor into slightly different versions. When students do poorly on a question and instruction must be given for students to understand the answer, it is useful to follow up with a similar question on the same concept to determine whether student understanding has improved.
- Science courses should be practical. Many astronomy concepts show up in everyday life (and many don't). But whenever possible we have tried to relate astronomical concepts to their everyday counterparts and those elements of astronomy that students actually observe.
- Misconceptions are rampant and well-entrenched among astronomy students. There are no miracle cures. We can only provide instructors with questions to identify misconceptions, and with ammunition and encouragement to overcome them. Considerable resources are provided to help the instructor give feedback on the questions. These include outlines of the subject matter (think PowerPoint slides), images, and interactive animations. In some questions, hints and solutions are provided. These are powerful tools in the hands of an instructor sensitive to student misconceptions and motivated to combat them.
ClassAction modules were designed based on the following principles:
- ClassAction materials are varied and plentiful. All instructors teach astronomy differently and it is important to provide a wide range of materials in both content and difficulty level. If we have erred, it is on the side of inclusion. We hope that every instructor will be able to choose some subset of our materials appropriate for their approach to instruction.
- Most college-level instructors are extremely busy and have very little time to develop instructional materials. Thus, we have endeavored to make ClassAction materials as convenient as possible.
A more thorough introduction to ClassAction pedagogy may be found in this document (pdf). |
The University Record, December 17, 1997
By Sally Pobojewski
News and Information Services
A new 300-site survey of borehole temperatures spanning four continents and five centuries has confirmed what most scientists already believe---the Earth is getting warmer and the rate of warming has been accelerating rapidly since 1900.
"In terms of climate change, the 20th century has not been just another century," says Henry N. Pollack, professor of geological sciences. "Subsurface rock temperatures confirm that the average global surface temperature has increased about 1 degree C. (1.8 degrees F.) over the last five centuries with one-half of that warming taking place in the last 100 years. The 20th century is the warmest and has experienced the fastest rate of warming of any of the five centuries in our study."
According to Pollack, 80 percent of the total 1 degree C. warming recorded in borehole readings from 1500 to the present occurred after 1750, when people began large-scale burning of coal, wood and other fuels during the Industrial Revolution. Since most warming has taken place after 1750, Pollack believes it is likely a direct result of human activity, rather than a natural climate fluctuation.
"If the upward trend of greenhouse gas emissions continues, we can expect another 1 degree C. increase in average global temperature by 2050," Pollack says. "This estimate is not based on model computations, but a projection of actual data. Our results agree with the estimates of global climate warming issued by the United Nations' Intergovernmental Panel on Climate Change (IPCC) and are fully consistent with the conclusion of the IPCC's scientific panel that human activity is a significant driving for ce behind global warming."
Pollack presented temperature readings from 300 underground boreholes in Europe, North America, Australia and South Africa at the American Geophysical Union meeting held in San Francisco last week.
Pollack is one of several geologists who take the Earth's temperature by lowering sensitive thermometers into boreholes drilled from the surface. Because subsurface rocks preserve a record of actual surface temperature changes over time, boreholes are an important data source for scientists studying global climate change. Short-term changes, such as seasonal variations, penetrate only a few meters underground. Long-term changes on scales of hundreds of years are preserved at greater depths. Since meteorological data has been recorded globally only for the last 100 years or so, borehole temperatures are especially important in determining surface temperature for previous centuries.
Individual borehole temperatures can be skewed by local topography or climate conditions, so Pollack and Shaopeng Huang, assistant research scientist, merged the readings into continental data ensembles to balance out local effects and let regional trends come through. They then combined all four regions to get a global average. Because meteorologists track long-term climate changes in 100-year intervals, Pollack and Huang also looked for century-long trends in borehole data.
When they compared the average worldwide borehole temperature change with global meteorological records over the last century, they found both recorded a 0.5 degree C. average global temperature increase since 1900. "The ground says the same thing the air says," Pollack explains.
Pollack's study has been funded by the National Science Foundation and the Czech-USA Cooperative Science Program. |
No one can deny that pi (π, the ratio of the circumference of a circle to its diameter) is a useful constant—drafted into service every day in furniture workshops, in precision toolmaking, and in middle-school and high-school mathematics classes around the world. π is used to calculate the volumes of spheres (such as weather balloons and volleyballs) and cylinders (like grain silos and cups). The cult status of this little irrational number (commonly approximated as 3.14 or 22/7) is so significant that March 14 (3.14) is celebrated as “Pi Day” annually. But what about other single letters, Greek and otherwise, that serve as valuable mathematical and scientific tools? Aren’t they just as important as pi? It depends on whom you talk to, of course. The following is a short list of less-famous but commonly used single-letter constants and variables.
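As a quick illustration of those two volume formulas, here is a minimal Python sketch; the balloon and silo dimensions are invented for the example:

```python
import math

def sphere_volume(radius: float) -> float:
    """Volume of a sphere: V = (4/3) * pi * r^3."""
    return (4.0 / 3.0) * math.pi * radius ** 3

def cylinder_volume(radius: float, height: float) -> float:
    """Volume of a cylinder: V = pi * r^2 * h."""
    return math.pi * radius ** 2 * height

# A weather balloon roughly 2 m in radius and a grain silo 2 m in radius and 12 m tall
print(f"Balloon volume: {sphere_volume(2.0):.1f} m^3")
print(f"Silo volume:    {cylinder_volume(2.0, 12.0):.1f} m^3")
```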
G or “Big G”
G (or “Big G”) is called the gravitational constant or Newton’s constant. It is a quantity whose numerical value depends on the physical units of length, mass, and time used to help determine the size of the gravitational force between two objects in space. G was first used by Sir Isaac Newton to figure gravitational force, but it was first calculated by British natural philosopher and experimentalist Henry Cavendish during his efforts to determine the mass of Earth. Big G is a bit of a misnomer, however, since it is very, very small, only 6.67 × 10⁻¹¹ in SI units (cubic meters per kilogram per second squared).
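To see how G enters Newton’s law of universal gravitation, F = Gm₁m₂/r², here is a minimal Python sketch; the rounded Earth-Moon figures are textbook values used only for illustration:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Newton's law of universal gravitation: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r ** 2

# Rounded values for the Earth-Moon system
earth_mass = 5.97e24  # kg
moon_mass = 7.35e22   # kg
distance = 3.84e8     # m, average Earth-Moon separation

print(f"Earth-Moon attraction: {gravitational_force(earth_mass, moon_mass, distance):.2e} N")
```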
Delta (Δ or d)
As any student of calculus knows, delta (Δ or d) signifies change in the quality or the amount of something. In ecology, the calculus expression dN/dt (which could also be written ΔN/Δt, with N equal to the number of individuals in a population and t equal to a given point in time) is often used to determine the rate of growth in a population. In chemistry, Δ is used to represent a change in temperature (ΔT) or a change in the amount of energy (ΔE) in a reaction.
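A small Python sketch of this idea, approximating dN/dt from yearly counts; the census numbers below are invented purely for illustration:

```python
# Approximate a population's growth rate dN/dt as the change in N over the change in t.
populations = {2018: 1200, 2019: 1380, 2020: 1560}  # individuals counted each year (made-up data)

years = sorted(populations)
for t0, t1 in zip(years, years[1:]):
    delta_n = populations[t1] - populations[t0]  # change in population size
    delta_t = t1 - t0                            # change in time (years)
    print(f"{t0}->{t1}: dN/dt is roughly {delta_n / delta_t:.0f} individuals per year")
```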
Rho (ρ or r)
Rho (ρ or r) is probably best known for its use in correlation coefficients—that is, in statistical operations that try to quantify the relationship (or association) between two variables, such as between height and weight or between surface area and volume. Pearson’s correlation coefficient, r, is one type of correlation coefficient. It measures the strength of the linear relationship between two variables on a continuous scale from −1 through +1. Values of −1 or +1 indicate a perfect linear relationship between the two variables, whereas a value of 0 indicates no linear relationship. The Spearman rank-order correlation coefficient, rs, measures the strength of the association between one variable and members of a set of variables. For example, rs could be used to rank order, and thus prioritize, the risk of a set of health threats to a community.
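Here is a short Python sketch that computes Pearson’s r from first principles; the height and weight pairs are made-up sample data, not measurements from any real survey:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient for two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical height (cm) and weight (kg) pairs, just to exercise the function
heights = [150, 160, 165, 172, 180]
weights = [52, 60, 63, 70, 80]
print(f"r = {pearson_r(heights, weights):.3f}")  # close to +1: a strong positive linear association
```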
Lambda (λ)
The Greek letter lambda (λ) is used often in physics, atmospheric science, climatology, and botany with respect to light. Lambda denotes wavelength—that is, the distance between corresponding points of two consecutive waves. “Corresponding points” refers to two points or particles in the same phase—i.e., points that have completed identical fractions of their periodic motion. Wavelength (λ) is equal to the speed (v) of a wave train in a medium divided by its frequency (f): λ = v/f.
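A brief Python sketch of λ = v/f; the rounded wave speeds and the example frequencies (an FM radio carrier and concert pitch A440) are assumptions chosen for illustration:

```python
SPEED_OF_LIGHT = 3.0e8   # m/s, rounded
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C

def wavelength(speed: float, frequency: float) -> float:
    """Wavelength from the relation lambda = v / f."""
    return speed / frequency

# A 100 MHz FM radio wave and a 440 Hz concert-pitch sound wave
print(f"FM radio:  {wavelength(SPEED_OF_LIGHT, 100e6):.2f} m")
print(f"A440 tone: {wavelength(SPEED_OF_SOUND, 440.0):.3f} m")
```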
The imaginary number (i)
Real numbers can be thought of as “normal” numbers that can be expressed. Real numbers include whole numbers (that is, full-unit counting numbers, such as 1, 2, and 3), rational numbers (that is, numbers that can be expressed as fractions and decimals), and irrational numbers (that is, numbers that cannot be written as a ratio or quotient of two integers, such as π or e). In contrast, imaginary numbers are more complex; they involve the symbol i, or √(−1). i can be used to represent the square root of a negative number. Since i = √(−1), the √(−16) can be represented as 4i. These kinds of operations may be used to simplify the mathematical interpretation of problems in electrical engineering—such as representing the amount of current and the amplitude of an electrical oscillation in signal processing.
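Python’s built-in complex type and its cmath module make these operations easy to try; the phasor at the end is an invented example of the signal-processing use mentioned above:

```python
import cmath

i = complex(0, 1)        # Python writes the imaginary unit as j rather than i
print(i ** 2)            # (-1+0j): i squared is -1

print(cmath.sqrt(-16))   # 4j: the square root of -16 is the imaginary number 4i

# A toy phasor of amplitude 2 at a 45-degree phase angle
phasor = 2 * cmath.exp(1j * cmath.pi / 4)
print(abs(phasor), cmath.phase(phasor))  # amplitude 2.0, phase of about 0.785 rad
```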
The Stefan-Boltzmann constant (σ)
When physicists are trying to calculate the amount of surface radiation a planet or other celestial body emits for a given period of time, they use the Stefan-Boltzmann law. This law states that the total radiant heat energy emitted from a surface is proportional to the fourth power of its absolute temperature. In the equation E = σT⁴, where E is the amount of radiant heat energy and T is the absolute temperature in Kelvin, the Greek letter sigma (σ) represents the constant of proportionality, called the Stefan-Boltzmann constant. This constant has the value 5.6704 × 10⁻⁸ watt per square meter per K⁴, where K⁴ is temperature in Kelvin raised to the fourth power. The law applies only to blackbodies—that is, theoretical physical bodies that absorb all incident heat radiation. Blackbodies are also known as “perfect” or “ideal” emitters, since they are said to emit all of the radiation they absorb. When looking at a real-world surface, creating a model of a perfect emitter by using the Stefan-Boltzmann law serves as a valuable comparative tool for physicists when they attempt to estimate the surface temperatures of stars, planets, and other objects.
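A minimal Python sketch of E = σT⁴; the example temperatures (roughly the Sun’s photosphere and Earth’s mean surface) are rounded textbook values used only for illustration:

```python
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_emittance(temperature_k: float) -> float:
    """Blackbody radiant heat energy per unit area: E = sigma * T^4, in W/m^2."""
    return SIGMA * temperature_k ** 4

# Roughly the Sun's photosphere (5778 K) and Earth's mean surface (288 K)
print(f"Sun-like surface:   {radiant_emittance(5778):.2e} W/m^2")
print(f"Earth-like surface: {radiant_emittance(288):.0f} W/m^2")
```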
The natural logarithm (e)
A logarithm is the exponent or power to which a base must be raised to yield a given number. The natural, or Napierian, logarithm (with base e ≅ 2.71828 [which is an irrational number] and written ln n) is a useful function in mathematics, with applications to mathematical models throughout the physical and biological sciences. The natural logarithm and its base e are often used to measure the time it takes for something to get to a certain level, such as how long it would take for a small population of lemmings to grow into a group of one million individuals or how many years a sample of plutonium will take to decay to a safe level.
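Both of those examples can be sketched in a few lines of Python; the growth rate and starting population below are illustrative assumptions, and the 24,100-year half-life is the commonly cited figure for plutonium-239:

```python
import math

def time_to_reach(n0: float, n_target: float, rate: float) -> float:
    """Solve N(t) = N0 * e^(r*t) for t:  t = ln(N_target / N0) / r."""
    return math.log(n_target / n0) / rate

# A hypothetical lemming population of 500 growing 30% per year, reaching one million
print(f"{time_to_reach(500, 1_000_000, 0.30):.1f} years")

# Exponential decay: years for a plutonium-239 sample to fall to 1% of its activity
half_life = 24_100                       # years
decay_constant = math.log(2) / half_life
print(f"{math.log(100) / decay_constant:,.0f} years")
```

Under these made-up growth numbers, the lemming population would need roughly 25 years to reach one million. |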
Chronic kidney disease (CKD) prevents your kidneys from filtering blood properly.
The main job of the kidneys is to filter waste and excess water out of your blood to make urine. They also help maintain the body's chemical balance, control blood pressure and make hormones. Damaged kidneys can cause waste to build up in your body as well as other health problems.
Diabetes and high blood pressure are the most common causes of chronic kidney disease. Other causes include:
- Arteriosclerosis or hardening of the arteries
- Blockage in the urinary system
- Chronic infections, such as pyelonephritis, a kidney infection that often spreads from bacteria in the bladder
- Cirrhosis or scarring of the liver
- Collagen diseases, such as:
- Lupus, which occurs when the immune system attacks healthy cells and tissues by mistake
- Scleroderma, abnormal growth of connective tissue in skin or body organs
- Congenital abnormalities
- Cystic kidney disease
- Drug abuse or other poisons
- Glomerulonephritis, a type of kidney disease that damages the kidneys so they are unable to filter waste and fluids from the blood
- Heart problems, such as heart disease or heart failure
- Kidney stones
Chronic kidney disease occurs over a long period of time. Although it cannot be stopped or cured, you and your healthcare team can work together to slow its progress.
If the kidneys do not start working on their own, dialysis or other treatment may be needed. Your doctor may also want to start tracking a kidney function measure called the Estimated Glomerular Filtration Rate (eGFR), an approximate calculation of the amount of kidney function that remains.
It is important to track your baseline eGFR and changes in this score over time. Ask your health care team about what is normal for you.
A doctor specializing in kidney disease, called a nephrologist, will help with your treatment.
Treatment may include medicines to lower blood pressure, control blood glucose and lower blood cholesterol.
You can take steps to keep your kidneys healthier longer:
- Choose foods with less salt (sodium)
- Keep your blood pressure below 130/80
- Keep your blood glucose in the target range, if you have diabetes
If you are being treated for kidney disease, contact your doctor if you see any changes to your condition or if any of these signs worsen:
- Swelling in the hands, face or feet
- Itching of the skin
- Nausea or vomiting
- Loss of appetite
- Changes in urination
- Headache and confusion
- Fatigue and weakness, which may be due to anemia
- Feeling short of breath
Why seek treatment at Ohio State
Ohio State is recognized by U.S. News & World Report as one of the nation's best hospitals for urology and nephrology. Schedule an appointment with Ohio State's urology and kidney experts. |
Understanding Attachment Part I
1. Attachment is an innate process by which a new-born enhances his or her chances of survival by responding to any threat or insecurity by seeking out, monitoring the behaviors of and aiming to maintain closeness to his or her protective caregiver (called the attachment figure, aka Mom or Dad).
2. You can observe this process in a little baby that will cry if and when his or her attachment figure (Mom or Dad) leaves the room or goes out of sight. When the attachment figure (Mom or Dad) returns and engages with the baby – the baby will usually settle and calm back down.
3. This biological drive to maintain closeness increases the chances that the little child will receive enough food, clothing, physical touch and care to grow and thrive.
4. In time, this process of homing in on and doing whatever it takes to ensure the presence of the attachment figure (Mom or Dad) applies not only to the physical needs of the child, but also to emotional needs.
5. What this means is that when a child is under stress, in danger or under duress he or she will seek out and flee to the attachment figure (Mom or Dad) for safety: “Attachment is the interactive regulation of emotion…[it is] to seek the ‘safe haven’ of a stronger or wiser other when we are threatened with danger” (Wallin, 2007, p. 301). |
- A new study assessing the cumulative impacts of human activities on wildlife found that the vast majority of terrestrial species are facing “intense” pressure due to humanity’s footprint across the globe.
- Researchers looked at human pressures across the ranges of 20,529 terrestrial vertebrates and found that 85 percent of the animals included in the study, or some 17,517 species, are exposed to “intense human pressures” in at least half of their range. About 16 percent, or 3,328 species, are exposed to these pressures throughout their entire range.
- The researchers say that their results could help improve assessments of species’ vulnerability to extinction.
A new study assessing the cumulative impacts of human activities on wildlife found that the vast majority of terrestrial species are facing “intense” pressure due to humanity’s footprint across the globe.
A research team led by Christopher O’Bryan of Australia’s University of Queensland looked at human pressures across the ranges of 20,529 terrestrial vertebrates and published their results in the journal Global Ecology and Conservation.
The researchers found that 85 percent of the animals included in the study, or some 17,517 species, are exposed to “intense human pressures” in at least half of their range. About 16 percent, or 3,328 species, are exposed to these pressures throughout their entire range.
“Threatened terrestrial vertebrates and species with small ranges are disproportionately exposed to intense human pressure,” O’Bryan and team write in the study. They add that even many species assessed by the IUCN Red List to be of “least concern” in regards to extinction risks are facing severe pressures from human activities: “Our analysis also suggests that there are at least 2478 species considered ‘least concern’ that have considerable portions of their range overlapping with these pressures, which may indicate their risk of decline.”
A growing body of evidence has documented how land-use changes, such as urbanization and conversion to pastureland or agricultural land, and other human activities, like infrastructure development, poaching, and over-hunting, are driving wildlife population numbers down. According to the authors of the study, however, previous research into habitat availability has typically focused on vegetation intactness, which does not take into account the cumulative threats that can impact species even when their habitat appears to be intact.
To get the full picture on how humanity is affecting the world’s wildlife, O’Bryan and colleagues used a dataset called the Human Footprint, “a cumulative human pressure assessment that includes data on roads, built environments, human population density, railways, navigable waterways, pasturelands, and croplands, at a 1-km2 resolution globally.” They add: “The Human Footprint is the most comprehensive global human pressure dataset available, and given the nature of the input data, captures the greatest number of drivers of species declines (e.g. agricultural activity, urban development, transportation, energy production, and system modification), and has been shown to explain extinction risk in globally threatened vertebrates.”
O’Bryan and team used the Human Footprint to quantify the proportion of ranges facing intense human pressure for 10,745 species of birds, 4,592 mammals, 5,000 amphibians, and 192 reptiles — all major terrestrial taxonomic groups whose distributions and extinction risks have already been comprehensively assessed. They determined that “all taxonomic classes are experiencing intense human pressure across the majority of their range,” with nearly 40% of amphibians, 15% of mammals, and 12% of birds having no portion of their range that is free of intense human pressures.
“Our work shows that a large proportion of terrestrial vertebrates have nowhere to hide from human pressures ranging from pastureland and agriculture all the way to extreme urban conglomerates,” O’Bryan said in a statement.
The Human Footprint data doesn’t capture many pressures resulting from human activities that affect wildlife directly, such as global climate change, pollution, over-exploitation, and introduction of invasive species to their habitat, so the researchers note that their results represent “a conservative estimate of pressure” facing terrestrial vertebrates.
The researchers say that their results could help improve assessments of species’ vulnerability to extinction — for instance, aiding in attempts to measure progress against 2020 Aichi Biodiversity Targets like Target 12, which calls for preventing extinctions and improving the conservation status of species in decline, and Target 5, which calls for halving habitat degradation and fragmentation.
“Given the growing human influence on the planet, time and space are running out for biodiversity, and we need to prioritize actions against these intense human pressures,” study co-author James Watson of WCS (Wildlife Conservation Society) and the University of Queensland said in a statement. “Using cumulative human pressure data, we can identify areas that are at higher risk and where conservation action is immediately needed to ensure wildlife has enough range to persist.”
• O’Bryan, C. J., Allan, J. R., Holden, M., Sanderson, C., Venter, O., Di Marco, M., … & Watson, J. E. (2020). Intense human pressure is widespread across terrestrial vertebrate ranges. Global Ecology and Conservation, 21, e00882. doi:10.1016/j.gecco.2019.e00882