This is a number tracing worksheet you can download so your youngsters can practice drawing the numbers 1 – 10. These make a great accompaniment to our FREE online counting games, which you will find HERE, and our counting videos, which you will find HERE. These worksheets contain a light version of each number with arrows to help young ones know the direction in which to draw it. This helps build muscle memory, and they will soon learn how to form the numbers 1 – 10.
How to Use the Number Trace Worksheets
Number tracing must be fun – so give your little ones brightly colored crayons or pencils to draw the numbers. Encourage them to work on a couple of numbers at a time, and check out our numbers worksheets, which have complementary activities.
More Numeracy Free Printables
We have lots more free printable worksheets which you can download to help your young ones practice key numeracy skills at home. You will find them HERE.
How to Print the Worksheets
You will see there are two versions below. We have made these free worksheets in both A4 and Letter size, so you can print them with ease onto the paper you have to hand.
Download the Number Tracing Worksheets
Help us spread the word by pinning the image below to Pinterest (and sharing anywhere else you can think of 😉 ). The more we spread the word, the more games we can make for you to play here at Numboodles!
Question No : 1
A router has two Fast Ethernet interfaces and needs to connect to four VLANs in the local network. How can you accomplish this task, using the fewest physical interfaces and without decreasing network performance?
A. Use a hub to connect the four VLANs with a Fast Ethernet interface on the router.
B. Add a second router to handle the VLAN traffic.
C. Add two more Fast Ethernet interfaces.
D. Implement a router-on-a-stick configuration.
A router on a stick allows you to use sub-interfaces to create multiple logical networks on a single physical interface.
Question No : 2
Refer to the graphic. Host A is communicating with the server. What will be the source MAC address of the frames received by Host A from the server?
A. the MAC address of router interface e0
B. the MAC address of router interface e1
C. the MAC address of the server network interface
D. the MAC address of host A
Whereas switches can only examine and forward packets based on the contents of the MAC header, routers can look further into the packet to discover the network for which a packet is destined. Routers make forwarding decisions based on the packet's network-layer header (such as an IPX header or IP header). These network-layer headers contain source and destination network addresses. Local devices address packets to the router's MAC address in the MAC header. After receiving the packets, the router must perform the following steps:
1. Check the incoming packet for corruption, and remove the MAC header. The router checks the packet for MAC-layer errors. The router then strips off the MAC header and examines the network-layer header to determine what to do with the packet.
2. Examine the age of the packet. The router must ensure that the packet has not come too far to be forwarded. For example, IPX headers contain a hop count. By default, 15 hops is the maximum number of hops (or routers) that a packet can cross. If a packet has a hop count of 15, the router discards the packet. IP headers contain a Time to Live (TTL) value. Unlike the IPX hop count, which increments as the packet is forwarded through each router, the IP TTL value decrements as the IP packet is forwarded through each router. If an IP packet has a TTL value of 1, the router discards the packet. A router cannot decrement the TTL value to 0 and then forward the packet.
3. Determine the route to the destination. Routers maintain a routing table that lists available networks, the direction to the desired network (the outgoing interface number), and the distance to those networks. After determining which direction to forward the packet, the router must build a new header. (If you want to read the IP routing tables on a Windows 95/98 workstation, type ROUTE PRINT in the DOS box.)
4. Build the new MAC header and forward the packet. Finally, the router builds a new MAC header for the packet. The MAC header includes the router's MAC address and the final destination's MAC address or the MAC address of the next router in the path.
Question No : 3
Refer to the exhibit: What will Router1 do when it receives the data frame shown? (Choose three.)
A. Router1 will strip off the source MAC address and replace it with the MAC address 0000.0c36.6965.
B. Router1 will strip off the source IP address and replace it with the IP address 192.168.40.1.
C. Router1 will strip off the destination MAC address and replace it with the MAC address 0000.0c07.4320.
D. Router1 will strip off the destination IP address and replace it with the IP address of 192.168.40.1.
E. Router1 will forward the data packet out interface FastEthernet0/1.
F. Router1 will forward the data packet out interface FastEthernet0/2.
Remember, the source and destination MAC addresses change at each router hop, along with the TTL being decremented, but the source and destination IP addresses remain the same from source to destination.
Question No : 4
Which three statements accurately describe Layer 2 Ethernet switches? (Choose three.)
A. Spanning Tree Protocol allows switches to automatically share VLAN information.
B. Establishing VLANs increases the number of broadcast domains.
C. Switches that are configured with VLANs make forwarding decisions based on both Layer 2 and Layer 3 address information.
D. Microsegmentation decreases the number of collisions on the network.
E. In a properly functioning network with redundant switched paths, each switched segment will contain one root bridge with all its ports in the forwarding state. All other switches in that broadcast domain will have only one root port.
F. If a switch receives a frame for an unknown destination, it uses ARP to resolve the address.
Microsegmentation is a network design (functionality) where each workstation or device on a network gets its own dedicated segment (collision domain) to the switch. Each network device gets the full bandwidth of the segment and does not have to share the segment with other devices. Microsegmentation reduces and can even eliminate collisions because each segment is its own collision domain. Note: Microsegmentation decreases the number of collisions but it increases the number of collision domains.
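The per-hop behavior covered in Questions 2 and 3 can be summed up in a few lines of code. The sketch below is purely illustrative (the MAC addresses, IP addresses, and TTL value are made up, not taken from the exhibits): a router rewrites the source and destination MAC addresses and decrements the TTL, while the source and destination IP addresses pass through untouched.

```python
# Minimal sketch of per-hop router behavior (illustrative only; all address
# values are invented). The router rewrites the Layer 2 header and decrements
# the TTL, but never touches the source/destination IP addresses.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Frame:
    src_mac: str
    dst_mac: str
    src_ip: str
    dst_ip: str
    ttl: int

def forward(frame: Frame, router_mac: str, next_hop_mac: str) -> Frame | None:
    """Return the frame rewritten for the next hop, or None if it is dropped."""
    if frame.ttl <= 1:                  # TTL would reach 0: discard the packet
        return None
    return replace(
        frame,
        src_mac=router_mac,             # new source MAC: the router's exit interface
        dst_mac=next_hop_mac,           # new destination MAC: next hop (or final host)
        ttl=frame.ttl - 1,              # TTL decremented at every router
        # src_ip and dst_ip are intentionally unchanged end to end
    )

f = Frame("aaaa.aaaa.aaaa", "bbbb.bbbb.bbbb", "192.168.10.5", "192.168.40.7", ttl=64)
print(forward(f, router_mac="cccc.cccc.cccc", next_hop_mac="dddd.dddd.dddd"))
```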
Crews move young bull trout, trapped in pools by dewatering, upriver to where water remains year-round.
Fish out of Water
Scientists think that the Kachess River has always dewatered to some extent. The timing of Kachess bull trout traveling up to spawn is much later than that of their relatives in Box Canyon and Gold Creek. However, the extent of dewatering is likely much worse today than it was in the past due to the combination of human impacts and climate change. This means less habitat for young rearing bull trout, who spend 3 to 4 years growing larger before moving to lakes. Dewatering takes place rapidly, and young bull trout are not strong swimmers. As a result, they become trapped in pools and sections of stream that go dry, and they die.
Solving the Kachess Rubik's Cube
We can't solve the problem of dewatering entirely, since it is tied to the dam. However, we can find ways to retain and create better habitat for bull trout and other species. The first step is gathering information about the upper Kachess watershed. There is a world of possibilities to help bull trout, including:
- Placing wood in strategic locations to create refuge during high flows so young fish aren't pushed downriver into the stretch that dewaters.
- Constructing log jams that provide pools with cover that can last throughout the summer for fish that are pushed downstream.
- Identifying the previous locations of tributaries before they were cut off by the road, and reconnecting them.
- Identifying relic channels that are lower than the current floodplain, and directing flow there.
We want to know the most effective ways of restoring the river, and that is where data comes in. Gathering data from gravel counts, ground surveys, LiDAR, and flow and groundwater measurements will direct us where we need to go. For example, LiDAR data will provide us with insights into human impacts over the past 100 years, which we can then correct.
The confluence of Mineral Creek (left) and the Kachess River (right). Interesting fact: Mineral Creek contributes significantly more water than the Kachess, but is much less utilized by bull trout when spawning.
Kugel, Scott, "Abandoned Mine Land Impacts on Tributaries in the Upper Yakima River Watershed, Eastern Cascades, Washington" (2018). All Master's Theses. 939.
The Montessori Method: Science or Philosophy?
Although we sometimes hear the word "philosophy" applied to Montessori education, it is not a set of beliefs, but rather a scientific method, an approach to the child which has as its core a fundamental respect for the abilities with which each child is endowed. Dr. Montessori was a scientist and a physician. When she opened her first Children's House in 1907, very little work had been done in the field of early child development. Because of her background, Montessori used scientific techniques to watch children as they worked and played. She drew conclusions, made adjustments depending upon what she had seen, and observed again. Every piece of equipment and every activity she developed was a result of watching children's natural development. Moreover, her conclusions were drawn from observations taken from numerous schools, in more than one country, over a long period of time. Central to Montessori is the observation that children build themselves using what is available in their environment. Young children learn in a multi-sensory fashion, not by just watching or listening. Between birth and age six they also have an enormous capacity to learn with an apparent lack of effort. Montessori called this period "The Absorbent Mind." Montessori also noted times in children's development when they could learn certain skills more easily, so-called "sensitive periods." Once these periods were over, learning came with more struggle, and sometimes not at all. She concluded that any effective form of education should incorporate and take advantage of these natural cycles of learning. Some sensitive periods lasted for years; others, she speculated, lasted only for a period of days. Those she clearly identified included sensitive periods for movement, learning by touch, language, order, imagination and socialization. Perhaps one of the most dramatic observations Montessori made concerned the ability of young children to concentrate. Up to that time young children were considered to have very brief attention spans. She noticed, however, that when deeply interested in a carefully designed material a child could focus in such a way that he or she seemed to shut out every distraction – what we would call today being in the "zone" or "flow." Moreover, she noticed that children who repeatedly experienced this kind of concentration were not only joyous learners, but exhibited an inner calm and self-reliance she became convinced was the natural state of childhood. She called this "normalization." Montessori observed and believed in the ability of each child. However, she also knew that children didn't develop their full potential in a void. They required a carefully prepared developmental environment upon which they could act. It was this environment, in combination with the adult and Montessori's educational materials, which formed the foundation for her method. And it is the interaction of the child, the adult, and the environment which makes her approach so eminently successful, not just as an alternate form of schooling, but as an authentic education for life.
What Are the Liberal Arts? A liberal arts education means studying broadly — taking classes in many different subjects — and building skills that are geared toward more than just one profession. By studying the liberal arts, students develop strong critical thinking, problem solving, and communication skills. Liberal arts students learn to approach questions flexibly and to think across multiple disciplines. These are skills employers say they value most, even more than a specific major. In today’s labor market, career paths are changing rapidly, and graduates must draw from a variety of skill sets to adapt to challenges and capitalize on opportunities.
Picture of the Day
Tiny craters in Meridiani Planum. Credit: NASA/JPL Mars Exploration
Oct 12, 2007
Mars in Miniature
Mars exhibits many formations whose shapes are independent of size. Could the scalar nature of electrical discharges be the reason?
Previous Thunderbolts Picture of the Day articles discussed the large dune fields on Mars, the channels carved into them, and the craters with which they are associated. In many instances where standard geological and astrophysical theories have come to no conclusions, we have concluded that electricity is the one unifying factor that explains how they all may have formed. Most of the structures have been examined through the use of satellite imagery returned from orbital cameras, so there has been a need to look at the surface more closely.
The Mars Exploration Rover (MER) B, Opportunity, has been surveying the Martian terrain for more than three years. On its way to the rim of Victoria crater, it rolled through fields of dark dunes and white, polygonal blocks of stone. The stone blocks have been dubbed "cobbles" or "pavement" because they are so flat compared to the undulating piles of gravel that surround them. The flat stones are unique in that they are split into regular polygons, with cracks that are most often filled with hematite "blueberries." Some exhibit fractures that run in concentric arcs from what appear to be hollow impact points. They appear to have been roughly etched, or eroded away on top, but the cracks have edges that are sometimes razor-sharp. Many appear to have been sliced off at ground level from larger blocks composed of the same material. The big chunks also contain blueberries in great concentration.
In the image at the top of the page, there is a small crater visible in the edge of a dune, with another even smaller version further in the distance. The crater in the foreground is less than half a meter in diameter. The one in the background is less than three centimeters deep. Both are undistorted, with rounded rims and no blast debris, so they can't be micro-meteor impacts. Around the foreground crater are dark streaks. The edges of the dune are scalloped and striated. A closer look reveals that they are covered in small ripples that look compacted and solid, rather than wind-blown. The tracks left by the MER have well-defined edges, as if it rolled through damp sand. The grains are relatively large and uniform in size and are mostly iron oxide. The dunes are layered with light and dark bands, and there are bright edges on many of the small ridges that lead down to the craters.
It is unusual that dark hematite is so intimately bound up with white silicon-dioxide rock. Could there be a connection between silica and hematite? Could the same electric arcs that are thought to have carved the Red Planet transmute elements - reforming the atomic structure of silicon (with 28 particles in its nucleus) into that of iron (with 56)?
In space-based images of Mars there are craters measuring hundreds of kilometers in diameter and dune fields 800 meters high that have identical structure to these one-meter ripples. On Mars the large and the small, as well as the light and the dark, are starkly defined. Are they two results of one cause?
By Stephen Smith
Children model how they see adults act. They are more inclined to do what we do instead of doing what we say. This is especially true in terms of how we treat others and how we model empathy and compassion. We believe that social awareness and empathy can be taught - that children can learn how to understand the feelings of others, even if they come from diverse backgrounds or traditions. To foster this skill in our children, we often have to block out a lot of the noise of the outside world and find opportunities to model and embrace empathy. Social awareness is among the five core competencies of Social Emotional Learning (SEL), something we have been teaching our students about for a number of years. At Eanes ISD, this skill is the foundation upon which everything else we teach our students is built. Social awareness is the district focus during the third nine weeks of each school year. Specifically, a healthy social awareness is about:
- Taking others' perspectives
- Recognizing strengths in others
- Demonstrating empathy and compassion
- Showing concern for the feelings of others
- Understanding and expressing gratitude
- Identifying diverse social norms, including unjust ones
- Recognizing situational demands and opportunities
- Understanding the influences of organizations and systems on behaviors
The primary goal of the DEI (Diversity, Equity, and Inclusion) initiative in Eanes ISD is to help build a school community of trust by encouraging an understanding of and appreciation for our differences. We want our students to understand that we are all unique in our own way and that it is our differences that make us the remarkable people we are. By treating each other with understanding, respect, and kindness, we have the opportunity to enrich our own lives by learning about people who are different from us. What can each of us do to promote social awareness and empathy? While this is difficult during a pandemic when we may not be able to interact as much with each other in the broader community, some of us have more time than usual together as families. Helping our children talk about and process what they are seeing and reading about in the news and on social media is an important part of their developing a healthy understanding of and compassion for others. So is helping them understand how their actions towards others may either contribute to someone feeling loved and accepted or make them feel marginalized and inferior. Each of us can be a role model for a child, whether or not we have children of our own. In the words of Brené Brown, "First and foremost, we need to be the adults we want our children to be...we should model the kindness we want to see." Please check out these links with more ideas on how to teach your child to become more socially aware and empathetic.
Depending on the recipe, most baked goods require the use of eggs. Believe it or not, eggs serve a crucial role in many recipes, but cooking the perfect egg often proves to be a difficult task. This one simple ingredient has a series of characteristics, which can vary depending on the method used to cook the eggs.
Factors that affect the cooking time of eggs:
Age of the eggs: There is a specific bonding between the inner membrane of the egg and the egg whites. This bond is most evident in fresh eggs from a farm, which is why they are often difficult to boil/cook. These types of eggs are best used for frying. On the other hand, this bond eventually breaks down over time, making it easier to boil/cook eggs found at a supermarket.
Protein Bonding Temperature: Egg whites contain certain proteins that can bond together. These bonds affect the appearance and structure of the egg, as they can lead to the rubbery texture of the egg white that can sometimes be found in hard-boiled eggs. Between 30-140°F, the proteins of the egg whites expand. Above 140°F, the proteins begin to bond, and after 155°F, the proteins solidify. At around 180°F, the proteins bond together fully, giving the egg white its opaque and firm character. Any temperature above 180°F may lead to the release of hydrogen sulfide, which is responsible for the smell of rotten eggs, as well as the dark green-grayish compound located between the egg white and yolk if overcooked.
Egg Yolk Temperature: The egg yolk is mainly composed of fatty acids, cholesterol, and some proteins. Because of this, different temperatures affect its behavior. Cooking at any temperature below 145°F will have no effect on the yolk. Once it reaches a temperature around 160°F, the yolk will become firm but will still retain its bright color. Any temperature over 170°F will cause the yolk to turn a pale yellow, and it will have a crumbly consistency. This results in its chalky texture, and it will also release ferrous sulfide, which contributes to the smell of rotten eggs.
Altitude: Altitude also plays a key role in cooking an egg, as altitude affects the boiling temperature of water. According to the U.S. Department of Agriculture (USDA), the boiling temperature at a point above 2,000 feet is around 208°F, lower than the sea-level boiling temperature of 212°F. Due to this difference, it takes more time to properly cook an egg at higher altitudes than at lower ones.
On the other hand, there are also many ways to prepare eggs, with each method having its own scientific background.
Science Behind Various Methods of Preparing Eggs:
Heating: When heating eggs, the egg-white proteins move around and collide with water molecules. Due to this, weak bonds may break, causing the egg-white proteins to uncurl and collide with other proteins that have uncurled as well. This results in new chemical bonds connecting different proteins to one another. The breaking of bonds and formation of new bonds allow the egg-white proteins to form a network of interconnected proteins. These bonds are responsible for the ability of the egg whites to develop a rubbery texture, as discussed in the "Protein Bonding Temperature" paragraph earlier in this post.
Beating/whipping: Beating or whipping eggs exposes air bubbles to egg whites. This exposure allows the egg proteins to unfold, just as heating would unfold them.
Egg-white proteins consist of both hydrophilic and hydrophobic amino acids. In other words, some amino acids are attracted to water, while others are repelled by it. Prior to uncurling, the hydrophobic amino acids are located in the center, away from the water, while the hydrophilic amino acids are located closer to the water. When the egg-white protein encounters an air bubble, part of the protein is exposed to air and part to water. This causes the protein to uncurl so that the hydrophilic and hydrophobic amino acids can each be located in their desired respective areas. This allows the amino acids to bond with each other, creating a network of bonds that holds the air bubbles in place. When the air bubbles are heated, the gas inside them expands. Because the area around each bubble has solidified by then, the structure usually does not collapse when the bubbles burst. The protein film that lines the outside of the air bubbles is what prevents them from collapsing when baking, and it allows for the consistency found in a soufflé or meringue. The more the egg whites are whipped, the stiffer they will become. On the other hand, unbeaten egg whites often act as a binder, which holds a cake together. Here's an interesting fun fact: There is a myth stating that copper bowls are better for whipping eggs. There is actual scientific support for this claim. Copper ions from the bowl combine with conalbumin, one of the proteins found in eggs. This combination forms a bond that is stronger than the protein itself, making it less likely for the egg-white proteins to unfold. The copper can also react with sulfur-containing groups on other proteins found in eggs, making the egg proteins even more stable. If a copper bowl is not used, ingredients such as cream of tartar or vinegar can be added to produce a similar effect.
Mixing: Many recipes call for the mixing of oil-based and water-based liquids. However, these two are immiscible and do not interact with one another. Because of this, egg yolks are often used to create an emulsion. Egg yolks contain a number of emulsifiers, some of them hydrophobic and others hydrophilic. Because of this, thoroughly mixing egg proteins with oil and water will allow part of the protein to attract the water and another part to attract the oil. Egg yolks also contain lecithin. The lecithin, which is a phospholipid, acts as an emulsifier. Due to its structure (see figure below), it has a hydrophilic head and a hydrophobic tail. The tail is attracted to the oil, while the head is attracted to the water. These important characteristics of egg proteins play a crucial role in making foods such as mayonnaise, which requires the mixing of water and oil.
On a side note, eggs can also be used as moisteners (instead of using water) and as a good source of fat and amino acids. An egg can also be used as a glaze, where its protein contributes to the Maillard reaction. So next time you bake, don't forget to note the complex structure and characteristics of an egg…it is because of this that eggs are used in various methods of cooking. Not only does an egg serve for taste, but it also plays a vital role in the texture or appearance of certain kinds of food.
Author: Erica Rowane Bautista
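To pull together the temperature thresholds from the "Protein Bonding Temperature" and "Egg Yolk Temperature" sections above, here is a small illustrative Python sketch (not part of the original article). The cutoff values come from the text; the in-between descriptions are simplifications, and real results also depend on cooking time, not just temperature.

```python
# Illustrative mapping of cooking temperature (degrees Fahrenheit) to the
# approximate state of egg white and yolk, using the thresholds quoted in the
# article. Treat this as a mnemonic, not a cooking tool.

def egg_white_state(temp_f: float) -> str:
    if temp_f < 140:
        return "proteins expanding, still translucent"
    if temp_f < 155:
        return "proteins beginning to bond"
    if temp_f < 180:
        return "proteins solidifying"
    return "opaque and firm; hydrogen sulfide (rotten-egg smell) if overcooked"

def egg_yolk_state(temp_f: float) -> str:
    if temp_f < 145:
        return "unchanged"
    if temp_f < 160:
        return "starting to thicken"
    if temp_f <= 170:
        return "firm but still bright in color"
    return "pale yellow, crumbly and chalky; ferrous sulfide released"

for t in (120, 150, 165, 185):
    print(f"{t}F  white: {egg_white_state(t):<60} yolk: {egg_yolk_state(t)}")
```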
French conjugation
Conjugation is the set of all the forms a verb can take.
- What is conjugation?
- Auxiliaries and semi-auxiliaries
- Active and passive forms
- The stem and the ending (le radical et la terminaison)
How to remember how tenses are built?
Depending on the verb and its use, accents, hyphens, or spelling changes can appear. Here are the main rules for those changes.
Conjugating the verb
One of the main difficulties in French conjugation is verb agreement. The auxiliary, the subject, or the verb type can determine it.
- Agreement with the subject
- Agreement of the past participle
- The auxiliaries avoir and être
- Past participle followed by an infinitive (advanced)
- Conjugation of pronominal verbs (advanced)
- The pronoun on (advanced)
Using the correct tense
When to use the following tenses?
Reliable and easy to use. Consistent and accurate.
Magnetic damping is an important concept in electromagnetism, and its mechanical effect has very important applications. This experimental apparatus is designed to measure the uniform (terminal) sliding speed of a magnetic slide on an inclined rail made of a non-ferromagnetic conductor. Through data processing, the magnetic damping coefficient and the sliding friction coefficient are obtained. The apparatus relates to physics concepts in both mechanics and electromagnetism.
This apparatus has the following advantages:
1. It has a reliable design and convenient angle adjustment.
2. Experimental data have good repeatability and consistency, and the experimental error is small.
3. The incline angle can be easily calculated by directly reading the length of one side of the triangle from the scale on the horizontal support (the other two sides are known).
4. The timer is intelligent and can store 10 counts of timing data.
5. It can be used for general physics experiments in colleges.
The instruction manual contains experimental configurations, principles, step-by-step instructions, and examples of experiment results. Please click Experiment Theory and Contents to find more information about this apparatus.
Specifications:
- Inclined rail: range of adjustable angle 0°–90°; length 1.1 m; length at junction 0.44 m
- Adjusting support: length 0.63 m
- Counting timer: counting 10 times (storage); timing range 0.000–9.999 s; resolution 0.001 s
- Magnetic slide: diameter 18 mm; thickness 6 mm; mass 11.07 g
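The instruction manual covers the actual procedure; purely as an illustration of the data processing involved, the sketch below assumes a linear (viscous-style) damping force F = -b*v, so that at the terminal (uniform) speed on an incline of angle theta the force balance reads m*g*sin(theta) = mu*m*g*cos(theta) + b*v. Fitting measured terminal speeds at several angles then yields both the damping coefficient b and the friction coefficient mu. The angle and speed values below are invented for illustration only.

```python
import numpy as np

# Assumed model (illustrative): v = (m*g/b)*sin(theta) - (mu*m*g/b)*cos(theta)
# at terminal speed. Fit A = m*g/b and B = -mu*m*g/b by least squares from
# measured (angle, terminal speed) pairs, then recover b and mu.

g = 9.81          # m/s^2
m = 11.07e-3      # slide mass in kg (from the spec list above)

# Example measurements: incline angle (degrees) and terminal speed (m/s).
# These numbers are made up for illustration only.
theta_deg = np.array([30.0, 40.0, 50.0, 60.0])
v = np.array([0.12, 0.21, 0.30, 0.38])

theta = np.radians(theta_deg)
X = np.column_stack([np.sin(theta), np.cos(theta)])
(A, B), *_ = np.linalg.lstsq(X, v, rcond=None)

b = m * g / A     # magnetic damping coefficient, kg/s
mu = -B / A       # sliding friction coefficient (dimensionless)
print(f"b  = {b:.4f} kg/s")
print(f"mu = {mu:.3f}")
```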
Dracunculiasis (guinea worm disease) is caused by the nematode (roundworm) Dracunculus medinensis. The Centers for Disease Control and Prevention (CDC) has provided the following answers to questions about the organism and the disease:
What is dracunculiasis?
Dracunculiasis, more commonly known as Guinea worm disease (GWD), is a preventable infection caused by the parasite Dracunculus medinensis. Infection affects poor communities in remote parts of Africa that do not have safe water to drink. Currently, many organizations, including The Global 2000 program of The Carter Center of Emory University, UNICEF, Centers for Disease Control and Prevention (CDC), and the World Health Organization (WHO) are helping the last 5 countries in the world (Sudan, Ghana, Mali, Niger, and Nigeria) to eradicate the disease. Since 1986, when an estimated 3.5 million people were infected annually, the campaign has eliminated much of the disease. In 2007, only 9,585 cases of GWD were reported. Most of those cases were from Sudan (61%) and Ghana (35%). All affected countries are aiming to eliminate Guinea worm disease as soon as possible.
How does Guinea worm disease spread?
Approximately 1 year after a person drinks contaminated water, the adult female Guinea worm emerges from the skin of the infected person. Persons with worms protruding through the skin may enter sources of drinking water and unwittingly allow the worm to release larvae into the water. These larvae are ingested by microscopic copepods (tiny "water fleas") that live in these water sources. Persons become infected by drinking water containing the water fleas harboring the Guinea worm larvae. Once ingested, the stomach acid digests the water fleas, but not the Guinea worm larvae. These larvae find their way to the small intestine, where they penetrate the wall of the intestine and pass into the body cavity. During the next 10-14 months, the female Guinea worm larvae grow into full size adults, 60-100 centimeters (2-3 feet) long and as wide as a cooked spaghetti noodle. These adult female worms then migrate and emerge from the skin anywhere on the body, but usually on the lower limbs. A blister develops on the skin at the site where the worm will emerge. This blister causes a very painful burning sensation and it ruptures within 24-72 hours. Immersion of the affected limb into water helps relieve the pain but it also triggers the Guinea worm to release a milky white liquid containing millions of immature larvae into the water, thus contaminating the water supply and starting the cycle over again. For several days after it has emerged from the ulcer, the female Guinea worm is capable of releasing more larvae whenever it comes in contact with water.
What are the signs and symptoms of Guinea worm disease?
Infected persons do not usually have symptoms until about one year after they become infected. A few days to hours before the worm emerges, the person may develop a fever, swelling, and pain in the area. More than 90% of the worms appear on the legs and feet, but may occur anywhere on the body. People in remote, rural communities, who are most commonly affected by Guinea worm disease (GWD), frequently do not have access to medical care. Emergence of the adult female worm can be very painful, slow, and disabling. Frequently, the skin lesions caused by the worm develop secondary bacterial infections, which exacerbate the pain and extend the period of incapacitation to weeks or months.
Sometimes permanent disability results if joints are infected and become locked.
What is the treatment for Guinea worm disease?
There is no drug to treat Guinea worm disease (GWD) and no vaccine to prevent infection. Once the worm emerges from the wound, it can only be pulled out a few centimeters each day and wrapped around a piece of gauze or a small stick. Sometimes the worm can be pulled out completely within a few days, but this process usually takes weeks or months. Analgesics, such as aspirin or ibuprofen, can help reduce swelling; antibiotic ointment can help prevent bacterial infections. The worm can also be surgically removed by a trained doctor in a medical facility before an ulcer forms.
Where is Guinea worm disease found?
Dracunculiasis now occurs only in 5 countries in sub-Saharan Africa. Transmission of the disease is most common in very remote rural villages and in areas visited by nomadic groups. In 2007, the two most endemic countries, Sudan and Ghana, reported a combined 9,173 cases of Guinea worm disease (GWD): 5,815 and 3,358 cases, respectively. Other endemic countries reporting cases of GWD in 2007 were Mali (313 cases), Nigeria (73 cases), and Niger (14 cases). Asia is now free of the disease. Transmission of GWD no longer occurs in several African countries, including Benin, Burkina Faso, Cameroon, Central African Republic, Chad, Cote d'Ivoire, Ethiopia, Kenya, Mauritania, Senegal, Togo, and Uganda. No locally acquired cases of disease have been reported in these countries in the last year or more. The threat of case importations from the remaining endemic countries requires that surveillance be maintained in formerly endemic areas until official certification. The World Health Organization has certified 180 countries free of transmission of dracunculiasis, including six formerly endemic countries: Pakistan (in 1996), India (in 2000), Senegal and Yemen (in 2004), and Central African Republic and Cameroon (in 2007).
Who is at risk for infection?
Anyone who drinks standing pond water contaminated by persons with GWD is at risk for infection. People who live in villages where the infection is common are at greatest risk.
Is Guinea worm disease a serious illness?
Yes. The disease causes preventable suffering for infected persons and is a heavy economic and social burden for affected communities. Emergence of the adult female worms can be very painful, slow, and disabling. Parents who have active Guinea worm disease may not be able to care for their children. They are also prevented from working in their fields and tending their animals. Because worm emergence usually occurs during planting and harvesting season, heavy crop losses may result, leading to financial problems for the entire family. Children may be required to work the fields or tend animals in place of their disabled parents, preventing them from attending school. Therefore, GWD is both a disease of poverty and also a cause of poverty because of the disability it causes.
Is a person immune to Guinea worm disease once he or she has it?
No. Infection does not produce immunity, and many people in affected villages suffer from the disease year after year.
How can Guinea worm disease be prevented?
Because GWD can only be transmitted via drinking contaminated water, educating people to follow these simple control measures can completely prevent illness and eliminate transmission of the disease:
- Drink only water from underground sources (such as from borehole or hand-dug wells) free from contamination.
- Prevent persons with an open Guinea worm ulcer from entering ponds and wells used for drinking water.
- Always filter drinking water, using a cloth filter, to remove the water fleas.
Additionally, unsafe sources of drinking water can be treated with an approved larvicide, such as ABATE®*, that kills copepods, and communities can be provided with new safe sources of drinking water, or have existing dysfunctional ones repaired.
* Use of trade names is for identification only and does not imply endorsement by the Public Health Service or by the U.S. Department of Health and Human Services.
This fact sheet is for information only and is not meant to be used for self-diagnosis or as a substitute for consultation with a health care provider. If you have any questions about the disease described above or think that you may have a parasitic infection, consult a health care provider.
Disclaimer: This article is taken wholly from, or contains information that was originally published by, the Centers for Disease Control and Prevention (CDC). Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the Centers for Disease Control and Prevention (CDC) should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.
The distribution of organisms within a community can often be determined by the degree of plasticity or degree of specialization of resource acquisition. Resource acquisition is often based on the morphology of an organism, behavior, or a combination of both. Performance tests of feeding can identify the possible interactions that allow one species to better exploit a prey item. Scavenging behaviors in the presence or absence of a competitor were investigated by quantifying prey selection in a trophic generalist, spiny dogfish Squalus acanthias, and a trophic specialist, smooth-hounds Mustelus canis, in order to determine if each shark scavenged according to its jaw morphology. The diet of dogfish consists of small fishes, squid, ctenophores, and bivalves; they are expected to be nonselective predators. Smooth-hounds primarily feed on crustaceans; therefore, they are predicted to select crabs over other prey types. Prey selection was quantified by ranking each prey item according to the order it was consumed. Dietary shifts were analyzed by comparing the percentage of each prey item selected during solitary versus competitive scavenging. When scavenging alone, dogfish prefer herring and squid, which are easily handled by their cutting dentition. Dogfish shift their diet to include a greater number of prey types when scavenging with a competitor. Smooth-hounds scavenge on squid, herring, and shrimp when alone, but increase the number of crabs in the diet when scavenging competitively. Competition causes smooth-hounds to scavenge according to their jaw morphology and locomotor abilities, which enables them to feed on a specialized resource.
Gerry, Shannon Page and Scott, Andrea J., "Shark scavenging behavior in the presence of competition" (2010). Biology Faculty Publications. 23.
Gerry, Shannon P. and Scott, Andrea J. 2010. Shark scavenging behavior in the presence of competition. Current Zoology 56(1): 100-108.
In 1983-1984, engineers and scientists at NASA’s Johnson Space Center (JSC), the Jet Propulsion Laboratory (JPL), and Science Applications, Inc. (SAI) performed a detailed Mars Sample Return (MSR) mission study. McDonnell Douglas Aerospace Corporation (MDAC) took SAI’s place on the team in the follow-on study that began in 1985. The 1984 study and its sequel were very different in tone; the first was optimistic about an MSR mission, while its 1986 follow-on questioned the desirability of any further MSR planning. The former was shaped by President Ronald Reagan’s ringing January 1984 call for NASA to build an Earth-orbiting Space Station, the latter by the January 1986 Space Shuttle Challenger accident, which triggered a sweeping reassessment of the U.S. space program. The 1984 study assumed that each MSR mission would need two Space Shuttle launches: one for the hefty MSR spacecraft and the other for a chemical-propellant Centaur G-prime upper stage that would launch the MSR spacecraft out of Earth orbit toward Mars. Centaur G-prime, a variant of the Centaur upper stage in use since the early 1960s, was designed specifically for launch in the Space Shuttle Orbiter’s 15-foot-wide, 60-foot-long payload bay. At the time of the Challenger accident, Centaur G-prime’s maiden flight was scheduled for May 1986. Had the accident not intervened, the first Centaur G-prime would have reached Earth orbit attached to the Galileo Jupiter orbiter and atmosphere probe on board Atlantis, NASA’s newest Orbiter. After departing Atlantis’s payload bay, the stage would have ignited to boost Galileo out of Earth orbit toward Jupiter (image at top of post). The 1984 study’s MSR spacecraft and Centaur G-prime were to be brought together in orbit in either a Shuttle payload bay or a Space Station hangar. Spacecraft and upper stage would be launched separately because the 1984 MSR spacecraft would be too long and heavy for launch on board a Shuttle Orbiter with a Centaur G-prime attached. The 1986 study emphasized size and mass reduction with the aim of launching the MSR spacecraft and its Centaur G-prime stage into Earth orbit on a single Shuttle. This had become the study’s focus, the team explained, because the significance of being able to do the mission in a single shuttle launch has increased. The shuttle is much more expensive to launch than originally expected. . .Even for a large and relatively costly program such as Mars Sample Return, eliminating the expense of a second shuttle launch is significant. The relief to a tight launch schedule with a limited number of orbiters is significant as well. Despite the JPL/JSC/MDAC team’s efforts to keep up with changing times, its work was rendered obsolete even as it was completed. Citing safety considerations in the aftermath of the Challenger accident, NASA cancelled Centaur G-prime in June 1986, a month before the JPL/JSC/MDAC team’s MSR study report saw print. This left NASA planetary missions designed for Shuttle-Centaur G-prime launch with no means of reaching their destinations. Solid-propellant upper stages, planetary gravity assists, and expendable launch vehicles would subsequently replace the Shuttle-Centaur G-prime system in NASA’s planetary mission plans. Obsolescence should not, however, be confused with irrelevance. The 1986 study remains important as a step in the evolution of MSR planning in the 1980s, and it is illustrative of the forces shaping robotic planetary exploration in the same period.
The 1984 MSR study had looked at eight mission design options before arriving at a baseline. The 1986 study arrived at four possible baseline mission designs, three of which showed promise for enabling the MSR spacecraft and Centaur G-prime to launch together on a single Space Shuttle. The 1986 study’s first plan, designated Option A1, was very similar to the 1984 study’s baseline option. A two-part “bent biconic” aeroshell would protect the MSR spacecraft during aerocapture, when the spacecraft would skim through Mars’s atmosphere to slow down so that the planet’s gravity could capture it into Mars orbit. After aerocapture, the aeroshell aft section containing the Mars Orbiter Vehicle (MOV) and Earth Return Vehicle (ERV) would separate. The forward section (the Mars Entry Capsule, or MEC) would fire a rocket to slow down and drop into the atmosphere a second time so that it could aeromaneuver to its landing site. As it neared the site, the Mars Lander Module (MLM) would deploy a parachute and separate from the aeroshell, then would ignite rockets to descend to a soft landing. The 1986 study team’s Option A1 MSR spacecraft had an estimated mass of 8118 kilograms, or 1375 kilograms less than the 1984 baseline spacecraft. A Shuttle carrying a fully fueled Centaur G-prime could tote an additional 7800 kilograms into Earth orbit. The JPL/JSC/MDAC team admitted that Option A1 was “still somewhat too heavy for a single [Shuttle] launch,” and added that, unless “there are substantial technical breakthroughs, it is difficult to see how the mass can be reduced enough to bring it within the single launch range.” The team pointed out, however, that, unlike its 1984 counterpart, the Option A1 MSR spacecraft could fit into a Shuttle payload bay while attached to a Centaur G-prime. Furthermore, spacecraft and stage could reach orbit on board a single Shuttle if the latter were launched with a partial propellant load and “topped off” in orbit at the Space Station or by scavenging liquid oxygen/liquid hydrogen propellants left over in the Shuttle’s External Tank (ET). The latter option assumed that the Shuttle Orbiter would carry the ET into orbit; this would, however, represent a new capability, since normally the ET would be cast off just short of achieving orbital velocity. It also assumed that NASA would develop equipment for scavenging leftover ET propellants. The JPL/JSC/MDAC team’s second option, labelled Option B1, included the only MSR spacecraft light enough (7008 kilograms) to reach Earth orbit on board a Shuttle Orbiter attached to a fully fueled Centaur G-prime stage. The spacecraft would comprise two parts, each packed within a separate bent biconic aeroshell. The smaller aeroshell would carry the MOV and ERV, while the larger would contain the MEC. Upon arrival at Mars, the two aeroshells would separate. The MEC would dive directly into the martian atmosphere, aeromaneuver to its landing site, cast off its aeroshell, and land. The MOV/ERV, meanwhile, would perform aerocapture into Mars orbit. The team noted that packaging the two aeroshells to fit together inside the Shuttle Payload Bay and attaching them to the Centaur G-prime would demand a complex and heavy support structure. Because of this, Option B1, though “promising on paper,” had to be “viewed with some suspicion in terms of both volume and mass.” Option A2 was similar to the mission plan the twin Viking spacecraft followed in 1976.
The MSR spacecraft would ignite a rocket engine to slow down so Mars’s gravity could capture it into orbit, then the MEC lander would separate from the MOV/ERV and fire a rocket to descend into the atmosphere, where, unlike the Vikings, it would aeromaneuver to reach its landing site. At 12,537 kilograms, the Option A2 MSR spacecraft was “by far the most massive of the lot.” With an attached fully fueled Centaur G-prime, it would far exceed the launch capability of a single Shuttle Orbiter. It would, the team reported, be “marginal” even if the attached Centaur G-prime were launched empty and fueled in Earth orbit. The team’s fourth and final option, designated B2, would be similar to the mission plan the Soviet Mars 2 and Mars 3 probes used for their failed landing missions in 1971. The MEC would separate from the MOV/ERV during final approach to Mars and enter the atmosphere directly. As in the other options, it would aeromaneuver to its landing site in a biconic aeroshell. The MOV/ERV, meanwhile, would fire a rocket and enter Mars orbit. The team judged that this concept, though heavier (8672 kilograms) than either Option A1 or B1, could “become very desirable because of the flexibility it allows.” The amount of propellant needed to place the Option B2 MOV/ERV into low circular Mars orbit might, for example, be dramatically reduced through aerobraking. In that scenario, the MOV/ERV would fire a rocket motor to slow down only enough so that Mars’s gravity would capture it into a loosely bound elliptical orbit. It would then skim through the planet’s upper atmosphere repeatedly over a period of weeks to lower and circularize its orbit. In recent years, Mars orbiters have employed this technique to reach their final mapping orbits; Mars Global Surveyor (MGS), which arrived in Mars orbit in September 1997, was the first. After a delay caused by a damaged solar array that threatened to buckle under the strain of aerobraking, MGS reached its mapping orbit in April 1999. The JPL/JSC/MDAC team added to all four of its MSR mission options its chief mass-saving technique: aerocapture at Earth. A 2.2-meter-long, 0.9-meter-wide biconic Earth Aerocapture Capsule (EAC) would replace the 1984 study’s propulsively braked Earth Orbit Capsule. The EAC would travel from Mars orbit to Earth’s vicinity inside a drum-shaped, 3.15-meter-long, one-meter-wide ERV with two solar panel “wings.” It would separate from the ERV and skim through Earth’s upper atmosphere at a height of about 70 kilometers to slow down. After leaving the atmosphere, it would discard its aeroshell to expose a solid rocket motor and solar cells (the latter would power a radio beacon that would aid recovery). When the EAC reached apoapsis (the high point in its orbit), it would fire its rocket to raise its periapsis (the low point of its orbit) above the atmosphere. In addition to saving propellant (hence mass), Earth aerocapture would place the Mars sample in a low circular orbit within reach of an Orbital Maneuvering Vehicle (OMV) remote-controlled from a Shuttle Orbiter or the Space Station. The JPL/JSC/MDAC team then described other mass-saving modifications to the 1984 MSR plan. First, it reduced the mass of the Sample Canister Assembly (SCA) by reducing the size and number of sample vials it could carry. The new SCA would pack nineteen 234-millimeter-long, 30-millimeter-diameter vials into a drum 0.4 meters in diameter and 0.5 meters long.
The narrower, lighter SCA would mean that the 1986 Mars Rendezvous Vehicle (MRV) that would launch it into Mars orbit could be made smaller than its 1984 counterpart (4.8 meters long by 1.8 meters in diameter versus 5.37 meters by 1.84 meters). In a further departure from the 1984 study, the 1986 study’s sample-collecting rover would not carry the SCA; it would instead return to the MRV each time it filled a sample vial and transfer it to the SCA located there. The JPL/JSC/MDAC study team opted for this approach to help to ensure that at least a partial sample could reach Earth in the event of a rover failure before the SCA was filled. Upon arriving back at the lander, the rover would use its robot arm to place individual filled sample vials inside the SCA in the MRV. A robot arm on the MLM would provide redundancy; it would be capable of transferring the vials to the SCA if the rover’s arm malfunctioned, or it could collect a “grab” sample from close by the MLM if the rover failed to collect any samples. Unlike the 1984 MRV, which soon after arriving on Mars would pivot to point its dome-shaped nose at the sky, the 1986 MRV would remain horizontal until just before planned launch. This would enable the rover to load samples directly into the SCA in the recumbent MRV’s nose, eliminating the need for the 1984 MLM’s crane-like SCA Transfer Device. Because the 1986 MRV would be smaller, the MLM could also be smaller. This would permit a shorter, less massive MEC (8.1 meters long versus 12.2 meters in the 1984 design). The team also added a fourth landing leg to improve MLM stability. The 1986 team retained the Mars Orbit Rendezvous scheme of the 1984 study. The MRV would blast the SCA to Mars orbit, then the MOV/ERV would rendezvous and dock with MRV. The MRV would transfer the SCA automatically to the EAC within the ERV, then the MOV/ERV would cast off the MRV. The 1986 MOV would, the team reported, have an “unconventional” design. A compact assemblage of propellant and pressurant tanks affixed to a rectangular box would replace the 1984 MOV’s tidy hexagonal drum. This would reduce the MOV’s length from 4.5 meters to 2.8 meters. The ERV, with four solid-rocket motors for Mars orbit departure, would nest inside the box, further limiting length. Together these steps would contribute toward an MSR spacecraft design short enough to fit inside the Shuttle Orbiter Payload Bay while attached to a Centaur G-prime. The JPL/JSC/MDAC team concluded its report by proposing possible follow-on study areas. Before it did, however, it noted that Mars mission planning was “somewhat uncertain at the moment” because of the National Commission on Space (NCOS) planning effort. The NCOS exercise, led by former NASA Administrator Thomas Paine, was a congressionally mandated Reagan Administration effort aimed at giving NASA long-term goals. Pending completion of the NCOS report and “the official reaction” to its recommendations, the team wrote that it seems of little utility to indulge in yet another year of system studies of the Mars Sample Return mission, a subject that has already been most thoroughly studied. Until a strategy for Mars exploration becomes clear, such studies. . .may not be particularly useful. If the nation chooses to pursue. . .an early manned mission. . .there is little reason and, probably, inadequate time to carry out an unmanned sample return first. 
On the other hand, if a more deliberate pace is chosen, which pushes a manned [Mars] mission past the first decade of the next century, then the [MSR] mission is much more attractive. . .
Mindful of this uncertainty, the team proposed that JPL work with JSC on strategies and technologies “supportive of both manned and unmanned Mars exploration,” and that JSC study piloted Mars missions and Mars sample collection and handling. It wrote that JPL study areas might include manufacture of propellants on Mars from resources found there, aerocapture/aeromaneuver analysis, laser ranging for Mars Orbit Rendezvous maneuvers, and rover guidance and navigation on Mars’s surface. The team cautioned, however, that these technology development activities would depend “upon a resolution of funding issues.”
Six months after the JPL/JSC/MDAC MSR study report saw print, the NASA-sponsored Mars Study Team (MST) completed a report calling for an international Mars Rover Sample Return (MRSR) mission. The MST, which included many scientists who had participated in the 1984-1986 MSR studies, envisioned that the U.S. would contribute the mission’s sophisticated rover. Six months after that, the high-profile Ride Report threw a bright spotlight on MRSR. Though funding issues remained, the MRSR concept moved to the center of NASA planning for robotic Mars missions.
Reference: Mars Sample Return Mission 1985 Study Report, JPL D-3114, James R. French, JPL Study Leader, and Douglas P. Blanchard, JSC Study Leader, NASA Jet Propulsion Laboratory, 31 July 1986.
Beyond Apollo chronicles space history through missions and programs that didn’t happen. Comments are encouraged. Off-topic comments might be deleted.
This Solving Linear Inequalities worksheet also includes:
- Answer Key
Walk the class through the steps of how to evaluate linear inequalities in one variable and graph the solution set. Define and discuss key vocabulary terms, then have individuals work problems of varying difficulty. Included are word problems and compound inequalities.
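As a quick illustration of the kind of problem covered (this example is not taken from the worksheet itself), here is a compound inequality solved step by step:

```latex
% Requires amsmath. Illustrative example only; not an item from the worksheet.
\begin{align*}
-3 &< 2x + 1 \le 7 \\
-4 &< 2x \le 6      && \text{(subtract 1 from each part)} \\
-2 &< x \le 3       && \text{(divide each part by 2)}
\end{align*}
```

The solution set is the half-open interval (-2, 3]; on a number line it is graphed with an open circle at -2, a closed circle at 3, and shading in between.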
How do we use piano chords?
Piano chords are groups of notes that form the foundation of the melody and harmony of a musical piece. All the notes in the group are played at the same time. A chord may be held for one or more bars, or it may change within a bar, depending on the style of the song. The most important requirement is that chords must be in sync with the rhythm of the song.
Position of the hands
The best position for playing chords is one in which the wrists are almost horizontal and the hands are slightly cupped so that the fingertips touch the notes, fingernails down. This makes it easier to strike the notes all at once.
What notes do we play?
The basic chord is the tonic triad, which consists of the first (root), third, and fifth notes. The root identifies the key. For example:
In the key of C, the tonic triad consists of C, E and G.
In the key of G, the tonic triad consists of G, B and D.
In the key of F, the tonic triad consists of F, A and C.
Are there chords other than the tonic triad?
Yes. In the key of C, any of the notes can be altered to give a different sound. If the E is changed to E flat, we get a sad ‘minor key’ sound. If the G is changed to G sharp, we say the chord has an augmented fifth, which gives yet another sound.
How do we know what chords to play?
Many people are able to play by ear through experience with chords. Others use music sheets to guide them through a song. After practicing with music sheets, they can usually play through the chord progressions from memory.
1. The music sheet can consist of piano notes and lyrics, with the chords shown above the stave as seen below. In the above sheet, notice the line where the lyrics begin. You play the C chord before the first syllable ‘it’s’, and where the lyrics say ‘funny’ it is time to strike another chord, F maj7, on the syllable ‘fun…’ The next chord, G/B, is played before the word ‘this’; the next, E minor, coincides with ‘side’, and so on. From sheet music we can accompany our singing, assisted by the chords. Listen to how it sounds in the song by Elton John and hear him strike the chords at the right time.
2. If sheet music is not available, we can play using a sheet with just lyrics and the chords inserted exactly above where each chord is to be played. The first line contains the chords of the introduction, and the first chord (C) of the first verse of this hymn starts on ‘Great’, the next one (G) on the syllable ‘faith’, and so on. The chords of each verse are the same and are played in the same progression, so once the first verse and chorus are worked out, playing chords for the rest of the song is very easy.
3. Lead sheets contain just the melody line and chords, and chord charts contain chords only; these are sometimes preferred.
Do you play only notes from sheet music, or have you ever tried playing chords? Have you ever tried to play chords, guided by a music sheet, lead sheet, chord chart or by ear? Which method do you find easiest, and why?
Learn more: https://mypianohobby.com
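As a small illustration of the triad construction described above (this sketch is not from the article), the same chords can be generated by counting semitones from the root: four semitones up for the major third, seven for the perfect fifth, three for the flattened (minor) third, and eight for the sharpened (augmented) fifth.

```python
# Illustrative sketch: build the triads described above by counting semitones
# from the root. Note names use sharps only, so E flat appears here as D#.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Semitone intervals above the root for each chord quality.
INTERVALS = {
    "major":     (0, 4, 7),   # e.g. C  E  G
    "minor":     (0, 3, 7),   # flattened third, e.g. C  Eb G
    "augmented": (0, 4, 8),   # raised fifth, e.g. C  E  G#
}

def triad(root: str, quality: str = "major") -> list[str]:
    """Return the three notes of a triad built on `root`."""
    start = NOTES.index(root)
    return [NOTES[(start + step) % 12] for step in INTERVALS[quality]]

if __name__ == "__main__":
    print(triad("C"))             # ['C', 'E', 'G']
    print(triad("G"))             # ['G', 'B', 'D']
    print(triad("F"))             # ['F', 'A', 'C']
    print(triad("C", "minor"))    # ['C', 'D#', 'G']  (D# is the enharmonic of Eb)
    print(triad("C", "augmented"))# ['C', 'E', 'G#']
```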
July 11, 2007 Edition
Summer is a great time for learning
By Melanie Spence
Summer is here! A time to put school and learning to the side, right? Not exactly. Research says that one of the most powerful tools to help your child be a proficient reader is to have a wide range of schema, or background knowledge. Learning is built upon learning. Education experts would agree that what a child brings to the page is actually more important than the text itself. What can you do to build your child's background knowledge? A few ideas are listed below:
- Take nature walks. A child can learn the most about science and nature by being immersed in it.
- Splash! Children need to understand the difference between lakes, rivers, ponds, bays and oceans. Seeing pictures in a book can be helpful, but actually experiencing the water will be more memorable.
- Travel. Travel by canoe, car, boat, train or plane. Each vessel brings with it new scenery.
- Read everything! Advertisements, labels, magazines, how-to manuals and various books are excellent sources for analyzing points of view or purpose.
- Talk. In the early years, a child's oral language development is the number one indicator of how well he/she will read. Having meaningful conversation with your child will help develop this essential skill.
- Cook up a treat. Cooking and baking are life skills that involve reading, math and science. Sprinkle in a little history and art with various cultural recipes.
- Play ball! Physical activity will not only benefit your child's health, but the brain learns best when it balances new learning built upon background knowledge, emotion and physical movement.
- Step into new territory. Look around your community and neighboring communities. Visit a new store, restaurant or church.
- Lend a helping hand and listening ear. Visit the hospitals and nursing homes. Children can learn much from their elders.
- Try a new skill. Dance, roller-skate, join a sport, plant a tree or plant a garden. Encourage your child to try anything ... at least once for the experience.
Be assured that the experiences you create with your child today will be the connection for future learning.
(Note: This series of columns will be written by different employees of the NEA Educational Cooperative to inform the public about the services provided by the co-op. Melanie Spence is the literacy specialist at the co-op.)
The Baroque dance period applies to a period of approximately 100 years from around 1650. Originating in the court of the French king, Louis XIV, this new style of dancing was highly popular in fashionable society. Upon his restoration to the English throne in 1660, Charles II introduced these dances to the English court. The new Baroque style of dancing involved intricate footwork and complex floor patterns. Popular in theatrical settings, baroque dancing is the forerunner of the classical ballet. These dances were not easily learned and as a result, dancing masters were soon in great demand in European courts. It was not unusual that professionals were employed to dance alongside guests at social gatherings. Early in the seventeenth century, many dances were formally documented, for example in the Beauchamp-Feuillet notation, with later descriptions not only of the steps and accompanying arm movements, but also with explanations of the shape of the floor patterns and detailed instructions for fitting bends and risings, or jumps and landings, to the appropriate beats in the music. During the baroque dance period, social dances were often performed by one couple at a time with the orientation designed for optimal viewing by the highest ranking person known as “the presence”. Later in the century, these difficult dances were superseded by contredances (known as cotillon) that were expanded and simplified to accommodate more participants. As a result, the complex figures performed by a single couple might be replaced by four couples dancing in a square or in longways sets involving multiple couples. During the eighteenth century, numerous collections of dancing instructions were published along with the music, which enable us to interpret and perform these beautiful and stately dances three hundred years later.
The Caucasus Front or Caucasus Campaign is a term describing the "contested armed frontier" between lands controlled by the Russian Empire and the Ottoman Empire during WWI. In Russian historical literature, it is typically considered a separate theater of the Great War, whereas Western sources tend to view it as one of the campaigns of the Middle Eastern theater. The front extended from the Caucasus to Eastern Anatolia and Iran, reaching as far as Trabzon, Bitlis, Mus and Van in the west and Tabriz in the east. The land warfare was accompanied by attacks of the Russian navy in the Black Sea region of the Ottoman Empire. The Russian advance on the Caucasus front was halted in 1917 by the Russian Revolution, and the Russian forces at the front line were replaced by the forces of the newly established Democratic Republic of Armenia (DRA), comprising the Armenian volunteer units and the Armenian irregular units. Along with Germany, the Ottoman Empire signed the Treaty of Brest-Litovsk with Russia, formally recognizing Ottoman control of Ardahan, Kars, and Batum. The subsequent brief war between the Ottoman Empire and the DRA resulted in the latter's defeat and the signing of the Treaty of Batum. However, the effects of this arrangement were voided a few months later, when the Ottoman Empire accepted its own defeat in World War I by signing the Armistice of Mudros.
Lupus is classified as an autoimmune disease that affects many parts of the body, such as the skin, internal organs, and joints. Lupus is chronic, meaning that the disease usually lasts longer than six weeks and, in most cases, for years. Lupus affects the immune system (the system that fights off bacteria, germs, and viruses). Under normal conditions, our bodies produce antibodies that protect us against different types of foreign invaders. "Autoimmune" describes a condition in which the body's immune system is unable to tell the difference between foreign and healthy matter within our bodies. The prefix "auto" means "self," referring to the body producing antibodies that attack healthy organ tissue. This in turn damages many different parts of the body and creates pain and inflammation in the process. Here are some informative facts concerning lupus:
- Lupus is known to be a disease of flares and remissions (switching between feeling ill and feeling healthy). Lupus symptoms can range from mild to life-threatening, and all forms should be treated by a doctor. With the proper treatment, those with lupus can lead a relatively normal life.
- Lupus isn't contagious by any means whatsoever.
- Lupus and cancer are totally unrelated. Cancer is a disease composed of abnormal tissue that grows and spreads quickly; lupus is an autoimmune disease, as previously discussed.
- Lupus is unrelated to HIV (Human Immunodeficiency Virus) or AIDS (Acquired Immunodeficiency Syndrome). In HIV or AIDS the immune system is underactive, while lupus causes the immune system to be overactive.
- Research states that approximately 1.5 million Americans have lupus. The actual number may be higher or lower, as there is insufficient data to prove how many people are living with the disease.
- It's estimated that 5 million people around the globe have some form of lupus.
- Lupus is found mostly in women between the ages of 15 and 44, although children, men, and teenagers have been found to have lupus also.
- It has been found that women of color are two to three times as likely to develop lupus.
- All races and ethnic groups can develop lupus.
- At least 16,000 new cases of lupus are reported each year across the U.S.
The most common symptoms of lupus are:
- oversleeping and extreme fatigue
- swollen or painful joints
- anemia (low numbers of red blood cells or hemoglobin, or low total blood volume)
- swelling (edema) in the feet, legs, hands, and around the eyes
- pain in the chest on deep breathing (pleurisy)
- a butterfly-shaped rash across the cheeks and nose
- sun or light sensitivity (photosensitivity)
- hair loss
- abnormal blood clotting
- fingers turning white and/or blue when cold (Raynaud's phenomenon)
- mouth or nose ulcers
Lupus treatment depends on the form of lupus and the symptoms involved. Those with a mild to moderate form of lupus should see a rheumatologist. Rheumatologists specialize in diseases of the muscles and joints. If the lupus affects the kidneys or other internal organs, one will need to contact a nephrologist. Nephrologists specialize in diseases of the renal system. If the lupus causes rashes or lesions, a dermatologist is recommended for treatment. A dermatologist specializes in diseases that attack the skin (including the mouth and head). Other specialists, such as a cardiologist, neurologist, or perinatologist, may be needed depending on which part of the body is affected by lupus.
These doctors specialize, respectively, in the heart, the brain and nervous system, and high-risk pregnancies in women. For treating lupus naturally, some consider using South African pennywort or the root of Tripterygium wilfordii (which may cause sperm reduction and cessation of menstrual periods). For homeopathic treatment, some use Cistus canadensis or Thuja for skin irritations, and Nux vomica, which has been reported to show an 80% success rate in the treatment of lupus. Extreme stress levels can cause autoimmune diseases to flare up, so it's beneficial to lead a relaxed lifestyle in order to alleviate lupus symptoms (practice meditation, yoga, etc.). An interesting fact: Lady Gaga has revealed that she has a form of lupus and seems to be doing quite well for herself.
Important: The following conventions are adopted while writing a unit.
(1) Even if a unit is named after a person, the unit is not written with capital letters, i.e. we write joules, not Joules.
(2) For a unit named after a person the symbol is a capital letter, e.g. for joules we write 'J'; the rest are lowercase letters, e.g. seconds is written as 's'.
(3) The symbols of units do not have a plural form, i.e. 70 m, not 70 ms, and 10 N, not 10 Ns.
(4) Not more than one solidus ('/') is used, i.e. all units of the numerator are written together before the '/' sign and all those in the denominator after it: it is 1 m s⁻² or 1 m/s², not 1 m/s/s.
(5) Punctuation marks are not written after the unit, e.g. 1 litre = 1000 cc, not 1000 c.c.
It has to be borne in mind that the SI system of units is not the only system of units followed all over the world. There are some countries (though they are very few in number) which use a different system of units, for example the FPS (Foot Pound Second) system or the CGS (Centimetre Gram Second) system.
Dimensions
The unit of any derived quantity depends upon one or more fundamental units. This dependence can be expressed with the help of the dimensions of that derived quantity. In other words, the dimensions of a physical quantity show how its unit is related to the fundamental units.
To express dimensions, each fundamental unit is represented by a capital letter. Thus the unit of length is denoted by L, the unit of mass by M, the unit of time by T, the unit of electric current by I, the unit of temperature by K and the unit of luminous intensity by C. Remember that speed will always remain distance covered per unit of time, whatever the system of units, so the complex quantity speed can be expressed in terms of length L and time T. We then say that the dimensional formula of speed is LT⁻¹. We can relate physical quantities to each other (usually we express complex quantities in terms of base quantities) by a system of dimensions.
The dimensions of a physical quantity are the powers to which the fundamental quantities must be raised to represent the given physical quantity.
Example: The density of a substance is defined as the mass contained in unit volume of the substance. Hence, [density] = [mass]/[volume] = M/L³ = ML⁻³. So the dimensions of density are 1 in mass, -3 in length and 0 in time, and the dimensional formula of density is written as [ρ] = ML⁻³T⁰.
It is to be noted that constants such as ½ or π, and trigonometric functions such as "sin ωt", have no units or dimensions because they are numbers or ratios, which are also numbers.
Units and dimensions are important from the IIT JEE perspective. Objective questions are framed on this section. AIEEE definitely has 1-2 questions every year directly on these topics. Sometimes IIT JEE and AIEEE do not ask questions on units and dimensions directly, but they change units and involve indirect application. So it is very important to master these concepts at an early stage, as this forms the basis of your preparation for IIT JEE and AIEEE Physics.
At askIITians we provide you free study material on units and dimensions so that you get all the professional help needed to get through IIT JEE and AIEEE easily. AskIITians also provides live online IIT JEE preparation and coaching where you can attend our live online classes from your home!
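As one further worked example in the same spirit as the density calculation (added here for illustration; it is not part of the original passage), consider force, defined through Newton's second law F = ma:

```latex
% Dimensions of force from Newton's second law, F = m a:
[F] = [m][a] = M \cdot \frac{L}{T^{2}} = MLT^{-2}
% So force has dimensions 1 in mass, 1 in length and -2 in time,
% and its SI unit, the newton, is 1 kg m s^{-2}.
```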
It’s an observation repeated loud and often by climate deniers: while Greenland melts like an ice lolly in the summer sun, Antarctica is staying chill. Parts of the coastline are actually gaining sea ice. According to a new scientific paper, there’s no conspiracy to be found here, but rather, a manifestation of global ocean currents. In fact, the waters surrounding Antarctica could be the last place on Earth to feel the burn of global warming. Like a vast conveyor belt, the ocean transports energy in the form of heat across our planet. In places like northwest Europe, currents acts as a radiator, delivering heat from elsewhere. In other parts of the world, the ocean behaves more like an AC unit, siphoning extra temperature away. In stable periods, ocean currents help regulate our planet’s climate, but when the climate is changing, this same conveyor belt can amplify differences. Case in point: the north and south poles. Nowhere is the impact of global warming more visible than the the Arctic, where ice is retreating further and further each summer. Meanwhile at the other end of the world, the vast Southern Ocean encircling Antarctica has barely warmed up at all. Even as scientists warn of the dire consequences of West Antarctic ice sheet collapse, sea ice has been growing along parts of the Antarctic coastline. A study appearing in this week’s Nature Geoscience may offer an explanation. Combining observational data from Argo floats and satellites with global circulation models, a team led by the University of Washington and MIT showed that as the surface of the Southern Ocean heats up, warm water is blown northward by currents. It’ll eventually end up around the north pole. At the same time, new water is entering the Southern Ocean from the coldest and deepest basins on Earth. This deep water represents the tail-end of an incredibly slow-moving oceanic conveyor belt that begins centuries earlier in the North Atlantic. In other words, the sea water encircling Antarctica hails from a time before the industrial revolution or human-caused climate change were even a thing. “The Southern Ocean is unique because it’s bringing water up from several thousand meters [as much as 2 miles],” lead study author Kyle Armour said in a statement. “You have a lot of water coming to the surface, and that water hasn’t seen the atmosphere for hundreds of years.” Armour stressed that his study did not directly address warming on the Antarctic ice sheet per se, which is most affected by the coastal waters lapping up directly on the shoreline. “The mechanism we’ve identified slows the warming of the open ocean around Antarctica—but not these coastal waters directly,” he told Gizmodo in an email. Indeed, continental Antarctica and much of the coastline have been warming at a rate more comparable to the rest of the world. The growth of sea ice surrounding Antarctica, however, may be facilitated by Armour’s mechanism of deep cold water. He’s also investigating other factors that could be contributing. “So far, we think [sea ice expansion] may be related to changes in the winds around Antarctica, which have been linked to stratospheric ozone depletion,” he said. The study adds to a growing stack of research demonstrating the power of the oceans to shuffle the heat from human-cause climate change around, in some ways masking its impacts. But as our technology grows more sophisticated, so does our ability to shine a light in the darkest corners of our planet. 
As we do, it’s becoming clear that no places are going to be immune to what’s happening in the atmosphere.
This Handling Data: African Animal Maths lesson plan also includes: Handling and processing data is a big part of what real scientists do. Provide a way for your learners to explore graphs and data related to the animals that live on the African savannah. They begin their analysis by discussing what they know about animals of the savannah, they complete a worksheet, and then use a fact sheet to fill in a data table. They use their tables to construct bar charts on the computer.
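As a rough sketch of that final charting step (the animal counts below are invented placeholders, not values from the lesson's fact sheet, and the lesson itself may use spreadsheet software rather than code), a data table can be turned into a bar chart in a few lines of Python:

```python
# Plot a simple bar chart from a data table of savannah animals.
# The counts below are made up purely for illustration.
import matplotlib.pyplot as plt

data = {
    "Lion": 20,
    "Elephant": 35,
    "Zebra": 120,
    "Giraffe": 45,
}

plt.bar(list(data.keys()), list(data.values()), color="tan")
plt.xlabel("Animal")
plt.ylabel("Number counted")
plt.title("African savannah animal counts (example data)")
plt.tight_layout()
plt.savefig("savannah_bar_chart.png")  # or plt.show() for an on-screen chart
```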
Chapter 2: Semiconductor Fundamentals Recombination of electrons and holes is a process by which both carriers annihilate each other: electrons occupy - through one or multiple steps - the empty state associated with a hole. Both carriers eventually disappear in the process. The energy difference between the initial and final state of the electron is released in the process. This leads to one possible classification of the recombination processes. In the case of radiative recombination, this energy is emitted in the form of a photon. In the case of non-radiative recombination, it is passed on to one or more phonons and in the case of Auger recombination it is given off in the form of kinetic energy to another electron. Another classification scheme considers the individual energy levels and particles involved. These different processes are further illustrated with Figure 2.8.1. |Figure 2.8.1 :||Carrier recombination mechanisms in semiconductors| Band-to-band recombination occurs when an electron moves from its conduction band state into the empty valence band state associated with the hole. This band-to-band transition is typically also a radiative transition in direct bandgap semiconductors. Trap-assisted recombination occurs when an electron falls into a "trap", an energy level within the bandgap caused by the presence of a foreign atom or a structural defect. Once the trap is filled it cannot accept another electron. The electron occupying the trap, in a second step, moves into an empty valence band state, thereby completing the recombination process. One can envision this process as a two-step transition of an electron from the conduction band to the valence band or as the annihilation of the electron and hole, which meet each other in the trap. We will refer to this process as Shockley-Read-Hall (SRH) recombination. Auger recombination is a process in which an electron and a hole recombine in a band-to-band transition, but now the resulting energy is given off to another electron or hole. The involvement of a third particle affects the recombination rate so that we need to treat Auger recombination differently from band-to-band recombination. Each of these recombination mechanisms can be reversed leading to carrier generation rather than recombination. A single expression will be used to describe recombination as well as generation for each of the above mechanisms. In addition, there are generation mechanisms, which do not have an associated recombination mechanism, such as generation of carriers by light absorption or by a high-energy electron/particle beam. These processes are referred to as ionization processes. Impact ionization, which is the generation mechanism associated with Auger recombination, also belongs to this category. The generation mechanisms are illustrated with Figure 2.8.2. |Figure 2.8.2 :||Carrier generation due to light absorption and ionization due to high-energy particle beams| Carrier generation due to light absorption occurs if the photon energy is large enough to raise an electron from the valence band into an empty conduction band state, thereby generating one electron-hole pair. The photon energy needs to be larger than the bandgap energy to satisfy this condition. The photon is absorbed in this process and the excess energy, Eph - Eg, is added to the electron and the hole in the form of kinetic energy. 
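As a brief numerical aside (not part of the original chapter; the silicon band gap figure of about 1.12 eV is a standard value assumed here), the absorption condition Eph ≥ Eg translates into a maximum wavelength:

```latex
% Band-to-band absorption requires E_{ph} = h\nu = hc/\lambda \ge E_g.
% Taking E_g \approx 1.12\ \text{eV} for silicon (an assumed standard value):
\lambda \le \frac{hc}{E_g} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{1.12\ \text{eV}} \approx 1.1\ \mu\text{m}
% Visible photons (roughly 400-700 nm, i.e. about 3.1 down to 1.8 eV) therefore
% carry more than enough energy to create electron-hole pairs in silicon.
```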
Carrier generation or ionization due to a high-energy beam consisting of charged particles is similar except that the available energy can be much larger than the bandgap energy so that multiple electron-hole pairs can be formed. The high-energy particle gradually loses its energy and eventually stops. This generation mechanism is used in semiconductor-based nuclear particle counters. As the number of ionized electron-hole pairs varies with the energy of the particle, one can also use such detector to measure the particle energy. Finally, there is a generation process called impact ionization, the generation mechanism that is the counterpart of Auger recombination. Impact ionization is caused by an electron/hole with an energy, which is much larger/smaller than the conduction/valence band edge. The detailed mechanism is illustrated with Figure 2.8.3. |Figure 2.8.3:||Impact ionization and avalanche multiplication of electrons and holes in the presence of a large electric field.| The excess energy is given off to generate an electron-hole pair through a band-to-band transition. This generation process causes avalanche multiplication in semiconductor diodes under high reverse bias: As one carrier accelerates in the electric field it gains energy. The kinetic energy is given off to an electron in the valence band, thereby creating an electron-hole pair. The resulting two electrons can create two more electrons which generate four more causing an avalanche multiplication effect. Electrons as well as holes contribute to avalanche multiplication. 2.8.1. Simple recombination-generation model A simple model for the recombination-generation mechanisms states that the recombination-generation rate is proportional to the excess carrier density. It acknowledges the fact that no net recombination takes place if the carrier density equals the thermal equilibrium value. The resulting expression for the recombination of electrons in a p-type semiconductor is given by: and similarly for holes in an n-type semiconductor: where the parameter t can be interpreted as the average time after which an excess minority carrier recombines. We will show for each of the different recombination mechanisms that the recombination rate can be simplified to this form when applied to minority carriers in a "quasi-neutral" semiconductor. The above expressions are therefore only valid under these conditions. The recombination rates of the majority carriers equals that of the minority carriers since in steady state recombination involves an equal number of holes and electrons. Therefore, the recombination rate of the majority carriers depends on the excess-minority-carrier-density as the minority carriers limit the recombination rate. Recombination in a depletion region and in situations where the hole and electron density are close to each other cannot be described with the simple model and the more elaborate expressions for the individual recombination mechanisms must be used. 2.8.2. Band-to-band recombination Band-to-band recombination depends on the density of available electrons and holes. Both carrier types need to be available in the recombination process. Therefore, the rate is expected to be proportional to the product of n and p. Also, in thermal equilibrium, the recombination rate must equal the generation rate since there is no net recombination or generation. 
As the product of n and p equals ni² in thermal equilibrium, the net recombination rate can be expressed as Ub-b = b(np - ni²), where b is the bimolecular recombination constant.
2.8.3. Trap-assisted recombination
The net recombination rate for trap-assisted recombination is given by an expression whose derivation is beyond the scope of this text. This expression can be further simplified in the limits p >> n and n >> p.
2.8.4. Surface recombination
Recombination at surfaces and interfaces can have a significant impact on the behavior of semiconductor devices. This is because surfaces and interfaces typically contain a large number of recombination centers because of the abrupt termination of the semiconductor crystal, which leaves a large number of electrically active states. In addition, the surfaces and interfaces are more likely to contain impurities since they are exposed during the device fabrication process. The net recombination rate due to trap-assisted recombination and generation at a surface is almost identical in form to that of Shockley-Read-Hall recombination. The only difference is that the recombination is due to a two-dimensional density of traps, Nts, as the traps only exist at the surface or interface. This expression can be further simplified for minority carriers in a quasi-neutral region. For instance, for electrons in a quasi-neutral p-type region, p >> n and p >> ni, so that for Ei = Est it reduces to an expression written in terms of the surface recombination velocity, vs.
2.8.5. Auger recombination
Auger recombination involves three particles: an electron and a hole, which recombine in a band-to-band transition and give off the resulting energy to another electron or hole. The expression for the net recombination rate is therefore similar to that of band-to-band recombination but includes the density of the electrons or holes which receive the released energy from the electron-hole annihilation; it contains two terms, corresponding to the two possible mechanisms (energy given to an electron or to a hole).
2.8.6. Generation due to light
Carriers can be generated in semiconductors by illuminating the semiconductor with light. The energy of the incoming photons is used to bring an electron from a lower energy level to a higher energy level. In the case where an electron is removed from the valence band and added to the conduction band, an electron-hole pair is generated. A necessary condition is that the energy of the photon, Eph, is larger than the bandgap energy, Eg. As the energy of the photon is given off to the electron, the photon no longer exists. If each absorbed photon creates one electron-hole pair, the electron and hole generation rates equal the number of photons absorbed per unit volume per unit time, where α is the absorption coefficient of the material at the energy of the incoming photon. The absorption of light in a semiconductor causes the optical power to decrease with distance; this effect is described mathematically by dP(x)/dx = -αP(x). The calculation of the generation rate of carriers therefore requires first a calculation of the optical power within the structure, from which the generation rate can then be obtained using (2.8.12).
Example 2.11: Calculate the electron and hole densities in an n-type silicon wafer (Nd = 10¹⁷ cm⁻³) illuminated uniformly with 10 mW/cm² of red light (Eph = 1.8 eV). The absorption coefficient of red light in silicon is 10⁻³ cm⁻¹. The minority carrier lifetime is 10 ms.
The generation rate of electrons and holes equals the absorbed photon flux per unit volume, with the photon energy converted into joules for the evaluation.
The excess carrier densities are then obtained by multiplying the generation rate by the minority carrier lifetime, so that the electron and hole densities equal their thermal equilibrium values plus this excess; the arithmetic is worked through in the sketch below.
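A numerical sketch of Example 2.11, using the values exactly as quoted above; it assumes the simple model of section 2.8.1 (steady-state excess density equals generation rate times minority carrier lifetime) and takes ni = 10¹⁰ cm⁻³ for silicon at room temperature, a standard value not given in the example:

```python
# Reproduce the arithmetic of Example 2.11 from the values quoted in the text.
# Assumes uniform generation, the simple recombination model (delta_n = g * tau),
# and n_i = 1e10 cm^-3 for silicon at room temperature (assumed, not quoted above).

q = 1.602e-19          # elementary charge, C (used to convert eV to J)

alpha = 1e-3           # absorption coefficient, cm^-1 (as printed in the example)
P_opt = 10e-3          # illumination, W/cm^2
E_ph  = 1.8 * q        # photon energy, J
tau   = 10e-3          # minority carrier lifetime, s
N_d   = 1e17           # donor density, cm^-3
n_i   = 1e10           # intrinsic carrier density, cm^-3

# Generation rate: each absorbed photon creates one electron-hole pair.
g = alpha * P_opt / E_ph          # pairs per cm^3 per second
delta_n = g * tau                 # excess carrier density, cm^-3

n = N_d + delta_n                 # electrons: majority carriers, barely changed
p = n_i**2 / N_d + delta_n        # holes: equilibrium minority density plus excess

print(f"g       = {g:.2e} cm^-3 s^-1")   # roughly 3.5e13
print(f"delta_n = {delta_n:.2e} cm^-3")  # roughly 3.5e11
print(f"n       = {n:.2e} cm^-3")        # roughly 1.0e17
print(f"p       = {p:.2e} cm^-3")        # roughly 3.5e11
```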
Compared to our other senses, scientists don't know much about how our skin is wired for the sensation of touch. Now, research reported in the December 23rd issue of the journal Cell, a Cell Press publication, provides the first picture of how specialized neurons feel light touches, like a brush of movement or a vibration, are organized in hairy skin. Looking at these neurons in the hairy skin of mice, the researchers observed remarkably orderly patterns, suggesting that each type of hair follicle works like a distinct sensory organ, each tuned to register different types of touches. Each hair follicle sends out one wire-like projection that joins with others in the spinal cord, where the information they carry can be integrated into impulses sent to the brain. This network of neurons in our own skin allows us to perceive important differences in our surroundings: a raindrop versus a mosquito, a soft fingertip versus a hard stick. "We can now begin to appreciate how these hair follicles and associated neurons are organized relative to one another and that organization enables us to think about how mechanosensory information is integrated and processed for the perception of touch," says David Ginty of The Johns Hopkins University School of Medicine. Mice have several types of hair follicles with three in particular that make up their coats. Ginty's team made a technical breakthrough by coming up with a way to label distinct populations of known low-threshold mechanoreceptors (LTMRs). Before this study, there was no way to visualize LTMRs in their natural state. The neurons are tricky to study in part because they extend from the spinal cord all the way out to the skin. The feeling in the tips of our toes depends on cells that are more than one meter long. The images show something unexpected and fascinating, Ginty says. Each hair follicle type includes a distinct combination of mechanosensory endings. Those sensory follicles are also organized in a repeating and stereotypical pattern in mouse skin. The neurons found in adjacent hair follicles stretch to a part of the spinal cord that receives sensory inputs, forming narrow columns. Ginty says there are probably thousands of those columns in the spinal cord, each gathering inputs from a particular region of the skin and its patch of 100 or so hairs. Of course, we don't have hair like a mouse, and it's not yet clear whether some of these mechanosensory neurons depend on the hairs themselves to pick up on sensations and whether others are primarily important as scaffolds for the underlying neural structures. They don't know either how these inputs are integrated in the spinal cord and brain to give rise to perceptions, but now they have the genetic access they need to tinker with each LTMR subtype one by one, turning them on or off at will and seeing what happens to the brain and to behavior. Intriguingly, one of the LTMR types under study is implicated as "pleasure neurons" in people, Ginty notes. At this point, he says they have no clue how these neurons manage to set themselves up in this way during development. The neurons that form this sensory network are born at different times, controlled by different growth factors, and "yet they assemble in these remarkable patterns." And for Ginty that leads to a simple if daunting question to answer: "How does one end of the sensory neuron know what the other end is doing?"
An introduction to the phonemic chart - taken from the November 2000 Newsletter The phonemic chart has been around for quite a while & in the not too distant past the chart constituted the sum of pronunciation in the classroom. We used to spend lots of time teaching the sounds & trying to iron out difficulties with minimal pairs. In recent times we have come to realise that phonemics is only a part of pronunciation development. Phonology can be seen in terms of suprasegmental aspects & segmental aspects. The former includes intonation & the latter, the bits, the individual sounds. In terms of communicative effectiveness the suprasegmental aspects are vital for successful communication - get the intonation wrong & there could be a breakdown. As to the segmental side of things, the context will probably sort out any sound problems. So the time we used to spend on sounds has diminished & increased on intonational aspects because of this recognition of what is more useful. That's the theory anyway. I suspect that not much has really changed. Working with sounds is relatively easy when compared with working on intonation - there are some that say that you can't teach intonation. It is difficult as much of it is very much context-bound & intuitive. The phonemic chart is easily definable & teachable - safe for both the learner & the teacher. Moreover there are still very solid reasons for dealing with the phonemic - it helps students perceive the differences between sounds - it helps in the overall awareness of phonology - it helps the teacher anticipate some problems - it helps when used as a reference for correction - it helps with sound/spelling difficulties - it is a valuable study aid used in dictionaries & coursebooks thereby encouraging learner independence - it helps with the recording of vocabulary In this section so far is the phonemic chart, a key to the different sounds & a chart with the voice & unvoiced sounds marked. There are also two linked pages of phonemic activities, some of which are mentioned below, as well as a page which gives an introduction to some features of sounds in combination. It took me a long time to get around to learning the chart. After several years of existing in the classroom guiltily without it, I was fortunate enough to have someone to teach it to me. I found that I didn't need to remember the word that highlighted the sound as it was easier to learn the sounds in relation to each other. I have put example words below the chart on the site for those of you who are on your own but if you do have a colleague who knows the sounds then get them to teach you. It shouldn't take long. Another approach is to learn the sounds gradually as you introduce them to your students. Here are a few sounds gradually. If you go straight into teaching the whole chart you'll overload the students, demotivate them & put them off any future development. Begin with the schwa & expand with the monophthongs as they crop up in different contexts. With a beginner class on day one you can introduce the schwa - highlight it in the vocabulary you introduce & work on production from the start. Use the easily identifiable consonant sounds in conjunction with the vowel sounds. Review the sounds you have covered with short warmer, filler & cooler activities. 2.Work on recognition first - the students have to be able to actually hear the sound. Then move to discrimination so they can tell the difference between the sound you are looking at & other similar sounds. 
After this you can safely move to production. The message here is a lot of listening. 3.Use mouth visuals to show what is happening when the sounds are made. You can use your hands or pictures to show what is happening to the tongue & lips. to remind your students of a sound e.g. mime showing a baby in your arms for the sound 5.Use the sounds & the phonemic chart as a reference for corrective work & be aware that your students are going to have problems & that it will take time to overcome them. Explain this to your students & ask them to be patient. voiced & unvoiced distinction early on. Here are three ways to help students differentiate: a. Put your hands over your ears & say the sounds. You'll hear the voiced sounds. b. Put your hand on your throat. You'll feel a vibration with the voiced sounds. c. Put a piece of paper in front of your mouth & you'll see it move with the unvoiced sounds. This distinction will then be useful for discrimination & correction & also if you want to introduce the plural & 3rd person singular ending rule & the past tense ending rule - see the sounds' activities 7.Work on the features of sounds in combination as they crop up in context - weakening, linking, intrusion, elision & assimilation. You can find their definitions on the sounds in combination page. on sounds with other aspects of your lessons. For example, when teaching vocabulary, highlight difficult sounds, mark them under the word & get your students to copy them down. If the students cannot see the relevance then they will lose use. If the sounds are known to your students they can be fairly autonomous with new vocabulary with a good dictionary. Don't forget to teach them how to use the dictionary effectively. Above all, make learning the sounds fun & don't take it too seriously! A few links connected For a directory of where you can get the phonemic symbol fonts to install on your computer. The web site of the International Phonetic Association The Phonemic Inventory of Modern Standard Vulcan A Vulcan Academy Linguistics Department Web Booklet A run through of all the sounds together & in isolation. For each sound there is a mouth & lip diagram, a recording of the sound, recordings of the sound in isolated words. For teachers & students alike to get to grips with the sounds. Check out the 'New Randomizer' - minimal pair man/men Lots of activities on sounds at this mammoth site. IATEFL Pronunciation SIG - good for links & there's a collection of articles on phonology. A couple of recommended Foundations - Adrian Underhill (Longman) A teacher awareness book that takes a systematic approach & has lots of practical ideas. Clearly - Rogerson & Gilbert (CUP) More for the learner & very well built up in simple & clear the phonemic chart page the sound activities the sounds in combination page To the phonology index
GRADE 6 RATIOS AND PROPORTIONAL RELATIONSHIPS
- Reasoning with ratios involves attending to and coordinating two quantities.
- A ratio is a multiplicative comparison of two quantities, or it is a joining of two quantities in a composed unit.
- Forming a ratio as a measure of a real-world attribute involves isolating that attribute from other attributes and understanding the effect of changing each quantity on the attribute of interest.
- A number of mathematical connections link ratios and fractions:
  - Ratios are often expressed in fraction notation, although ratios and fractions do not have identical meaning.
  - Ratios are often used to make "part-part" comparisons, but fractions are not.
  - Ratios and fractions can be thought of as overlapping sets.
  - Ratios can often be meaningfully reinterpreted as fractions.
- Ratios can be meaningfully reinterpreted as quotients.
- A proportion is a relationship of equality between two ratios. In a proportion, the ratio of two quantities remains constant as the corresponding values of the quantities change.
- Proportional reasoning is complex and involves understanding that:
  - Equivalent ratios can be created by iterating and/or partitioning a composed unit;
  - If one quantity in a ratio is multiplied or divided by a particular factor, then the other quantity must be multiplied or divided by the same factor to maintain the proportional relationship (see the worked example after this list); and
  - The two types of ratios - composed units and multiplicative comparisons - are related.
- A rate is a set of infinitely many equivalent ratios.
- Several ways of reasoning, all grounded in sense making, can be generalized into algorithms for solving proportion problems.
- Superficial cues present in the context of a problem do not provide sufficient evidence of proportional relationships between quantities.
Lobato, J.E. (2010). Developing Essential Understanding of Ratios, Proportions & Proportional Reasoning for Teaching Mathematics in Grades 6 - 8. Reston, VA: The National Council of Teachers of Mathematics, Inc.
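A short worked example of the scaling idea above (the numbers are invented for illustration):

```latex
% If 3 notebooks cost 5 dollars, what do 12 notebooks cost?
\frac{3}{5} = \frac{12}{x}
\qquad\Rightarrow\qquad
x = 5 \times \frac{12}{3} = 20
% Both quantities were multiplied by the same factor (4), so 3:5 and 12:20
% are equivalent ratios and the proportional relationship is preserved.
```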
Coal has no fixed chemical formula and is therefore classified as sedimentary rock. The major elements of coal are silicon, calcium, aluminum, iron and magnesium while the minor elements are potassium, sodium, titanium, phosphorus, chlorine and manganese. Geologists estimate it took five to eight feet of peat to make one foot of coal. Coal deposits vary from a few inches to several hundred feet with most beds being 2.5 to eight feet in depth. Occasionally, the earth's crust buckled or folded and increased the pressure and heat beneath the surface. This increase of pressure formed higher grades or better quality of coal. Each grade of coal was rated by A being the best grade or quality in the category followed by B, C, etc. There are four main classes of coal as follows: 1) Anthracite is the hardest coal and lies deepest in the earth. Most of it is found in Pennsylvania. It has the highest percent of carbon and the least percent of water causing no smoke when it burns, and therefore has less heating value. 2) Bituminous is soft coal, from the Cretaceous Period, and is half of the world's supply of coal. 3) Subbituminous is a softer coal, from the Tertiary or Late Cretaceous Period, and contains 25 percent water. 4) Lignite is the hardest type of brown coal containing 50 percent water, and is therefore a soft coal. The government started analyzing the coal once it became a major source of energy in the United States. They analyzed the moisture, volatile matter, fixed carbon, ash, sulfur and British Thermal Units (BTU). Good quality coal should have 12 to 16 percent moisture content. If the percent is too low and the coal becomes wet, this can cause spontaneous combustion. If the percent is higher than 16, it can lessen the BTU value. The British Thermal Unit (BTU) is the ability the coal has to produce heat. The higher the BTU is the better the coal quality. The fixed carbon rating is the amount of carbon available for burning. The higher the rating is the better the coal quality. The volatile matter is the gas and oil in the coal that is available to burn. The higher the rating is the better the coal quality. The ash content is how many ashes and clinkers are left after the coal has burned. The less ash there is the better the coal quality. There should be less than one percent sulfur in the coal because the sulfur when it is burned produces the pollutant, sulfur dioxide. The lower the sulfur content is the better the coal quality. The Green River Coal Region is the largest in Wyoming with 16,800 square miles containing 237,110 million tons of coal. It contains sub-bituminous C to high volatile C bituminous. Bituminous and high ranking sub-bituminous coal moisture is less than 15%, volatile matter content is 30 to 40% and fixed carbon content is greater than 40%. The lower ranked sub-bituminous Tertiary coals have moisture at 20 to 30%, as well as volatile matter and fixed carbon. Ninety-nine percent of Wyoming's coal contains less than 1% sulfur, but the highest sulfur coal is in the Green River Coal Region. The Wasatch Formation coals not being mined have 7% sulfur. Western Wyoming coal heat values average 9,600 BTU/pounds and Wyoming's average is 12,000 BTU/pounds. Coal bearing rock in the Green River Region is largely concealed by younger rock and little is known about the total coal resources in the region particularly in the upper Green River Valley area. 
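As a rough illustration of the analysis criteria described above (the thresholds are those quoted in the text; the function itself is only a sketch, not part of the original article):

```python
# Screen a coal analysis against the rule-of-thumb criteria quoted in the text:
# 12-16% moisture, less than 1% sulfur, and a higher BTU value is better.
def screen_coal(moisture_pct, sulfur_pct, btu_per_lb):
    notes = []
    if moisture_pct < 12:
        notes.append("moisture below 12%: wetting may risk spontaneous combustion")
    elif moisture_pct > 16:
        notes.append("moisture above 16%: heating value (BTU) is reduced")
    if sulfur_pct >= 1.0:
        notes.append("sulfur at or above 1%: burning produces more sulfur dioxide")
    if btu_per_lb >= 12000:
        notes.append("heat value at or above the 12,000 BTU/lb Wyoming average")
    return notes or ["meets the basic criteria quoted in the text"]

# Example: the Blind Bull / Vail bed analysis quoted below in the article.
print(screen_coal(moisture_pct=7.4, sulfur_pct=0.6, btu_per_lb=12210))
```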
Coal beds occur in the Mesaverde and Lance formations of the Upper Cretaceous, the Fort Union of the Paleocene and Wasatch Formation of the Eocene age. The Hams Fork Coal Region is the fifth largest in Wyoming with 49,160 million tons of coal. It is in the Overthrust Belt which is underlain by coal bearing rock from the Bear River Formation in the Lower Cretaceous and the Frontier, Blind Bull and Adaville formations in the Upper Cretaceous while the Evanston formation is in the Paleocene Age. The Overthrust Belt is part of a larger Tectonic Province of the Cordilleran Belt from northern Alaska to southern Mexico. The Cordilleran Fold Belt is divided longitudinally into at least nine segments or salients. The Idaho-Wyoming-Northern Utah salient is 200 miles long, arcuate, with an easterly convex belt of faulted and folded rocks bordered on the north by the Snake River Plain and on the south by the Uinta Uplift. The east is defined by Cliff Creek-Prospect-Darby Fault and the western boundary is the Wasatch Front. The Overthrust Belt is folded Paleozoic and rocks with the younger Cretaceous and Tertiary rocks of the area resting uncomfortably on top of these long, narrow belts bound by major thrust faults or eroded limbs of folds. The coal bearing rocks usually dip westward 16 to 80 degrees with an average of 25 to 30 degrees. The Blind Bull or Vail coal bed has high volatile bituminous coal up to 10.2 feet thick. Most mines mined six to 7.5 feet of low ash and sulfur coal beds ten miles long. The analysis of the coal shows 7.4% moisture, 39.4% volatile matter, 47.5% fixed carbon, 5.7% ash, .6% sulfure and 12,210 BTU/pounds. The Jackson Hole Coal Field is 700 square miles with 6,340 million tons of coal. The coal occurs in the upper Cretaceous, Paleocene and Eocene age rocks. The coal is probably sub-bituminous. Teton county has mined 0.01 million tons of coals while Sublette County has mined 0.02 million tons. Humans have mined coal for centuries. During the Bronze Age, 3,000 to 4,000 years ago, people of Glamorganshire, Wales used coal to burn their dead. The Chinese were using coal in 1100 B.C. while the Greeks used coal several hundred years before Christ. The Pueblo Indians used coal for their pottery making much earlier in the southwest than the white settlers used coal. Coal was first discovered by the white settlers in the United States during 1673, 80 miles southwest of Chicago along the Illinois River, but was not mined until the 1740's in Virginia. John C. Fremont found "alternating beds of coal and clay" just east of what became the Cumberland townsite in southwest Wyoming on August 19, 1843. In 1859, a coal mine was established three miles east of the old Bear River City as a Military Coal Reservation for the blacksmith at Fort Bridger. There are several types of coal mines, but the type used in the Upper Green River Valley area is the Drift Mine. It is used to reach coal in hillsides. The main entrance is located where the coal is exposed and the tunnel is dug farther back into the bed of coal. "Shooting-of-the-solid" is coal being blasted off the solid bed without undercutting, or the mine face near the bottom of the coal bed is dug out so coal will fall down. A dangerously large explosive charge is needed and produces the dust and fine coal. Short holes are drilled at intervals along the face. The explosives are inserted in these holes. 
Black powder and dynamite were once used as chief explosives, but have a tendency to set fire to the gases and dust so they were discarded as too dangerous. Permissible powder replaced the black powder and dynamite and was approved by the U.S. Bureau of Mines, which is no longer in existence. Sometimes to make the mine safer by minimizing the possibility of the coal igniting, the mine was rockdusted by pulverizing limestone, a nonexplosive matter, so it could be sprayed on the roof, walls and floor of the mine. The tipple is where the coal is cleaned and sized. There are five sizes of coal which are lump, egg, nut, pea and slack. The lump is the largest ranging from the size of a man's fist to two or three feet in diameter. Egg is the next size and is about the size of any egg. Nut is the third size being the size of any shelled nut. Pea is the fourth size and it is the size of the vegetable pea. The smallest size is slack, which is tiny, fine pieces with a great deal of dust. The size of coal bought depends on what kind of heating unit the coal is going to be used in to produce heat. The coal miners used carbide lights attached to their canvas and leather miner caps in the beginning to see in the mines. They put carbide in the light and water dripped on the dry carbide producing a gas which could be ignited producing the light needed to see. After batteries were developed in the 1940's, the miners used battery run lights attached to their hard hats. One old timer, Vince Guyette, called these mines "gopher-hole wagon mines." In the late 1800's and very early 1900"s, many of the mines were for anyone to use. An individual would go to a mine and dig his own coal. He did not have to pay for the coal, just use his muscles. As time went on and the mines developed, a small amount was charged by the mine owner such as in 1906, Julius Sayles, owner of Viola Coal Mine, was charging $1.50 to $2.00 per ton. For many of the ranchers, it was a long distance to the coal mine with a team and wagon. Many were 20 to 40 miles from the mine with some being as much as 50 miles away. The ranchers or business men who came for coal were fed a meal by the wife of the mine family, and many times they stayed the night before starting home with a load of coal. If they did not stay at the mine, they would spend the night with ranchers along the way. This all made the coal mines fit into the neighbor exchange system to survive along with helping a neighbor brand, gather cattle, build a house, or anything else that needed to be done so people in this very isolated, mountain valley could survive. The ranchers needed the coal to run their forges which kept the ranch operating. Many of the ranchers were 30 to 40 miles from town and 100 miles from a railroad, and therefore needed to be able to fix and build needed equipment. As time went on people from the small communities in the area would pay for coal to be hauled to their home or business for heating fuel. When the oil boom in the 1900's took place around LaBarge and Big Piney, the coal was bought to heat the boilers fueling the rigs to drill for oil. As the oil and gas industry developed, it replaced coal as the energy source. The mines eventually closed when oil and gas replaced coal as the main energy source. Furthermore, when the highways were built, it was easier to go to Rock Springs for coal rather than to isolated mines with rough, dirt roads accessing them. 
According to Gardner, Johnson and Allen in The Twitchell Mine: A Historical Overview, the following four factors influenced the growth and development of the wagon mines: 1) The lower elevations of Wyoming had little wood, but had an abundance of coal for fuel. 2) There was little "hard cash" in the late 1800's and early 1900's, and it took only hard work to get the coal. Few people filed on mining claims so as not to have to pay Uncle Sam. 3) Homesteads were a long way from the railroad where coal could be obtained readily. 4) Ambitious ranchers who wanted more income would develop a coal mine.
At Harvard’s SEAS laboratory, Professor Robert Wood and his colleagues are making vast advances in the field of micro air vehicles. Their latest project, the Robobee, seeks to incorporate bio-inspired design with a state-of-the-art navigation system that allows for autonomous flight and coordinated work with other Robobees. Their team, which includes evolutionary biologists, engineers and computer scientists, implements many of the tricks that the bee uses to create lift in unsteady aerodynamic conditions in which classical aerodynamics struggles. Although the team has done incredible work with swarm coordination and a new pop-up-book inspired fabrication technique, the real interest for bioaerial locomotion is the bee-inspired design. Bees, which measure anywhere from a few millimeters to nearly four centimeters, must overcome many aerodynamic challenges in order to fly. As Wood’s team soon found out, the unsteady flows and irregularity of external flow play a crucial role in the flight of a vehicle of such a small size. Stacey Combes and Robert Dudley discovered that one way that bees overcome this challenge is by extending their hind legs when they encounter turbulent flows. This improves their roll stability but increases body drag creating a need for 30% more power to maintain flight. This is one way in which flight speed is limited in bees. Based on the design pictures for the Robobee, it appears that the engineers are addressing this issue with the use of halteres. These small structures posterior of the wings are present in some two-winged insects and function as small gyroscopes to maintain balance. They accomplish this by flapping up and down like the wings to create additional stability. The inspiration does not stop there. Foremost in flight and design are the wings. Though scientists understand the fluid mechanics of fixed-wing aircrafts, they are only just beginning to realize the complex aerodynamics of unsteady flows and flight tricks of bird and insect flight. From a mere visual analysis of bee flight we can draw many observations. Firstly, bee’s wings seem to flap through approximately a ninety-degree angle rather than the large swooping flaps of other insects and birds. The wing beat frequency is also incredibly high at 230 times per second. Michael Dickinson in his work at Caltech analyzed the bee’s flight pattern and found that bees were able to create enough lift through “the unconventional combination of short, choppy wing strokes, a rapid rotation of the wing as it flops over and reverses direction, and a very fast wing-beat frequency”(1). As the wing flaps up and down, it undergoes passive deformation that allows the camber of the wing to change with the up and downstroke. One can see this clearly in the following videos: A team of researchers from Harvard found that as a result of passive deformations one should expect “instantaneous changes in aerodynamic properties” during flapping. Consequently, Wood and his team decided to mimic bee wings because the bee is capable of achieving remarkable flight performance in the unsteady conditions that it operates in. However, the bee’s flight mechanism is not perfect. Dickinson found that bees control their power output by the arc length of their wing stroke. Bees would be more aerodynamically efficient if they changeed the rate of wing stroke rather than the arc length of the wing stroke. This is an improvement that the Robobee team will make full use of. 
Additionally the size and shape of the bees wing offers a beautiful and helpful example of wing structure, but it is not perfect, yet. In a recent paper by Wood and his colleagues, they found that the propulsive efficiency and control could be improved by implementing active deformation of the wings. By embedding micro-actuators, pressure distribution over the wings could be more closely controlled and aerodynamic efficiency of the wings could potentially be increased. Wood’s team has begun to apply many of these discoveries. The current prototype uses biomimetic wings on a rigid upright platform. A suitable power supply has not yet been manufactured nor has gyroscopic stabilization been implemented though Wood’s team has plans to do so. Nonetheless, the Robobee’s structural skeleton is an effective base which they can complement with the control systems for flight and coordination. As it stands today, the Robobee team has high hopes and lofty goals for their micro air vehicle. Though the Robobee prototype is still tethered to the earth, Wood and his team have identified and begun solving many of the challenges that this project faces. Their bee-inspired design has given them a valuable basis for their work from which that hope to learn from insect flight stabilization and the bee’s wing design and deformation. Looking forward, the team hopes that the Robobee may one day work as a swarm to execute reconnaissance missions, monitor environmental change, and even replace real bees by pollinating flowers and crops. - Please click on all the hyperlinks in the text for great insight into other aspects of the work and more detailed descriptions - A great podcast from the museum of science giving an overview of the work - Introduction to new research and interest in MAV - The US Air Force’s introduction to MAV’s (a bit disturbing) “Bees” on Wikipedia Combes and Dudley, Turbulence Driven Instabilities Limit Insect Flight Performance, OEB at Harvard (2009) Dickinson, Deciphering the Mystery of Bee Flight, CalTech (2005) Bee Flight Mechanics from NewScientist Shang, Combes, Finio, and Wood, Artificial Insect Wings of Diverse Morphology for Flapping-Wing Micro Air Vehicles, OEB at Harvard (2009)
For Immediate Release
Thursday, June 11, 2009
Colorado State University Researchers Find That Atomic Vibrations Lead to Atom Manipulations

FORT COLLINS - As electronic devices get smaller and the methods for small-scale manufacturing advance, scientists at Colorado State University are working to perfect even smaller structures by manipulating individual atoms. A new study from Colorado State University graduate student Byungsoo Kim and associate professor of mathematics Vakhtang Putkaradze reveals that scientists can extract and replace a single atom. Collaborating on the research is Professor Takashi Hikihara from the Department of Electrical Engineering at Kyoto University in Japan.

By using the tip of an atomic force microscope - a device that resembles a long needle that probes atoms - researchers were able to show that an atom could be extracted from a lattice structure without damaging surrounding atoms. Further, the extracted atom could be deposited back into the hole that was created or where a neighboring atom once was located.

"This is like putting nails in the wall and taking them out, only the nails are atoms, the wall is a lattice of atoms and the tool is the tip of an atomic force microscope," said Putkaradze. "This tool is billions of billions of times larger than the nails. It is a bit like doing home improvement projects with a tool that is much larger than the house. We know that this tool can deposit the atoms in the lattice, so the force between the atom and the lattice must be stronger than the force between the tool and the atom."

"On the other hand, we also know that with the same tool we can extract the atoms from the lattice, so the force between the atom and the tool must be stronger than the force between the atom and the lattice," Putkaradze said. This puzzle was resolved by the CSU-Kyoto team.

"We showed that you can both take atoms out of the lattice and put them back," Putkaradze said. "This atomic construction is much easier on some levels than 'regular' construction because of atomic vibrations; the atoms will go in and get out all by themselves, with no force necessary, just by keeping the hammer close enough and long enough to the nail, and you can either take the nail out or put it in."

"The impact of this research could result in smaller, faster and more energy efficient electrical devices, such as computers and cell phones," said Putkaradze. "There is the potential that current computers or cell phones could be 100 times faster as a result of smaller transistors and microchips. The devices would also be more energy efficient in the process."

The extraction and deposition of single atoms using the atomic force microscope tip is also a promising technique for building nanostructures. Nanotechnology is the science of creating electronic circuits and devices that are designed and built from single atoms and molecules on a scale of nanometers. One nanometer is one billionth of one meter; the width of one human hair is about 100,000 nanometers.

The study was published in the May 29, 2009, edition of Physical Review Letters.
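The release's qualitative reasoning (deposition works because the lattice pulls harder than the tip, extraction works because the tip pulls harder than the lattice, and thermal vibrations let the atom hop on its own if the tip is held close enough for long enough) can be caricatured with a simple thermally activated hopping rate. The sketch below is only an illustration of that idea, an Arrhenius-type estimate with made-up barrier heights, and is not the model from the Physical Review Letters paper:

```python
import math

# Toy illustration (not the CSU/Kyoto model): thermally activated hopping.
# rate = attempt_frequency * exp(-barrier / kT); mean wait = 1 / rate.

K_B = 8.617333262e-5      # Boltzmann constant in eV/K
ATTEMPT_FREQ = 1e13       # typical atomic vibration (attempt) frequency, Hz
T = 300.0                 # room temperature, K

def mean_wait_time(barrier_ev: float) -> float:
    """Average time in seconds before a thermally activated hop over the barrier."""
    rate = ATTEMPT_FREQ * math.exp(-barrier_ev / (K_B * T))
    return 1.0 / rate

# Hypothetical numbers: a deep barrier with the tip far away, and a shallower
# effective barrier when the tip is held close (its attraction lowers the barrier).
for label, barrier in [("tip far (deep barrier, 1.0 eV)", 1.0),
                       ("tip close (lowered barrier, 0.5 eV)", 0.5)]:
    print(f"{label}: mean wait before a hop ≈ {mean_wait_time(barrier):.3g} s")
```

With these made-up numbers, halving the barrier turns a mean wait of an hour or two into tens of microseconds, which is the "no force necessary, just wait" picture Putkaradze describes.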
Connolly, P., & Vilardi, T. Writing to Learn Mathematics and Science. New York: Teachers College Press, 1989.
Connolly and Vilardi outline the ways in which language affects the math classroom. They argue the use of writing and an emphasis on language may help traditionally under-represented student populations improve their performance in the math classroom. This book is divided into six sections ranging from the practical application of using language to solve math problems to aligning programmatic outcomes with Writing Across the Curriculum.

Countryman, J. Writing to Learn Mathematics: Strategies that Work. Portsmouth, NH: Heinemann, 1992.
This book argues using writing in a mathematics classroom allows students to see math as something other than a set of pre-arranged formulas to memorize and gives them ownership over the subject matter. Practically, Countryman posits ways in which the classroom teacher can use various forms of writing like journal articles, autobiographies and learning logs to enhance students' ability to understand mathematical concepts. Countryman provides examples of student writing from across grade levels to underscore her argument.

Coker, F.H., & Scarboro, A. "Writing to Learn in Upper-Division Sociology Courses: Two Case Studies." Teaching Sociology 18.2: 218-222.
In this short journal article, the authors at a small liberal arts college outline how emphasizing writing in their advanced sociology courses helps make students better learners as well as better sociologists.

Hancock, Elise. Ideas into Words: Mastering the Craft of Science Writing. Baltimore: Johns Hopkins, 2003.
In this relatively short book, Hancock covers what it means to become immersed in scientific scholarly conversations. Taking a step-by-step approach, Hancock discusses how to formulate research questions, conduct interviews and structure a scientific essay's story in compelling ways. She also provides examples of scientific writing that maintains scientific rigor without an over-reliance on technical jargon.

Locke, D.M. Science as Writing. New Haven: Yale University Press, 1992.
As a former chemist-turned-English teacher, Locke attempts to alleviate the perception that science writing is a hyper-specialized, repetitious event. He devotes a chapter to the rhetorical effect of science writing and compares the humorless, expressionless style of writing valued in certain publications with Barbara McClintock's passionate rendering of the genome. He goes on to argue for the importance of story-telling when conveying a novel scientific worldview, using Einstein's presentation of his relativity theory and others as examples.

Useful Links for Electronic Writing Across the Curriculum

The OWL at Purdue University (sponsored by the Writing Center) provides sources for writers on drafting, revising, editing, citation, etc. It includes resources for teachers as well as guide sheets and activities for student writers. The site is accessible, and offers useful support for students who need more explanation or practice with concepts introduced in class.

The WAC Clearinghouse includes teaching and scholarly resources for developing writing across the curriculum. Many handouts are available for use in workshops, inquiry groups, etc.

The University of Wisconsin-Madison WAC site provides a broad range of resources including suggestions for incorporating writing into courses (sequencing assignments in a syllabus, conferences with students, response, etc.) and discipline-specific examples from faculty.
The writing site at Colorado State offers abundant resources for student writers and instructors. Resources are organized into "collections" around a particular topic. For example, if you need help teaching or engaging in the composing process, the "composing process" collection will give you access to several related resources, including "development," a collection that offers further resources, such as writing guides for including detail (and more), from which you can choose according to your needs. Extensive resources are available for instructors and are organized by discipline or by focus area.

Harvard Study of Writing -- In 1997, Harvard embarked on a four-year study of undergraduate writing, and the results of this study are located at this address. Nancy Sommers and her colleagues make the statistical results of the study available via hyperlink and also made a 14-minute film version of the study's findings. Perhaps the most compelling form of data presentation comes in another short film, devoted solely to response, called Across the Drafts. In this 20-minute film, we meet a freshman comp student and his writing teacher conversing about their individual experiences in a writing classroom. Jon (the student) and Tom (the instructor) provide some practical anecdotes and offer advice on how to tailor feedback to which students are likely to be receptive.

Selected Writing across the Curriculum/Writing in the Disciplines and College Writing Development Literature

Anson, Chris. The WAC Casebook: Scenes for Faculty Reflection and Program Development. Oxford UP, 2002. http://www.oup.co.uk/isbn/0-19-512775-7.
In this series of short case studies, Chris Anson allows instructors to play the "what-if" game with a number of issues pertaining to WAC/WID. Readable and challenging, Anson asks us to think about complex issues such as designing writing assignments across courses, negotiating competing goals in an interdisciplinary course setting and responding effectively to student writing. At the heart of this book, Anson asks teachers interested in WAC to consider what's important about student writing and how it can best be used to enhance a student's academic experience. For example, in the case titled "Showdown at Midwestern U.," a lively email exchange occurs between two professors with competing ideologies about the purpose of general education writing courses. Bob, chair of the economics department, comes to Sherry, the director of the Office of Campus Writing, arguing that the purpose of a first-year writing class should be to teach students how to construct arguments based on a synthesis of other literary texts. Sherry counters by arguing that since so much writing done in an academic setting is discipline-specific, developing a student's ability to write about literature would serve only to make them more adept at writing about literature. At the end of this and other case studies in the book, Anson provides a set of open-ended questions for readers to consider. These grand questions about writing's different purposes across disciplines are what make this book a fine entry point for those interested in familiarizing themselves with the ongoing conversations at work in WAC/WID.

Anson, Chris M., John E. Schwiebert, and Michael M. Williamson. Writing across the Curriculum: An Annotated Bibliography. Westport, Conn.: Greenwood Press, 1993.
This annotated bibliography of all things WAC is divided into two parts: Anson devotes the first third of the text to the theoretical framework behind the sub-genre and the rest of the book to discipline-specific texts about incorporating writing into the curriculum. A two-sentence description follows each entry.

Bazerman, Charles, and David R. Russell, eds. Landmark Essays on Writing across the Curriculum. Davis, CA: Hermagoras Press, 1994.
This collection consists of articles representing the range of WAC work over time. Pieces in the first section trace the history of the WAC movement from the first literacy crisis in the 1870s in order to explain the reasons for the movement's success and staying power. The second section explores programmatic and institutional projects ranging from studying student learning in individual courses to Fulwiler's suggestions about how to make WAC initiatives work and Kinneavy's description of the kinds of ways institutions approach WAC. The section ends with Maimon's essay about the second stage of WAC: how we go about growing and nurturing WAC programs that are well established. Janet Emig's oft-cited article introduces the third section, which looks at WAC in the classroom. She explores why writing is such a unique way of learning in any context. Other studies examine what it is like for students to learn to write in different disciplinary classrooms.

Bazerman, Charles, Joseph Little, Lisa Bethel, Teri Chavkin, Danielle Fouquette, and Janet Garufis, eds. Reference Guide to Writing Across the Curriculum. West Lafayette, Indiana: Parlor Press and The WAC Clearinghouse, 2005.
The first part of the book is an overview of key terms and historical moments relevant to the Writing Across the Curriculum movement. Bazerman defines the difference between WAC and WID in terms of literacies, pedagogies and curricular initiatives. The second part of the book outlines three major approaches to theory and research in WAC: classroom writing practices, writing to learn, and rhetoric of inquiry. In higher education, students often do not see the personal or professional relevance of the goals of writing in disciplinary courses. Across studies students seem to share the "struggle to come to discover what it is they know, what it is they are committed to, and how those perceptions and commitments can be enacted in professional and academic ways" (47), suggesting that writing in any discipline needs to be made relevant to students' lives. The book suggests new directions for the WAC movement including writing-intensive courses, Writing Center and peer tutor initiatives, interdisciplinary learning communities, service learning, and electronic communication across the disciplines. Finally, the book addresses the challenges educators and administrators face in assessing and evaluating student work and WAC programs. Offers further resources by discipline.

Bean, John C. Engaging Ideas: The Professor's Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom. San Francisco: Jossey-Bass, 2001.
"A pragmatic, nuts-and-bolts guide" for busy college professors across the disciplines, Engaging Ideas is designed to help teachers engage students in activities that support critical thinking and active learning. Divided into four sections, the book can be read linearly from front to back or can be easily searched depending on the needs of the reader.
Grounded in the principles of Writing Across the Curriculum, sections take up the connection between thinking and writing, the creation of problem-based writing assignments, designing reading, writing and thinking activities for active learning in the classroom, and responding to student writing. Bean provides classroom examples from a range of courses with different formats, subject matter, learning goals and departmental circumstances. With suggestions that are easily adapted to particular learning situations, Bean's is an incredibly useful hands-on resource.

Carroll, Lee Ann. Rehearsing New Roles: How College Students Develop as Writers. Carbondale: Southern Illinois UP, 2002.
Carroll presents a longitudinal study of college students throughout their college experience. Her findings suggest that first-year writing courses are useful, but not sufficient, in terms of supporting students' development as writers. Rather, she argues that faculty in the disciplines can and should develop strategies to support students' growth, proposing ways for instructors in specialized disciplines to build on the rhetorical skills and sensibilities students begin developing in first-year composition courses. She outlines six recommendations, including "redesigning the literacy environment" of students' majors so they work on complex literacy tasks over sequenced courses; providing "scaffolding" for development by explicitly teaching discipline-specific genres and research strategies; and carefully designing projects that will challenge a range of students by emphasizing the process rather than only the product of writing assignments (129-410).

Carter, Michael. "Ways of Knowing, Doing, and Writing in the Disciplines." CCC 58.3: 385-418.
This article responds to the common assertion that disciplines teach specialized knowledge while writing is a generalizable skill. To the contrary, Carter argues that ways of knowing in a particular discipline are closely tied to ways of doing in that discipline. Moreover, writing should be considered a vital way of doing that is best conceptualized and taught by experts in the discipline. In other words, disciplinary instructors have a responsibility, according to Carter, to use writing as a "means of teaching and evaluating" what students should be able to do and know in their major disciplines (409).

Fulwiler, Toby, and Art Young, eds. Language Connections: Writing and Reading Across the Curriculum. Urbana, Illinois: National Council of Teachers of English, 1982. http://wac.colostate.edu/books/language_connections/.
Composed of essays authored by fourteen faculty members at Michigan Technological University, this collection argues teachers across the disciplines should use James Britton's concept of "expressive" utterances as a frame for understanding and valuing students' processes of coming to know. The contributors suggest in varying ways that teachers in all disciplines value students' ideas in process as well as those ideas as represented in final products. Discussing students as developing readers and writers, the essays propose that in addition to activities encouraging students to write to learn, listening as well as small and large group discussion activities facilitate students' engagement with language, therefore enhancing students' understanding of course material.

Cohen, Samuel. "Tinkering toward WAC Utopia." Journal of Basic Writing 21.2 (2002): 56-72.
The article is intended for faculty and administrators involved in the early stages of implementing a Writing Across the Curriculum program as well as those in the process of rethinking the goals of an existing initiative. The author suggests Writing Across the Curriculum programs can and should serve as sites of educational reform. While faculty and leaders need to focus on teaching discipline-specific writing practices, students should also be made conscious of the social construction and history of disciplinary norms, thus making critical thinking skills central to the teaching of writing. Cohen argues faculty and administrators must work together from within existing disciplinary structures to develop shared goals for teaching students not only to use disciplinary discourses, but to critique them as well.

Herrington, Anne, and Charles Moran, eds. Writing, Teaching, and Learning in the Disciplines. New York: MLA, 1992.
The collection is made up of five sections. The first, "Historical Perspectives," describes the British and American origins of WAC. The second, "Disciplinary and Predisciplinary Theory," asks if we should focus on disciplinary knowledge or on more general issues of teaching and learning that go beyond disciplines (45). Britton's piece investigates how students use structures and tools they have developed in other experiences to make sense of new concepts and ideas. Bazerman, in "From Cultural Criticism to Disciplinary Participation: Living with Powerful Words," argues that we must make visible the historical, shifting, multivoiced makeup of the disciplines so students see the social consequences of their work in these disciplines. Judith Langer explores the difference between making content or the rules of discourse the subject matter of our courses as opposed to making ways of inquiring and thinking central to the work of teaching and learning. In response to the call for teachers to develop more accurate language for talking about discipline-specific thinking and writing with students, Odell looks at patterns of thinking that influence how disciplinary instructors evaluate good writing. The final section examines the epistemological and ideological aspects of the disciplines and implications for writing, teaching and learning.

Herrington, Anne, and Marcia Curtis. Persons in Process. Urbana, IL: National Council of Teachers of English, 2000.
Herrington and Curtis follow four college students over their years of university study and theorize from their development as writers in several courses across disciplines. Drawing from Kohut's "self-in-relationship to objects" psychoanalytic theory of personal development, they demonstrate that writing in all contexts is more than merely an act of self-expression; it is rather a self-constituting act, one that is always carried out in relationship with others, such as one's audience. Herrington and Curtis contend that because students' writing development is closely tied to identity development, teachers across the disciplines must learn to be empathetic, respectful and understanding listeners, responders and analysts, as much of the personal contact students have with teachers is through writing. Furthermore, they argue students continue to develop as writers throughout their academic careers when writing is infused into the curriculum, and therefore writing should be a part of courses across the disciplines. The authors offer specific examples.

Hult, Christine A. Researching and Writing: Across the Curriculum. Boston: Allyn and Bacon, 1995.
This text, designed primarily for students writing research papers in a variety of disciplines, gives a number of helpful examples of successful research papers that highlight the differences in conventions for each field of study. Hult recognizes the importance of giving discipline-specific guidance on the writing of academic papers and includes work from fields such as business, social sciences and biology. Also included in this reader is a complete reference guide for the variety of citation formats students may encounter throughout their coursework.

Kiefer, Kate. "Integrating Writing Into Any Course: Starting Points." AcademicWriting (2000). http://wac.colostate.edu/aw/teaching/kiefer2000.htm.
In this article Kiefer offers concrete suggestions for faculty who desire to incorporate writing into their curriculum but may not know where to start. The author suggests teachers begin by articulating their goals for students and integrating writing by designing writing assignments that are meaningful and further their stated goals. Kiefer provides detailed strategies to help teachers design meaningful research paper assignments, informal "writing-to-learn" assignments, as well as specific "writing in the disciplines" tasks. Finally, the author gives tips for assessing student writing.

Light, Richard J. Making the Most of College. Cambridge: Harvard UP, 2001.
At the heart of Light's book is the premise that in order to foster significant learning experiences for students, we first need to provide students opportunities to voice their opinions, to fully listen to those opinions, and finally to include those voices in our campus development efforts if we hope to successfully increase active student engagement across the campus. Light interviews college students, asking them a variety of questions about learning experiences in all areas of campus life. Specifically concerning writing, Light notes that student respondents reported wanting more discipline-specific writing instruction in their upper-division courses and reported that writing instruction was most effective when writing was incorporated throughout the semester in their courses. Additionally, Light notes that academic development and personal development are tied: as students learn and process their coursework, they learn and change as individuals and subjects. Students find meaningful learning experiences when their academic coursework extends and connects with their lives outside of school.

McLeod, Susan, and Margot Soven, eds. Writing across the Curriculum: A Guide to Developing Programs. Newbury Park, CA: Sage, 1992.
This collection of essays offers faculty and administrators advice, models and examples of Writing Across the Curriculum initiatives. The articles range from suggestions for institutional consultants involved in the beginning stages of implementing WAC/WID to models for faculty dialogue across the disciplines concerning WAC/WID programs. Additionally, essays include resources for faculty to assist in developing and sustaining writing-intensive courses and offer tools for supporting WAC/WID. The collection offers a list of further reading for those interested in starting WAC/WID initiatives.

Monroe, Jonathan. Writing and Revising the Disciplines. Ithaca, NY: Cornell University Press, 2002.
This collection includes essays from distinguished professors in nine different disciplines and provides analyses of various disciplinary discourses as well as writing strategies commonly employed in the respective disciplines. The collection functions as a useful tool for teachers and researchers across disciplines to identify disciplinary practices and read across these practices in order to come to a deeper understanding of how language is valued and used in the disciplines.

Moss, A., and C. Holder. Improving Student Writing: A Guidebook for Faculty in All Disciplines. Dubuque, IA: Kendall Hunt, 1988.
Moss and Holder argue for a collaborative approach to student writing within the disciplines. They assert student writing will improve the most through group projects that approximate real-life working situations and outline methods for faculty to enact these scenarios. In this guide, intended for faculty, the authors present strategies for assigning writing in the classroom, designing effective assignments and writing-based tests, and evaluating student writing.

Orr, John C. "Instant Assessment: Using One-Minute Papers in Lower-Level Classes." Pedagogy 5.1 (2005): 108-111.
In this article Orr describes his successful use of the "one-minute paper"—an exercise in student reflection and teacher assessment proposed by Richard Light. At the end of each class, Orr has his students write for one minute about what they learned and what they still do not understand. The author offers examples of how the technique allows him to gauge his students' learning on a daily basis and assists him in planning future classes that are responsive to the needs of the majority of his students. He ultimately suggests that the technique is especially successful in lower-level courses, where students are the most likely to be hesitant to voice concerns and raise questions.

Palsberg, J., and Baxter, S. J. "Teaching Reviewing to Graduate Students." Communications of the ACM 45.12 (2002): 22-24.
In this article Palsberg and Baxter describe the design of a graduate course in which the teaching of review writing, an important genre for a new Ph.D. to be able to master but one that is rarely a part of a Ph.D. education, is a primary goal. The authors demonstrate how practicing writing in this new genre facilitated the improvement of students' writing and reading skills in other areas. The authors reflect on the benefits and limitations of such a course and offer suggestions for faculty interested in developing similar courses.

Reiss, Donna, Dickie Selfe, and Art Young, eds. Electronic Communication across the Curriculum. Urbana, IL: National Council of Teachers of English, 1998.
"Electronic Communication Across the Curriculum is an edited collection in which teachers and program heads throughout the United States present adaptable models of computer-supported communication using the pedagogies of writing for learning and writing with computers -- including science, math, history, philosophy, technical writing, accounting, literature, and marketing." The WAC Clearinghouse (http://wac.colostate.edu/bib/index.cfm?category=1). This collection centers on the possibilities technology offers to enhance writing instruction and provides educators with models of a wide range of electronic communication tools that facilitate the teaching of writing across the disciplines.

Ronald, Kate. "'Befriending' Other Teachers: Communities of Teaching and the Ethos of Curricular Leadership." Pedagogy 1.2 (2001): 317-326.
Drawing on the work of Gregory Marshal, Ronald articulates the importance of community among teachers when working toward curricular change. Ronald illustrates connections between her experience developing and communicating to new teachers a revised curriculum for first-year writing at Miami University and her work creating "a culture of writing" among Business faculty and students. The relationship between teaching and curriculum is vital, she argues, and it takes impassioned, honest, energetic teachers (and teachers of teachers) to productively motivate that relationship.

Roost, Alisa. "Writing Intensive Courses in Theatre." Theatre Topics 13.2 (2003): 225-233.
Roost outlines some qualities of students working in artistic majors, like Theatre, that might make them particularly resistant to the writing process and suggests ways teachers can respond to students' needs by incorporating "low stakes" assignments in their courses. "Low stakes" activities do not influence students' grades; they ask students to explore ideas or experiment with genre, form and writing strategies, and might lead students to reflect on creative processes or think critically about course concepts. Roost offers examples of both "low stakes" and "high stakes" assignments along with ideas (including peer review groups and cover letters from the writer) for making grading and responding to student writing manageable.

Russell, David R. Writing in the Academic Disciplines, 1870-1990: A Curricular History. Carbondale: Southern Illinois UP, 1991.
A review of David Russell's book contends a discipline grows up when someone writes about its history and argues for its merit. If this is true, then Writing in the Academic Disciplines serves as the WAC/WID movement's first car. Russell provides an exhaustive history of how writing has traditionally been conceptualized in academic settings and argues Writing Across the Curriculum is not merely a passing fad or a new way of talking about an already failed experiment. Instead, Russell demonstrates how WAC work has been a cornerstone of American intellectual life since Harvard's founding in the 17th century. While other histories of composition in the United States make for a more engaging read (think James Berlin's Rhetoric, for example), the conclusions pertinent to WAC/WID interests that he comes to at the end of the book make it worthwhile. He contends that one of the most important virtues of the current WAC/WID movement, one that similar movements lacked, is a common language. He makes the case for structured spaces within institutions for both practical and scholarly WAC/WID work that will sustain and legitimize the movement.

Sargent, M. Elizabeth. "Peer Response to Low Stakes Writing in a WAC Literature Classroom." New Directions for Teaching and Learning 69 (1997): 41-52.
Sargent recounts her experience using peer response activities to help students in various literature courses engage more productively with difficult course readings in order to illustrate how peer response to low stakes writing in any course can become a productive way to manage the time it takes for a professor to respond regularly to writing assignments, as well as a useful framework for helping students learn from one another as they wrestle with complex course concepts.
She shares strategies for modeling peer response for students and ideas for handling peer response in large introductory as well as smaller upper-level courses. Ultimately, she explains, peer response allows her to get a sense of how students are interacting with the assigned readings; to design her lectures in response to students' questions and interests; to model for students the developmental nature of literary interpretation; and to help students practice writing to learn.

Segall, Mary, and Robert A. Smart. Direct from the Disciplines: Writing Across the Curriculum. Portsmouth, NH: Heinemann Boynton/Cook, 2005.
This practical book outlines core principles guiding WAC work, provides a case study of what one university's WAC program looks like and gives a number of sample assignments from a variety of disciplines. Nine different academic disciplines are represented in the book and each articulates how the WAC principles outlined in the introduction are implemented in a specific kind of class. For example, a professor in computer science shows how he teaches the basic structure of computer programming by showing how similar writing code can be to organization in standard writing. Most chapters in the book begin with a professor's rationale for why they chose to incorporate WAC/WID into their course design, then move to exactly what this looks like in practice through detailed course objectives and sample writing assignments. Also helpful are examples of student work interspersed throughout most of these narratives. Even though it's easy to imagine someone picking up this book for what it can illuminate about teaching WAC/WID in their own specific discipline, the theoretical framework interspersed throughout the collection can be universally applied in most cases.

Sternglass, Marilyn. Time to Know Them: A Longitudinal Study of Writing and Learning at the College Level. Mahwah, NJ: Lawrence Erlbaum Associates, 1997.
In this longitudinal study of student writing, Sternglass examines the multiple facets of inquiry involved with students learning to write. She discusses their individual histories and their progress as students, as people, and as respondents to high-stakes writing assessment. The larger logic of Sternglass' argument contends that de-contextualized forms of assessment inhibit the progress of all students as writers, but are especially burdensome for students deemed remedial.

Thaiss, Chris, and Terry Myers Zawacki. Engaged Writers and Dynamic Disciplines. Portsmouth, NH: Boynton/Cook, 2006.
Thaiss and Zawacki conduct a study at George Mason in which they interview professors from a variety of disciplines about how they write, how they teach writing and how they think about "alternative discourses," defined either as resistance against disciplinary norms or as alternate but acceptable forms. They interview students from across disciplines as well and find that students go through three stages as writers: 1) they believe they know what all teachers want because they have a sense of the "generic" standards most disciplines share; 2) they recognize differences in expectations as idiosyncratic; 3) they recognize differences as discipline- and context-specific. Students might not reach the final stage because "they do not become sufficiently invested in the discipline's academic discourses, developing instead a greater connection to nonacademic audiences and exigencies" (110).
The final chapter about pedagogical implications recommends ways to facilitate students' development into the third stage, and includes suggestions for faculty across the disciplines, composition program administrators and faculty development programs. The authors propose excellent ideas for assignments that help students develop awareness of themselves within and across disciplinary contexts.

Walvoord, Barbara. Helping Students Write Well: A Guide for Teachers in All Disciplines. New York: Modern Language Association of America, 1982.
While students do learn transferable skills in composition courses, argues Walvoord, they must continue to write across the curriculum in order to learn how to apply what they learn in composition class to thinking, learning and communicating in the disciplines. Positioning teachers in all disciplines as coaches of writing, Walvoord describes ways to make writing meaningful to the course and discipline-specific subject matter; how to respond to student writing and guide students to respond to their own and their peers' writing; how to address particular challenges students face as writers; and how to help students become better writers while using writing to learn in all disciplines (5).

Walvoord, Barbara, Linda Lawrence Hunt, H. Fil Dowling, and Jean McMahon. In the Long Run. Urbana, IL: National Council of Teachers of English, 1997.
Walvoord and her fellow researchers study teachers at three post-secondary institutions in order to determine their expectations for Writing Across the Curriculum programs; how the teachers interpreted their WAC experiences; how WAC experiences influenced teachers' philosophies and pedagogies of teaching; and how WAC experiences affected the career patterns of participating teachers. While chapter seven explores successful and not-so-successful WAC teaching strategies by way of annotated assignment sheets and interviews with faculty, findings from the study suggest that one of the most important things teachers took away from WAC experiences was the desire and ability to alter their teaching philosophies and the confidence to experiment in order to develop strategies appropriate to those philosophies.

Young, Art. Teaching Writing Across the Curriculum, Third Edition. Upper Saddle River, New Jersey: Prentice Hall, 1997/1999. http://wac.colostate.edu/aw/teaching/kiefer2000.htm.
In the opening pages of his short book, Young takes readers through a process of reading student writing modeled after Writing Across the Curriculum workshops he's facilitated with faculty. His goal is to illustrate the difference between assignments that ask students to use writing to learn course material and assignments that require them to use writing to communicate what they have learned. Each type of assignment requires that teachers read and respond to student writing differently. Young elaborates on the pedagogical purposes of writing to learn and writing to communicate, offering teaching strategies and classroom activities for each.

Zerbe, Michael. Composition and the Rhetoric of Science: Engaging the Dominant Discourse. Carbondale: Southern Illinois University Press, 2007.
Zerbe argues that scientific discourse is the major discourse of our time. Compositionists cannot claim to help our students develop the skills and mentalities needed for meaningful civic participation, a goal often strived for by rhetoric and composition programs, if we do not teach them to understand and engage the complexity of scientific discourse.
Not only should students and teachers in composition classrooms work toward a more sophisticated "scientific literacy," but instructors in scientific disciplines need to help students become conscious of the rhetorical, historical, economic, social, ethical and personal aspects of the discourse in which they are learning to communicate. Zerbe offers concrete scenarios and suggests specific ways teachers from composition and scientific disciplines can help students develop this vital literacy.
It's the end of term and you have to study for your art history exam… Make like Dewey the Library Cat and hit the books… The classic art history test and exam format is the dreaded two-slide comparison. Your professor is asking you a leading question about two works in order to get you to prove that you paid attention in class. The following are slide exam study tips that I wrote at the request of my students of Italian Renaissance art, but these concepts can apply to any period. Please note that each instructor has different exam rules and advice, so these are guidelines only.

How to study?
- Start early and review often.
- Make flashcards with artist, title, date, and relevant facts/points. What was said about the work in class may be relevant to developing your conclusion. If you do this, you'll come up with a better conclusion than if you just base your statement on visual elements.
- Think about which images will show up as comparisons (see below).
- Try to practise these comparisons by thinking about similarities and differences between the works.
- If you have trouble memorizing dates, think about works in relation to each other, and/or in relation to a few "key" dates. If you're familiar with style, you can often deduce the date based on this information. Make a timeline (there's a small sketch of one way to do this at the end of the post).
- Find out what your professor accepts in terms of date range. In my classes, for most questions, rounding the date to the DECADE is acceptable.

What two images make a good comparison?
- Two works of the same subject, so you can see subject matter treated differently by two artists or in two media.
- Two works by the same artist at different points in his career.
- Similar composition but different subject.
- Works by artists who are closely related for one reason or another (for example, teacher/student).
- Works with something in common like perspective or naturalism or theory or ???
- Two works that permit you to learn something about one or both of them — for example, the image that is the source of another, which might determine date or artistic influence.

How to develop a conclusion
Larger conclusions are better than smaller conclusions. These are some examples of what I mean:
- What these 2 works tell you about the period of art history studied.
- What they tell you about one artist's oeuvre, like changes within his career in terms of style, or x is earlier than y. NB: if you conclude this, your dates in the slide ID MUST correspond to your conclusion!!
- Relationships between 2 artists on a larger scale (x was influenced by y).
- Regional or time differences.
- Sometimes the conclusion might be about a specific piece of information, like x is the source of y (and hence is the earlier work).
- There are many more types of answers – think for yourself and try to relate the comparison to the course as a whole.
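If your flashcards already live in a spreadsheet or a text file, a few lines of code can sort them into the timeline described above and round each date to its decade. This is only a toy sketch of my own; the three entries are illustrative Italian Renaissance works, and you should verify every date against your own course notes.

```python
# Toy flashcard timeline: sort works by date and group by decade.
# The entries below are illustrative examples only; check dates against your notes.
flashcards = [
    {"artist": "Botticelli", "title": "Birth of Venus", "year": 1485},
    {"artist": "Leonardo da Vinci", "title": "Last Supper", "year": 1498},
    {"artist": "Michelangelo", "title": "David", "year": 1504},
]

def to_decade(year: int) -> int:
    """Round a year down to its decade (1485 -> 1480), matching the advice above."""
    return (year // 10) * 10

for card in sorted(flashcards, key=lambda c: c["year"]):
    decade = to_decade(card["year"])
    print(f'{decade}s: {card["artist"]}, {card["title"]} (c. {card["year"]})')
```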
The Buddha (fl. circa 450 BCE) is the individual whose teachings form the basis of the Buddhist tradition. These teachings, preserved in texts known as the Nikāyas or Āgamas, concern the quest for liberation from suffering. While the ultimate aim of the Buddha's teachings is thus to help individuals attain the good life, his analysis of the source of suffering centrally involves claims concerning the nature of persons, as well as how we acquire knowledge about the world and our place in it. These teachings formed the basis of a philosophical tradition that developed and defended a variety of sophisticated theories in metaphysics and epistemology. - 1. Buddha as Philosopher - 2. Core Teachings - 3. Non-Self - 4. Karma and Rebirth - 5. Attitude toward Reason - Academic Tools - Other Internet Resources - Related Entries This entry concerns the historical individual, traditionally called Gautama, who is identified by modern scholars as the founder of Buddhism. According to Buddhist teachings, there have been other Buddhas in the past, and there will be yet more in the future. The title ‘Buddha’, which literally means ‘awakened’, is conferred on an individual who discovers the path to nirvana, the cessation of suffering, and propagates that discovery so that others may also achieve nirvana. If the teaching that there have been other Buddhas is true, then Gautama is not the founder of Buddhism. This entry will follow modern scholarship in taking an agnostic stance on the question of whether there have been other Buddhas, and likewise for questions concerning the superhuman status and powers that some Buddhists attribute to Buddhas. The concern of this entry is just those aspects of the thought of the historical individual Gautama that bear on the development of the Buddhist philosophical tradition. The Buddha will here be treated as a philosopher. To so treat him is controversial, but before coming to why that should be so, let us first rehearse those basic aspects of the Buddha's life and teachings that are relatively non-controversial. Tradition has it that Gautama lived to age 80. Up until recently his dates were thought to be approximately 560–480 BCE, but many scholars now hold that he must have died around 405 BCE. He was born into a family of some wealth and power, members of the Śākya clan, in the area of the present border between India and Nepal. The story is that in early adulthood he abandoned his comfortable life as a householder (as well as his wife and young son) in order to seek a solution to the problem of existential suffering. He first took up with a number of different wandering ascetics (śramanas) who claimed to know the path to liberation from suffering. Finding their teachings unsatisfactory, he struck out on his own, and through a combination of insight and meditational practice attained the state of enlightenment (bodhi) which is said to represent the cessation of all further suffering. He then devoted the remaining 45 years of his life to teaching others the insights and techniques that had led him to this achievement. Gautama could himself be classified as one of the śramanas. That there existed such a phenomenon as the śramanas tells us that there was some degree of dissatisfaction with the customary religious practices then prevailing in the Gangetic basin of North India. These practices consisted largely in the rituals and sacrifices prescribed in the Vedas. 
Among the śramanas there were many, including the Buddha, who rejected the authority of the Vedas as definitive pronouncements on the nature of the world and our place in it (and for this reason are called ‘heterodox’). But within the Vedic canon itself there is a stratum of (comparatively late) texts, the Upaniṣads, that likewise displays disaffection with Brahmin ritualism. Among the new ideas that figure in these (‘orthodox’) texts, as well as in the teachings of those heterodox śramanas whose doctrines are known to us, are the following: that sentient beings (including humans, non-human animals, gods, and the inhabitants of various hells) undergo rebirth; that rebirth is governed by the causal laws of karma (good actions cause pleasant fruit for the agent, evil actions cause unpleasant fruit, etc.); that continual rebirth is inherently unsatisfactory; that there is an ideal state for sentient beings involving liberation from the cycle of rebirth; and that attaining this state requires overcoming ignorance concerning one's true identity. Various views are offered concerning this ignorance and how to overcome it. The Bhagavad Gītā (classified by some orthodox schools as an Upaniṣad) lists four such methods, and discusses at least two separate views concerning our identity: that there is a plurality of distinct selves, each being the true agent of a person's actions and the bearer of karmic merit and demerit but existing separately from the body and its associated states; and that there is just one self, of the nature of pure consciousness (a ‘witness’) and identical with the essence of the cosmos, Brahman or pure undifferentiated Being. The Buddha agreed with those of his contemporaries embarked on the same soteriological project that it is ignorance about our identity that is responsible for suffering. What sets his teachings apart (at this level of analysis) lies in what he says that ignorance consists in: the conceit that there is an ‘I’ and a ‘mine’. This is the famous Buddhist teaching of non-self (anātman). And it is with this teaching that the controversy begins concerning whether Gautama may legitimately be represented as a philosopher. First there are those who (correctly) point out that the Buddha never categorically denies the existence of a self that transcends what is empirically given, namely the five skandhas or psychophysical elements. While the Buddha does deny that any of the psychophysical elements is a self, these interpreters claim that he at least leaves open the possibility that there is a self that is transcendent in the sense of being non-empirical. To this it may be objected that all of classical Indian philosophy—Buddhist and orthodox alike—understood the Buddha to have denied the self tout court. To this it is sometimes replied that the later philosophical tradition simply got the Buddha wrong, at least in part because the Buddha sought to indicate something that cannot be grasped through the exercise of philosophical rationality. On this interpretation, the Buddha should be seen not as a proponent of the philosophical methods of analysis and argumentation, but rather as one who sees those methods as obstacles to final release. Another reason one sometimes encounters for denying that the Buddha is a philosopher is that he rejects the characteristically philosophical activity of theorizing about matters that lack evident practical application. 
On this interpretation as well, those later Buddhist thinkers who did go in for the construction of theories about the ultimate nature of everything simply failed to heed or properly appreciate the Buddha's advice that we avoid theorizing for its own sake and confine our attention to those matters that are directly relevant to liberation from suffering. On this view the teaching of non-self is not a bit of metaphysics, just some practical advice to the effect that we should avoid identifying with things that are transitory and so bound to yield dissatisfaction. What both interpretations share is the assumption that it is possible to arrive at what the Buddha himself thought without relying on the understanding of his teachings developed in the subsequent Buddhist philosophical tradition. This assumption may be questioned. Our knowledge of the Buddha's teachings comes by way of texts that were not written down until several centuries after his death, are in languages (Pāli, and Chinese translations of Sanskrit) other than the one he is likely to have spoken, and disagree in important respects. The first difficulty may not be as serious as it seems, given that the Buddha's discourses were probably rehearsed shortly after his death and preserved through oral transmission until the time they were committed to writing. And the second need not be insuperable either. But the third is troubling, in that it suggests textual transmission involved processes of insertion and deletion in aid of one side or another in sectarian disputes. Our ancient sources attest to this: one will encounter a dispute among Buddhist thinkers where one side cites some utterance of the Buddha in support of their position, only to have the other side respond that the text from which the quotation is taken is not universally recognized as authoritatively the word of the Buddha. This suggests that our record of the Buddha's teaching may be colored by the philosophical elaboration of those teachings propounded by later thinkers in the Buddhist tradition. Some scholars are more sanguine than others about the possibility of overcoming this difficulty, and thereby getting at what the Buddha himself had thought, as opposed to what later Buddhist philosophers thought he had thought. No position will be taken on this dispute here. We will be treating the Buddha's thought as it was understood within the later philosophical tradition that he had inspired. The resulting interpretation may or may not be faithful to his intentions. It is at least logically possible that he believed there to be a transcendent self that can only be known by mystical intuition, or that the exercise of philosophical rationality leads only to sterile theorizing and away from real emancipation. What we can say with some assurance is that this is not how the Buddhist philosophical tradition understood him. It is their understanding that will be the subject of this essay. The Buddha's basic teachings are usually summarized using the device of the Four Noble Truths: - There is suffering. - There is the origination of suffering. - There is the cessation of suffering. - There is a path to the cessation of suffering. The first of these claims might seem obvious, even when ‘suffering’ is understood to mean not mere pain but existential suffering, the sort of frustration, alienation and despair that arise out of our experience of transitoriness. 
But there are said to be different levels of appreciation of this truth, some quite subtle and difficult to attain; the highest of these is said to involve the realization that everything is of the nature of suffering. Perhaps it is sufficient for present purposes to point out that while this is not the implausible claim that all of life's states and events are necessarily experienced as unsatisfactory, still the realization that all (oneself included) is impermanent can undermine a precondition for real enjoyment of the events in a life: that such events are meaningful by virtue of their having a place in an open-ended narrative. It is with the development and elaboration of (2) that substantive philosophical controversy begins. (2) is the simple claim that there are causes and conditions for the arising of suffering. (3) then makes the obvious point that if the origination of suffering depends on causes, future suffering can be prevented by bringing about the cessation of those causes. (4) specifies a set of techniques that are said to be effective in such cessation. Much then hangs on the correct identification of the causes of suffering. The answer is traditionally spelled out in a list consisting of twelve links in a causal chain that begins with ignorance and ends with suffering (represented by the states of old age, disease and death). Modern scholarship has established that this list is a later compilation. For the texts that claim to convey the Buddha's own teachings give two slightly different formulations of this list, and shorter formulations containing only some of the twelve items are also found in the texts. But it seems safe to say that the Buddha taught an analysis of the origins of suffering roughly along the following lines: given the existence of a fully functioning assemblage of psychophysical elements (the parts that make up a sentient being), ignorance concerning the three characteristics of sentient existence—suffering, impermanence and non-self—will lead, in the course of normal interactions with the environment, to appropriation (the identification of certain elements as ‘I’ and ‘mine’). This leads in turn to the formation of attachments, in the form of desire and aversion, and the strengthening of ignorance concerning the true nature of sentient existence. These ensure future rebirth, and thus future instances of old age, disease and death, in a potentially unending cycle. The key to escape from this cycle is said to lie in realization of the truth about sentient existence—that it is characterized by suffering, impermanence and non-self. But this realization is not easily achieved, since acts of appropriation have already made desire, aversion and ignorance deeply entrenched habits of mind. Thus the measures specified in (4) include various forms of training designed to replace such habits with others that are more conducive to seeing things as they are. Training in meditation is also prescribed, as a way of enhancing one's observational abilities, especially with respect to one's own psychological states. Insight is cultivated through the use of these newly developed observational powers, as informed by knowledge acquired through the exercise of philosophical rationality. There is a debate in the later tradition as to whether final release can be attained through theoretical insight alone, through meditation alone, or only by using both techniques. 
Ch'an, for instance, is based on the premise that enlightenment can be attained through meditation alone, whereas Theravāda advocates using both but also holds that analysis alone may be sufficient for some. (This disagreement begins with a dispute over how to interpret D I.77–84.) The third option seems the most plausible, but the first is certainly of some interest given its suggestion that one can attain the ideal state for humans just by doing philosophy. The Buddha seems to have held (2) to constitute the core of his discovery. He calls his teachings a ‘middle path’ between two extreme views, and it is this claim concerning the causal origins of suffering that he identifies as the key to avoiding those extremes. The extremes are eternalism, the view that persons are eternal, and annihilationism, the view that persons go utterly out of existence (usually understood to mean at death, though a term still shorter than one lifetime is not ruled out). It will be apparent that eternalism requires the existence of the sort of self that the Buddha denies. What is not immediately evident is why the denial of such a self is not tantamount to the claim that the person is annihilated at death (or even sooner, depending on just how impermanent one takes the psychophysical elements to be). The solution to this puzzle lies in the fact that eternalism and annihilationism both share the presupposition that there is an ‘I’ whose existence might either extend beyond death or terminate at death. The idea of the ‘middle path’ is that all of life's continuities can be explained in terms of facts about a causal series of psychophysical elements. There being nothing more than a succession of these impermanent, impersonal events and states, the question of the ultimate fate of this ‘I’, the supposed owner of these elements, simply does not arise. This reductionist view of sentient beings was later articulated in terms of the distinction between two kinds of truth, conventional and ultimate. Each kind of truth has its own domain of objects, the things that are only conventionally real and the things that are ultimately real respectively. Conventionally real entities are those things that are accepted as real by common sense, but that turn out on further analysis to be wholes compounded out of simpler entities and thus not strictly speaking real at all. The stock example of a conventionally real entity is the chariot, which we take to be real only because it is more convenient, given our interests and cognitive limitations, to have a single name for the parts when assembled in the right way. Since our belief that there are chariots is thus due to our having a certain useful concept, the chariot is said to be a mere conceptual fiction. (This does not, however, mean that all conceptualization is falsification; only concepts that allow of reductive analysis lead to this artificial inflation of our ontology, and thus to a kind of error.) Ultimately real entities are those ultimate parts into which conceptual fictions are analyzable. An ultimately true statement is one that correctly describes how certain ultimately real entities are arranged. A conventionally true statement is one that, given how the ultimately real entities are arranged, would correctly describe certain conceptual fictions if they also existed. 
The ultimate truth concerning the relevant ultimately real entities helps explain why it should turn out to be useful to accept conventionally true statements (such as ‘King Milinda rode in a chariot’) when the objects described in those statements are mere fictions. Using this distinction between the two truths, the key insight of the ‘middle path’ may be expressed as follows. The ultimate truth about sentient beings is just that there is a causal series of impermanent, impersonal psychophysical elements. Since these are all impermanent, and lack other properties that would be required of an essence of the person, none of them is a self. But given the right arrangement of such entities in a causal series, it is useful to think of them as making up one thing, a person. It is thus conventionally true that there are persons, things that endure for a lifetime and possibly (if there is rebirth) longer. This is conventionally true because generally speaking there is more overall happiness and less overall pain and suffering when one part of such a series identifies with other parts of the same series. For instance, when the present set of psychophysical elements identifies with future elements, it is less likely to engage in behavior (such as smoking) that results in present pleasure but far greater future pain. The utility of this convention is, however, limited. Past a certain point—namely the point at which we take it too seriously, as more than just a useful fiction—it results in existential suffering. The cessation of suffering is attained by extirpating all sense of an ‘I’ that serves as agent and owner.

The Buddha's ‘middle path’ strategy can be seen as one of first arguing that there is nothing that the word ‘I’ genuinely denotes, and then explaining that our erroneous sense of an ‘I’ stems from our employment of the useful fiction represented by the concept of the person. While the second part of this strategy only receives its full articulation in the later development of the theory of two truths, the first part can be found in the Buddha's own teachings, in the form of several philosophical arguments for non-self. Best known among these is the argument from impermanence (S III.66–8), which has this basic structure:
- If there were a self it would be permanent.
- None of the five kinds of psychophysical element is permanent.
- ∴ There is no self.

It is the fact that this argument does not contain a premise explicitly asserting that the five skandhas (classes of psychophysical element) are exhaustive of the constituents of persons, plus the fact that these are all said to be empirically observable, that leads some to claim that the Buddha did not intend to deny the existence of a self tout court. There is, however, evidence that the Buddha was generally hostile toward attempts to establish the existence of unobservable entities. In the Poṭṭhapāda Sutta (D I.178–203), for instance, the Buddha compares someone who posits an unseen seer in order to explain our introspective awareness of cognitions, to a man who has conceived a longing for the most beautiful woman in the world based solely on the thought that such a woman must surely exist. And in the Tevijja Sutta (D I.235–52), the Buddha rejects the claim of certain Brahmins to know the path to oneness with Brahman, on the grounds that no one has actually observed this Brahman. This makes more plausible the assumption that the argument has as an implicit premise the claim that there is no more to the person than the five skandhas.
Premise (1) appears to be based on the assumption that persons undergo rebirth, together with the thought that one function of a self would be to account for diachronic personal identity. By ‘permanent’ is here meant continued existence over at least several lives. This is shown by the fact that the Buddha rules out the body as a self on the grounds that the body exists for just one lifetime. (This also demonstrates that the Buddha did not mean by ‘impermanent’ what some later Buddhist philosophers meant, viz., existing for just a moment; the Buddhist doctrine of momentariness represents a later development.) The mental entities that make up the remaining four types of psychophysical element might seem like more promising candidates, but these are ruled out on the grounds that these all originate in dependence on contact between sense faculty and object, and last no longer than a particular sense-object-contact event. That he listed five kinds of psychophysical element, and not just one, shows that the Buddha embraced a kind of dualism. But this strategy for demonstrating the impermanence of the psychological elements shows that his dualism was not the sort of mind-body dualism familiar from substance ontologies like those of Descartes and of the Nyāya school of orthodox Indian philosophy. Instead of seeing the mind as the persisting bearer of such transient events as occurrences of cognition, feeling and volition, he treats ‘mind’ as a kind of aggregate term for bundles of transient mental events. These events being impermanent, they too fail to account for diachronic personal identity in the way in which a self might be expected to. Another argument for non-self, which might be called the argument from control (S III.66–8), has this structure: - If there were a self, one could never desire that it be changed. - Each of the five kinds of psychophysical element is such that one can desire that it be changed. - ∴ There is no self. Premise (1) is puzzling. It appears to presuppose that the self should have complete control over itself, so that it would effortlessly adjust its state to its desires. That the self should be thought of as the locus of control is certainly plausible. Those Indian self-theorists who claim that the self is a mere passive witness recognize that the burden of proof is on them to show that the self is not an agent. But it seems implausibly demanding to require of the self that it have complete control over itself. We do not require that vision see itself if it is to see other things. The case of vision suggests an alternative interpretation, however. We might hold that vision does not see itself for the reason that this would violate an irreflexivity principle, to the effect that an entity cannot operate on itself. Indian philosophers who accept this principle cite supportive instances such as the knife that cannot cut itself and the finger-tip that cannot touch itself. If this principle is accepted, then if the self were the locus of control it would follow that it could never exercise this function on itself. A self that was the controller could never find itself in the position of seeking to change its state to one that it deemed more desirable. On this interpretation, the first premise seems to be true. And there is ample evidence that (2) is true: it is difficult to imagine a bodily or psychological state over which one might not wish to exercise control. 
Consequently, given the assumption that the person is wholly composed of the psychophysical elements, it appears to follow that a self of this description does not exist. These two arguments appear, then, to give good reason to deny a self that might ground diachronic personal identity and serve as locus of control, given the assumption that there is no more to the person than the empirically given psychophysical elements. But it now becomes something of a puzzle how one is to explain diachronic personal identity and agency. To start with the latter, does the argument from control not suggest that control must be exercised by something other than the psychophysical elements? This was precisely the conclusion of the Sāṃkhya school of orthodox Indian philosophy. One of their arguments for the existence of a self was that it is possible to exercise control over all the empirically given constituents of the person; while they agree with the Buddha that a self is never observed, they take the phenomena of agency to be grounds for positing a self that transcends all possible experience. This line of objection to the Buddha's teaching of non-self is more commonly formulated in response to the argument from impermanence, however. Perhaps its most dramatic form is aimed at the Buddha's acceptance of the doctrines of karma and rebirth. It is clear that the body ceases to exist at death. And given the Buddha's argument that mental states all originate in dependence on sense-object contact events, it seems no psychological constituent of the person can transmigrate either. Yet the Buddha claims that persons who have not yet achieved enlightenment will be reborn as sentient beings of some sort after they die. If there is no constituent whatever that moves from one life to the next, how could the being in the next life be the same person as the being in this life? This question becomes all the more pointed when it is added that rebirth is governed by karma, something that functions as a kind of cosmic justice: those born into fortunate circumstances do so as a result of good deeds in prior lives, while unpleasant births result from evil past deeds. Such a system of reward and punishment could be just only if the recipient of pleasant or unpleasant karmic fruit is the same person as the agent of the good or evil action. And the opponent finds it incomprehensible how this could be so in the absence of a persisting self. It is not just classical Indian self-theorists who have found this objection persuasive. Some Buddhists have as well. Among these Buddhists, however, this has led to the rejection not of non-self but of rebirth. (Historically this response was not unknown among East Asian Buddhists, and it is not rare among Western Buddhists today.) The evidence that the Buddha himself accepted rebirth and karma seems quite strong, however. The later tradition would distinguish between two types of discourse in the body of the Buddha's teachings: those intended for an audience of householders seeking instruction from a sage, and those intended for an audience of monastic renunciates already versed in his teachings. And it would be one thing if his use of the concepts of karma and rebirth were limited to the former. For then such appeals could be explained away as another instance of the Buddha's pedagogical skill (commonly referred to as upāya). 
The idea would be that householders who fail to comply with the most basic demands of morality are not likely (for reasons to be discussed shortly) to make significant progress toward the cessation of suffering, and the teaching of karma and rebirth, even if not strictly speaking true, does give those who accept it a (prudential) reason to be moral. But this sort of ‘noble lie’ justification for the Buddha teaching a doctrine he does not accept fails in the face of the evidence that he also taught it to quite advanced monastics (e.g., A III.33). And what he taught is not the version of karma popular in certain circles today, according to which, for instance, an act done out of hatred makes the agent somewhat more disposed to perform similar actions out of similar motives in the future, which in turn makes negative experiences more likely for the agent. What the Buddha teaches is instead the far stricter view that each action has its own specific consequence for the agent, the hedonic nature of which is determined in accordance with causal laws and in such a way as to require rebirth as long as action continues. So if there is a conflict between the doctrine of non-self and the teaching of karma and rebirth, it is not to be resolved by weakening the Buddha's commitment to the latter. The Sanskrit term karma literally means ‘action’. What is nowadays referred to somewhat loosely as the theory of karma is, speaking more strictly, the view that there is a causal relationship between action (karma) and ‘fruit’ (phala), the latter being an experience of pleasure, pain or indifference for the agent of the action. This is the view that the Buddha appears to have accepted in its most straightforward form. Actions are said to be of three types: bodily, verbal and mental. The Buddha insists, however, that by action is meant not the movement or change involved, but rather the volition or intention that brought about the change. As Gombrich (2009) points out, the Buddha's insistence on this point reflects the transition from an earlier ritualistic view of action to a view that brings action within the purview of ethics. For it is when actions are seen as subject to moral assessment that intention becomes relevant. One does not, for instance, perform the morally blameworthy action of speaking insultingly to an elder just by making sounds that approximate to the pronunciation of profanities in the presence of an elder; parrots and prelinguistic children can do as much. What matters for moral assessment is the mental state (if any) that produced the bodily, verbal or mental change. And it is the occurrence of these mental states that is said to cause the subsequent occurrence of hedonically good, bad and neutral experiences. More specifically, it is the occurrence of the three ‘defiled’ mental states that brings about karmic fruit. The three defilements (kleśas) are desire, aversion and ignorance. And we are told quite specifically (A III.33) that actions performed by an agent in whom these three defilements have been destroyed do not have karmic consequences; such an agent is experiencing their last birth. Some caution is required in understanding this claim about the defilements. The Buddha seems to be saying that it is possible to act not only without ignorance, but also in the absence of desire or aversion, yet it is difficult to see how there could be intentional action without some positive or negative motivation. 
To see one's way around this difficulty, one must realize that by ‘desire’ and ‘aversion’ are meant those positive and negative motives respectively that are colored by ignorance, viz. ignorance concerning suffering, impermanence and non-self. Presumably the enlightened person, while knowing the truth about these matters, can still engage in motivated action. Their actions are not based on the presupposition that there is an ‘I’ for which those actions can have significance. Ignorance concerning these matters perpetuates rebirth, and thus further occasions for existential suffering, by facilitating a motivational structure that reinforces one's ignorance. We can now see how compliance with common-sense morality could be seen as an initial step on the path to the cessation of suffering. While the presence of ignorance makes all action—even that deemed morally good—karmically potent, those actions commonly considered morally evil are especially powerful reinforcers of ignorance, in that they stem from the assumption that the agent's welfare is of paramount importance. While recognition of the moral value of others may still involve the conceit that there is an ‘I’, it can nonetheless constitute progress toward dissolution of the sense of self. This excursus into what the Buddha meant by karma may help us see how his middle path strategy could be used to reply to the objection to non-self from rebirth. That objection was that the reward and punishment generated by karma across lives could never be deserved in the absence of a transmigrating self. The middle path strategy generally involves locating and rejecting an assumption shared by a pair of extreme views. In this case the views will be (1) that the person in the later life deserves the fruit generated by the action in the earlier life, and (2) that this person does not deserve the fruit. One assumption shared by (1) and (2) is that persons deserve reward and punishment depending on the moral character of their actions, and one might deny this assumption. But that would be tantamount to moral nihilism, and a middle path is said to avoid nihilisms (such as annihilationism). A more promising alternative might be to deny that there are ultimately such things as persons that could bear moral properties like desert. This is what the Buddha seems to mean when he asserts that the earlier and the later person are neither the same nor different (S II.62; S II.76; S II.113). Since any two existing things must be either identical or distinct, to say of the two persons that they are neither is to say that strictly speaking they do not exist. This alternative is more promising because it avoids moral nihilism. For it allows one to assert that persons and their moral properties are conventionally real. To say this is to say that given our interests and cognitive limitations, we do better at achieving our aim—minimizing overall pain and suffering—by acting as though there are persons with morally significant properties. Ultimately there are just impersonal entities and events in causal sequence: ignorance, the sorts of desires that ignorance facilitates, an intention formed on the basis of such a desire, a bodily, verbal or mental action, a feeling of pleasure, pain or indifference, and an occasion of suffering. 
The claim is that this situation is usefully thought of as, for instance, a person who performs an evil deed due to their ignorance of the true nature of things, receives the unpleasant fruit they deserve in the next life, and suffers through their continuing on the wheel of saṃsāra. It is useful to think of the situation in this way because it helps us locate the appropriate places to intervene to prevent future pain (the evil deed) and future suffering (ignorance). It is no doubt quite difficult to believe that karma and rebirth exist in the form that the Buddha claims. It is said that their existence can be confirmed by those who have developed the power of retrocognition through advanced yogic technique. But this is of little help to those not already convinced that meditation is a reliable means of knowledge. What can be said with some assurance is that karma and rebirth are not inconsistent with non-self. Rebirth without transmigration is logically possible. When the Buddha says that a person in one life and the person in another life are neither the same nor different, one's first response might be to take ‘different’ to mean something other than ‘not the same’. But while this is possible in English given the ambiguity of ‘the same’, it is not possible in the Pāli source, where the Buddha is represented as unambiguously denying both numerical identity and numerical distinctness. This has led some to wonder whether the Buddha does not employ a deviant logic. Such suspicions are strengthened by those cases where the options are not two but four, cases of the so-called tetralemma (catuṣkoṭi). For instance, when the Buddha is questioned about the post-mortem status of the enlightened person or arhat (e.g., at M I.483–8) the possibilities are listed as: (1) the arhat continues to exist after death, (2) does not exist after death, (3) both exists and does not exist after death, and (4) neither exists nor does not exist after death. When the Buddha rejects both (1) and (2) we get a repetition of ‘neither the same nor different’. But when he goes on to entertain, and then reject, (3) and (4) the logical difficulties are compounded. Since each of (3) and (4) appears to be formally contradictory, to entertain either is to entertain the possibility that a contradiction might be true. And their denial seems tantamount to affirmation of excluded middle, which is prima facie incompatible with the denial of both (1) and (2). One might wonder whether we are here in the presence of the mystical. There were some Buddhist philosophers who took ‘neither the same nor different’ in this way. These were the Personalists (Pudgalavādins), who were so called because they affirmed the ultimate existence of the person as something named and conceptualized in dependence on the psychophysical elements. They claimed that the person is neither identical with nor distinct from the psychophysical elements. They were prepared to accept, as a consequence, that nothing whatever can be said about the relation between person and elements. But their view was rejected by most Buddhist philosophers, in part on the grounds that it quickly leads to an ineffability paradox: one can say neither that the person's relation to the elements is inexpressible, nor that it is not inexpressible. The consensus view was instead that the fact that the person can be said to be neither identical with nor distinct from the elements is grounds for taking the person to be a mere conceptual fiction. 
Concerning the persons in the two lives, they understood the negations involved in ‘neither the same nor different’ to be of the commitmentless variety, i.e., to function like illocutionary negation. If we agree that the statement ‘7 is green’ is semantically ill-formed, on the grounds that abstract objects such as numbers do not have colors, then we might go on to say, ‘Do not say that 7 is green, and do not say that it is not green either’. There is no contradiction here, since the illocutionary negation operator ‘do not say’ generates no commitment to an alternative characterization. There is also evidence that claims of type (3) involve parameterization. For instance, the claim about the arhat would be that there is some respect in which they can be said to exist after death, and some other respect in which they can be said to no longer exist after death. Entertaining such a proposition does not require that one believe there might be true contradictions. And while claims of type (4) would seem to be logically equivalent to those of type (3) (regardless of whether or not they involve parameterization), the tradition treated this type as asserting that the subject is beyond all conceptualization. To reject the type (4) claim about the arhat is to close off one natural response to the rejections of the first three claims: that the status of the arhat after death transcends rational understanding. That the Buddha rejected all four possibilities concerning this and related questions is not evidence that he employed a deviant logic. The Buddha's response to questions like those concerning the arhat is sometimes cited in defense of a different claim about his attitude toward rationality. This is the claim that the Buddha was essentially a pragmatist, someone who rejects philosophical theorizing for its own sake and employs philosophical rationality only to the extent that doing so can help solve the practical problem of eliminating suffering. The Buddha does seem to be embracing something like this attitude when he defends his refusal to answer questions like that about the arhat, or whether the series of lives has a beginning, or whether the living principle (jīva) is identical with the body. He calls all the possible views with respect to such questions distractions insofar as answering them would not lead to the cessation of the defilements and thus to the end of suffering. And in a famous simile (M I.429) he compares someone who insists that the Buddha answer these questions to someone who has been wounded by an arrow but will not have the wound treated until they are told who shot the arrow, what sort of wood the arrow is made of, and the like. Passages such as these surely attest to the great importance the Buddha placed on sharing his insights to help others overcome suffering. But this is consistent with the belief that philosophical rationality may be used to answer questions that lack evident connection with pressing practical concerns. And on at least one occasion the Buddha does just this. Pressed to give his answers to the questions about the arhat and the like, the Buddha first rejects all the possibilities of the tetralemma, and defends his refusal on the grounds that such theories are not conducive to liberation from saṃsāra. But when his questioner shows signs of thereby losing confidence in the value of the Buddha's teachings about the path to the cessation of suffering, the Buddha responds with the example of a fire that goes out after exhausting its fuel. 
If one were asked where this fire has gone, the Buddha points out, one could consistently deny that it has gone to the north, to the south, or in any other direction. This is so for the simple reason that the questions ‘Has it gone to the north?’, ‘Has it gone to the south?’, etc., all share the false presupposition that the fire continues to exist. Likewise the questions about the arhat and the like all share the false presupposition that there is such a thing as a person who might either continue to exist after death, cease to exist at death, etc. The difficulty with these questions is not that they try to extend philosophical rationality beyond its legitimate domain, as the handmaiden of soteriologically useful practice. It is rather that they rest on a false presupposition—something that is disclosed through the employment of philosophical rationality. A different sort of challenge to the claim that the Buddha valued philosophical rationality for its own sake comes from the role played by authority in Buddhist soteriology. For instance, in the Buddhist tradition one sometimes encounters the claim that only enlightened persons such as the Buddha can know all the details of karmic causation. And to the extent that the moral rules are thought to be determined by the details of karmic causation, this might be taken to mean that our knowledge of the moral rules is dependent on the authority of the Buddha. Again, the subsequent development of Buddhist philosophy seems to have been constrained by the need to make theory compatible with certain key claims of the Buddha. For instance, one school developed an elaborate form of four-dimensionalism, not because of any deep dissatisfaction with presentism, but because they believed the non-existence of the past and the future to be incompatible with the Buddha's alleged ability to cognize past and future events. And some modern scholars go so far as to wonder whether non-self functions as anything more than a sort of linguistic taboo against the use of words like ‘I’ and ‘self’ in the Buddhist tradition (Collins 1982: 183). The suggestion is that just as in some other religious traditions the views of the founder or the statements of scripture trump all other considerations, including any views arrived at through the free exercise of rational inquiry, so in Buddhism as well there can be at best only a highly constrained arena for the deployment of philosophical rationality. Now it could be that while this is true of the tradition that developed out of the Buddha's teachings, the Buddha himself held the unfettered use of rationality in quite high esteem. This would seem to conflict with what he is represented as saying in response to the report that he arrived at his conclusions through reasoning and analysis alone: that such a report is libelous, since he possesses a number of superhuman cognitive powers (M I.68). But at least some scholars take this passage to be not the Buddha's own words but an expression of later devotionalist concerns (Gombrich 2009: 164). Indeed one does find a spirited discussion within the tradition concerning the question whether the Buddha is omniscient, a discussion that may well reflect competition between Buddhism and those Brahmanical schools that posit an omniscient creator. And at least for the most part the Buddhist tradition is careful not to attribute to the Buddha the sort of omniscience usually ascribed to an all-perfect being: the actual cognition, at any one time, of all truths. 
Instead a Buddha is said to be omniscient only in the much weaker sense of always having the ability to cognize any individual fact relevant to the soteriological project, viz. the details of their own past lives, the workings of the karmic causal laws, and whether a given individual's defilements have been extirpated. Moreover, these abilities are said to be ones that a Buddha acquires through a specific course of training, and thus ones that others may reasonably aspire to as well. The attitude of the later tradition seems to be that while one could discover the relevant facts on one's own, it would be more reasonable to take advantage of the fact that the Buddha has already done all the epistemic labor involved. When we arrive in a new town we could always find our final destination through trial and error, but it would make more sense to ask someone who already knows their way about.

The Buddhist philosophical tradition grew out of earlier efforts to systematize the Buddha's teachings. Within a century or two of the death of the Buddha, exegetical differences led to debates concerning the Buddha's true intention on some matter, such as that between the Personalists and others over the status of the person. While the parties to these debates use many of the standard tools and techniques of philosophy, they were still circumscribed by the assumption that the Buddha's views on the matter at hand are authoritative. In time, however, the discussion widened to include interlocutors representing various Brahmanical systems. Since the latter did not take the Buddha's word as authoritative, Buddhist thinkers were required to defend their positions in other ways. The resulting debate (which continued for about nine centuries) touched on most of the topics now considered standard in metaphysics, epistemology and philosophy of language, and was characterized by considerable sophistication in philosophical methodology. What the Buddha would have thought of these developments we cannot say with any certainty. What we can say is that many Buddhists have believed that the unfettered exercise of philosophical rationality is quite consistent with his teachings.

- [A] Anguttara Nikāya: The Book of the Gradual Sayings, trans. F. L. Woodward & E. M. Hare, 5 volumes, Bristol: Pali Text Society, 1932–6.
- [D] Dīgha Nikāya: The Long Discourses of the Buddha: A Translation of the Dīgha Nikāya, trans. Maurice Walshe, Boston: Wisdom Publications, 1987.
- [M] Majjhima Nikāya: The Middle Length Discourses of the Buddha: A Translation of the Majjhima Nikāya, trans. Bhikkhu Ñāṇamoli and Bhikkhu Bodhi, Boston: Wisdom Publications, 1995.
- [S] Saṃyutta Nikāya: The Connected Discourses of the Buddha, trans. Bhikkhu Bodhi, Boston: Wisdom Publications, 2000.
- Albahari, Miri, 2006. Analytical Buddhism, Basingstoke: Palgrave Macmillan.
- Albahari, Miri, 2014. ‘Insight Knowledge of No Self in Buddhism: An Epistemic Analysis,’ Philosophers' Imprint, 14(1): 1–30, available online.
- Collins, Stephen, 1982. Selfless Persons, Cambridge: Cambridge University Press.
- Gethin, Rupert, 1998. The Foundations of Buddhism, Oxford: Oxford University Press.
- Gombrich, Richard F., 1996. How Buddhism Began, London: Athlone.
- Gombrich, Richard F., 2009. What the Buddha Thought, London: Equinox.
- Gowans, Christopher, 2003. Philosophy of the Buddha, London: Routledge.
- Harvey, Peter, 1995. The Selfless Mind, Richmond, UK: Curzon.
- Jayatilleke, K.N., 1963. Early Buddhist Theory of Knowledge, London: George Allen and Unwin.
- Rahula, Walpola, 1967. What the Buddha Taught, 2nd ed., London: Unwin.
- Ronkin, Noa, 2005. Early Buddhist Metaphysics, London: Routledge.
- Ruegg, David Seyfort, 1977. ‘The Uses of the Four Positions of the Catuṣkoṭi and the Problem of the Description of Reality in Mahāyāna Buddhism,’ Journal of Indian Philosophy, 5: 1–71.
- Siderits, Mark, 2007. Buddhism As Philosophy, Indianapolis: Hackett.
Filling the interior of a shape is a two-step process:

1. First, tell the Graphics2D how to fill shapes with a call to setPaint(). This method accepts any object that implements the java.awt.Paint interface. The Graphics2D stores the Paint away as part of its state. When it comes time to fill a shape, Graphics2D will use the Paint to determine what colors should be used to fill the shape. The 2D API comes with three kinds of "canned" paints: solid colors, a linear color gradient, and a texture fill. You can add your own Paint implementations if you wish.

2. Now you can tell Graphics2D to fill a shape by passing it to fill().

Paints are immutable, which means they can't be modified after they are created. The reason for this is to avoid funky behavior when rendering. Imagine, for example, if you wanted to fill a series of shapes with a solid color. First, you'd call setPaint() on the Graphics2D; then you would paint the shapes using fill(). But what if another part of your program changed the Paint that Graphics2D was using? The results might be quite bizarre. For this reason, objects that implement Paint should not allow themselves to be changed after they are created.

Figure 4.2 shows the three types of painting supported by the 2D API. The figure contains three shapes: The ellipse is filled with a solid color. The rounded rectangle is filled with a color gradient. The arc is filled with a texture, built from van Gogh's Starry Night.
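As a rough illustration of the three "canned" Paint types, here is a minimal sketch of a Swing paintComponent() method that fills an ellipse with a solid Color, a rounded rectangle with a GradientPaint, and an arc with a TexturePaint. The texture is built from a small procedurally generated checkerboard image (used here in place of the Starry Night image, which is not included); the particular colors and coordinates are arbitrary choices for the example.

```java
import java.awt.*;
import java.awt.geom.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

public class PaintDemo extends JPanel {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;

        // 1. Solid color: set the Paint, then fill the shape.
        g2.setPaint(Color.ORANGE);
        g2.fill(new Ellipse2D.Double(10, 10, 120, 80));

        // 2. Linear gradient from blue (top-left) to white (bottom-right).
        g2.setPaint(new GradientPaint(150, 10, Color.BLUE, 270, 90, Color.WHITE));
        g2.fill(new RoundRectangle2D.Double(150, 10, 120, 80, 20, 20));

        // 3. Texture fill: tile a small checkerboard image across the shape.
        BufferedImage tile = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        Graphics2D tg = tile.createGraphics();
        tg.setPaint(Color.DARK_GRAY);
        tg.fillRect(0, 0, 16, 16);
        tg.setPaint(Color.YELLOW);
        tg.fillRect(0, 0, 8, 8);
        tg.fillRect(8, 8, 8, 8);
        tg.dispose();
        g2.setPaint(new TexturePaint(tile, new Rectangle2D.Double(0, 0, 16, 16)));
        g2.fill(new Arc2D.Double(290, 10, 120, 80, 45, 270, Arc2D.PIE));
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Paint demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new PaintDemo());
            frame.setSize(440, 140);
            frame.setVisible(true);
        });
    }
}
```

Because Paint objects are immutable, the same GradientPaint or TexturePaint instance could safely be reused across many fill() calls without risk of it being changed mid-render.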
INDOOR AIR COMFORT: Find out more about the ways we identify, react to and cope with odors and stuffiness...

THE PHYSIOLOGICAL ASPECT

The human nose is a very sophisticated organ. Testing instruments are still being developed to try to replicate the way it evaluates air quality – some successful, others less so. This is partly due to incomplete understanding of the way the brain processes information, but also to the high sensitivity of the human nose. We inhale and exhale an average of 12,000 litres of air per day. Our ability to assess the quality of this air involves our sense of smell, and our ability to sense irritants. Within each human nostril, there are two types of nerve fibers – the olfactory tissue and the trigeminal nerve – with 30 million receptor cells across a surface area measuring just 5 cm². While the olfactory tissue senses smell, the trigeminal nerve endings sense irritant aspects of chemicals in the air. When they're stimulated by pollutants, both nerve endings send information to the brain for interpretation, influencing both conscious perception (of bad smells, for example) and unconscious mood and emotions.

Building materials, furniture, human activities or bioeffluents from people and pets may all cause odors. Odors are generally generated by volatile molecules (Volatile Organic Compounds, or VOCs), or a combination of different ones, in such a concentration that they can be detected by the human nose. Although there's no evidence to prove that unpleasant odors are linked to adverse health effects, scientific research shows that they can cause mental distraction and may have a negative impact on mood and stress levels. We can all identify with the sensation of stuffiness in an indoor environment. It's often characterised by a headache and a feeling of fatigue. This results from the overall pollution load. Many different pollutants – which include odorous VOCs, products of incomplete combustion and bioeffluents – are present in indoor air at very low concentrations. Combined, however, they adversely affect wellbeing. The impacts of pollution and chemicals on performance and productivity have been quantified in the scientific literature and compiled in this booklet.

THE PHYSICAL ASPECT

Over the last couple of centuries, industries have developed, mechanical services have been introduced into buildings, synthetic materials have been invented, motorised transport has become standard and human activity has densified around the world's growing cities. The quality of air – both outdoors and indoors – has changed accordingly. The different indoor pollutants can be classified into two main categories: Physico-chemical pollutants – gases and vapours (inorganic and organic) and particulate matter, such as carbon dioxide, carbon monoxide, VOCs, particles, fibres, ozone, etc. Biological pollutants – microbiological (dust) particles floating in the air that originate from viruses, bacteria, mold, mites, insects, birds, mammals and pollen. These include allergens, endotoxins and mold (which can be both allergenic and toxic). Products present in a building can emit substances (particles and/or gases) that originate from the product itself (primary emissions), that are caused by coming into contact with other products, or that arise during the in-use phase of the product itself (secondary emissions).
Human exposure to indoor air pollutants is influenced by factors such as the ventilation rate within a building, air velocity, temperature, relative humidity, the activities taking place, and the frequency and duration of exposure. Eliminating risks at source or through good ventilation will help ensure the quality of the indoor air that we spend most of our lifetime breathing.
choudhury (fac489) – Post-lab 3 Chemical Formula – lyon – (51155)

This print-out should have 23 questions. Multiple-choice questions may continue on the next column or page – find all choices before answering.

001 10.0 points
What is the name of Na₂CO₃ · 10 H₂O?
1. sodium carbon trioxide decahydrate
2. sodium carbonate decahydrate (correct)
3. sodium carbonate nonahydrate
4. sodium (II) carbonate decahydrate
Explanation: Salt hydrates are named by naming the anhydrous salt formula unit and adding the name of the water of hydration. The name of the hydrate includes a prefix indicating the number of water molecules attached to the formula unit.

002 10.0 points
What is the name of CoCl₂ · 2 H₂O?
1. cobalt (I) chloride dihydrate
2. cobalt (II) chloride dihydrate (correct)
3. copper (I) chloride dihydrate
4. copper (II) chloride dihydrate
5. cobalt chloride dihydrate
Explanation: Write the name of the ionic compound, then write the corresponding prefix, which indicates the number of attached water molecules, in front of the word "hydrate". (See "naming hydrates" in the reading section of experiment 2.)

003 10.0 points
An element X combines with oxygen to form a compound of formula XO₂. If 24.0 grams of element X combines with exactly 16.0 grams of O to form this compound, what is the atomic weight of element X?
1. 16.0 amu
2. 24.0 amu
3. 12.0 amu
4. 284 amu
5. 48.0 amu (correct)
Explanation: m(X) = 24.0 g and m(O) = 16.0 g. We know from the formula XO₂ that one mole of X combines with exactly 2 moles of oxygen atoms. Converting 16 g of oxygen to moles using the atomic weight of oxygen from the periodic table: 16 g O × (1 mol O / 16 g O) = 1 mol O. One mole of O combines with exactly 1/2 mol of X: 1 mol O × (1 mol X / 2 mol O) = 0.5 mol X. We were given that 16 g of O combines exactly with 24 g of X, so 0.5 mol X has a mass of 24 g. The molar mass is therefore 24 g X / 0.5 mol X = 48 g/mol. The molar mass of an element in g/mol is numerically equal to the atomic mass in amu, so the atomic mass of X is 48 amu.

004 10.0 points
14 g of Si reacts with 16 g of O to produce SiO₂, the only product, with no reactant left over. How much SiO₂ would be produced from 39 g of Si and excess O?
1. 0.0120 g
2. 44.6 g
3. 83.6 g (correct)
4. 55 g
5. 73.1 g
Explanation: Initially, m(Si) = 14 g and m(O) = 16 g; the new sample has m(Si) = 39 g. By the law of conservation of mass, 14 g of Si and 16 g of O would produce 30 g of SiO₂, if no Si and O are remaining at the end of the reaction. By the law of definite proportions, the mass ratio of product SiO₂ to reactant Si is fixed at 30 g SiO₂ per 14 g Si, so 39 g Si × (30 g SiO₂ / 14 g Si) ≈ 83.6 g SiO₂.
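The arithmetic in questions 003 and 004 can be double-checked in a few lines of code. The sketch below simply mirrors the two explanations above; the class and variable names are invented for the example.

```java
public class PostLabCheck {
    public static void main(String[] args) {
        // Question 003: X combines with O as XO2, with 24.0 g X per 16.0 g O.
        double gramsX = 24.0;
        double gramsO = 16.0;
        double molO = gramsO / 16.0;      // atomic weight of O is ~16 g/mol
        double molX = molO / 2.0;         // formula XO2: 1 mol X per 2 mol O
        double molarMassX = gramsX / molX;
        System.out.println("Atomic weight of X = " + molarMassX + " amu"); // 48.0

        // Question 004: 14 g Si + 16 g O -> 30 g SiO2 (conservation of mass).
        double siO2PerGramSi = 30.0 / 14.0;   // fixed mass ratio (definite proportions)
        double yield = 39.0 * siO2PerGramSi;
        System.out.printf("SiO2 from 39 g Si = %.1f g%n", yield); // ~83.6
    }
}
```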
In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.). Mathematically, homogeneity has the connotation of invariance, as all components of the equation have the same degree of value whether or not each of these components is scaled to different values, for example, by multiplication or addition. Cumulative distribution fits this description: "the state of having identical cumulative distribution functions or values".

The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as "constituents" of the material, but may be defined as a homogeneous material when assigned a function. For example, asphalt paves our roads, but it is a composite material consisting of asphalt binder and mineral aggregate that is laid down in layers and compacted. However, homogeneity of materials does not necessarily mean isotropy. In the previous example, a composite material may not be isotropic. In another context, a material is not homogeneous in so far as it is composed of atoms and molecules. However, at the normal level of our everyday world, a pane of glass or a sheet of metal is described simply as glass or stainless steel. In other words, these are each described as a homogeneous material. A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of the same units on both sides; homogeneity in space implies conservation of momentum; and homogeneity in time implies conservation of energy.

An example in the context of composite metals is an alloy. A blend of a metal with one or more metallic or nonmetallic materials is an alloy. The components of an alloy do not combine chemically but, rather, are very finely mixed. An alloy might be homogeneous or might contain small particles of components that can be viewed with a microscope. Brass is an example of an alloy, being a homogeneous mixture of copper and zinc. Another example is steel, which is an alloy of iron with carbon and possibly other metals. The purpose of alloying is to produce desired properties in a metal that naturally lacks them. Brass, for example, is harder than copper and has a more gold-like color. Steel is harder than iron and can even be made rust proof (stainless steel).

Homogeneity, in another context, plays a role in cosmology. From the perspective of 19th-century cosmology (and before), the universe was infinite, unchanging, homogeneous, and therefore filled with stars. However, German astronomer Heinrich Olbers asserted that if this were true, then the entire night sky would be filled with light and as bright as day; this is known as Olbers' paradox. Olbers presented a technical paper in 1826 that attempted to answer this conundrum. The faulty premise, not recognized as such in Olbers' time, was the assumption that the universe is infinite, static, and homogeneous. The Big Bang cosmology replaced this model with an expanding, finite, and inhomogeneous universe. However, modern astronomers supply reasonable explanations to answer this question.
One of at least several explanations is that distant stars and galaxies are red shifted, which weakens their apparent light and makes the night sky dark. However, the weakening is not sufficient to actually explain Olbers' paradox. Many cosmologists think that the fact that the Universe is finite in time, that is, that the Universe has not been around forever, is the solution to the paradox. The fact that the night sky is dark is thus an indication for the Big Bang.

By translation invariance, one means independence of (absolute) position, especially when referring to a law of physics, or to the evolution of a physical system. Fundamental laws of physics should not (explicitly) depend on position in space. That would make them quite useless. In some sense, this is also linked to the requirement that experiments should be reproducible. This principle is true for all laws of mechanics (Newton's laws, etc.), electrodynamics, quantum mechanics, etc. In practice, this principle is usually violated, since one studies only a small subsystem of the universe, which of course "feels" the influence of the rest of the universe. This situation gives rise to "external fields" (electric, magnetic, gravitational, etc.) which make the description of the evolution of the system depend on position (potential wells, etc.). This only stems from the fact that the objects creating these external fields are not considered as (a "dynamical") part of the system.

Translational invariance as described above is equivalent to shift invariance in system analysis, although here it is most commonly used in linear systems, whereas in physics the distinction is not usually made. The notion of isotropy, for properties independent of direction, is not a consequence of homogeneity. For example, a uniform electric field (i.e., one which has the same strength and the same direction at each point) would be compatible with homogeneity (at each point physics will be the same), but not with isotropy, since the field singles out one "preferred" direction.

In Lagrangian formalism, homogeneity in space implies conservation of momentum, and homogeneity in time implies conservation of energy. This is shown, using variational calculus, in standard textbooks like the classical reference [Landau & Lifshitz] cited below. This is a particular application of Noether's theorem.

As said in the introduction, dimensional homogeneity is the quality of an equation having quantities of the same units on both sides. A valid equation in physics must be homogeneous, since equality cannot apply between quantities of different nature. This can be used to spot errors in formulae or calculations. For example, if one is calculating a speed, units must always combine to [length]/[time]; if one is calculating an energy, units must always combine to [mass]•[length]²/[time]², etc. For example, expressions such as m•v², m•c², p²/m, and h•c/λ could all be valid expressions for some energy, where m is a mass, v and c are velocities, p is a momentum, h is Planck's constant, and λ a length. On the other hand, if the units of the right-hand side do not combine to [mass]•[length]²/[time]², it cannot be a valid expression for some energy. Being homogeneous does not necessarily mean the equation will be true, since it does not take into account numerical factors. For example, E = m•v² may or may not be the correct formula for the energy of a particle of mass m traveling at speed v, and one cannot know whether h•c/λ should be divided or multiplied by 2π.
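As a worked illustration of such a dimensional check, written here in LaTeX-style notation using the bracket convention above, both m•v² and h•c/λ reduce to the dimensions of an energy:

```latex
% Dimensional check: both expressions reduce to [mass][length]^2/[time]^2
[m v^2] = [\mathrm{mass}]\cdot\left(\frac{[\mathrm{length}]}{[\mathrm{time}]}\right)^{2}
        = \frac{[\mathrm{mass}]\,[\mathrm{length}]^{2}}{[\mathrm{time}]^{2}},
\qquad
\left[\frac{h c}{\lambda}\right]
  = \frac{[\mathrm{mass}]\,[\mathrm{length}]^{2}}{[\mathrm{time}]}
    \cdot\frac{[\mathrm{length}]}{[\mathrm{time}]}\cdot\frac{1}{[\mathrm{length}]}
  = \frac{[\mathrm{mass}]\,[\mathrm{length}]^{2}}{[\mathrm{time}]^{2}}.
```

Either expression passes the homogeneity test; as noted above, the test by itself cannot fix dimensionless factors such as ½ or 2π.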
Nevertheless, this is a very powerful tool in finding characteristic units of a given problem; see dimensional analysis. Theoretical physicists tend to express everything in natural units given by constants of nature, for example by taking c = ħ = k = 1; once this is done, one partly loses the possibility of the above checking.

- Rennie, Richard, 2003. "Homogeneous (physics)," The Facts On File Dictionary of Atomic and Nuclear Physics, Science Online, Facts On File, Inc.: "Describing a material or system that has the same properties in any direction; i.e. uniform without irregularities." (Accessed 16 November 2009.)
- Tanton, James, 2005. "Homogeneous," Encyclopedia of Mathematics, New York: Facts On File, Inc., Science Online: "A polynomial in several variables p(x,y,z,…) is called homogeneous [...] more generally, a function of several variables f(x,y,z,…) is homogeneous [...] Identifying homogeneous functions can be helpful in solving differential equations [and] any formula that represents the mean of a set of numbers is required to be homogeneous. In physics, the term homogeneous describes a substance or an object whose properties do not vary with position. For example, an object of uniform density is sometimes described as homogeneous." (Accessed 16 November 2009.)
- "Homogeneity," Merriam-webster.com.
- "Homogeneous," Merriam-webster.com.
- Rosen, Joe, 2004. "Alloy," Encyclopedia of Physics, New York: Facts On File, Inc., Science Online. (Accessed 16 November 2009.)
- Todd, Deborah, and Joseph A. Angelo Jr., 2005. "Olbers, Heinrich Wilhelm Matthäus," A to Z of Scientists in Space and Astronomy, New York: Facts On File, Inc., Science Online. (Accessed 16 November 2009.)
- Landau & Lifshitz, Theoretical Physics I: Mechanics, Chapter One.
Is there life other than ours existing in the vast universe? How long have these alien civilizations existed, and can they communicate with us? These questions are summed up in the popular Drake equation, proposed by astronomer Frank Drake in 1961, which outlines the variables necessary for a technologically inclined civilization out there to link up with us. Now, a new study suggests that with recent discoveries of exoplanets, and by revisiting this constantly looming question, there could be a way to simplify this equation. The results suggest that alien civilizations have likely been plentiful, although most are probably extinct, and that they could hold clues as to how humans might extend their own civilization. Lead researcher and University of Rochester professor Adam Frank said that the question of the existence of advanced alien life has always been stuck on three huge uncertainties in the Drake equation, explained on the SETI Institute website. According to Frank, while we have long been able to estimate how many stars exist, it has been uncertain how many of those stars have planets that could potentially harbor life, how frequently life might evolve and spawn intelligent beings, and how long such civilizations survive before going extinct. "Thanks to NASA's Kepler satellite and other searches, we now know that roughly one-fifth of stars have planets in 'habitable zones,' where temperatures could support life as we know it," he said, noting that one of the three large uncertainties has already been addressed. On the question of how long civilizations can survive, the researchers considered the question still very hard to answer, but they found a workaround by asking instead: "Are we the only technological species that has ever arisen?" As for the next question – the likelihood of advanced life arising on a planet – Frank and his co-author Woodruff Sullivan of the University of Washington imagined a universe in which humanity on Earth is the only technological species ever to have arisen. Applying this to the number of known stars, they arrived at a probability of one in 10 billion trillion, or just one in 60 billion for the Milky Way alone. They dub the result the "archeological form" of the equation: it multiplies two terms, N_ast and f_bt, where the former refers to the number of livable planets in a given volume of the universe, while the latter pertains to the chance of a technological species emerging on one of these planets. Frank considered one in 10 billion trillion a very small chance of humanity being alone in the universe. But he noted that with the universe's vast distances, as well as the uncertainty about how long civilizations last, it may be impossible to communicate with any living entity out there. "[O]ther intelligent, technology producing species very likely have evolved before us," Frank explained, adding that even one chance in a trillion implies that what happened on Earth has happened about 10 billion other times over cosmic history. Unless a civilization lasts much longer than ours has so far (about 10,000 years) within the 13-billion-year existence of the universe, those others have likely become extinct. Worry not, though, as these findings could hold practical benefits: helping keep humans around longer. With evolution probably having happened many times previously, humanity could explore the matter of survival using simulations in order to learn what promotes or prevents long-lived civilizations. The findings are discussed in the journal Astrobiology.
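For reference, the "archaeological form" described above can be written compactly as follows. The subscript notation mirrors the article's description; the exact symbols used in Frank and Sullivan's paper may differ.

```latex
% Number of technological species that have ever arisen in a given volume
A = N_{\mathrm{ast}} \cdot f_{\mathrm{bt}}
% N_ast : number of habitable-zone planets in that volume
% f_bt  : probability that a technological species arises on one such planet
%
% Setting A < 1 for the observable universe (humanity as the only such
% species ever) would require, per the figure quoted above,
% f_bt < 1/N_ast, i.e. roughly one in 10 billion trillion (~10^{-22}).
```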
vertebrate, any animal having a backbone or spinal column. Vertebrates can be traced back to the Silurian period. In the adults of nearly all forms the backbone consists of a series of vertebrae. All vertebrates belong to the subphylum Vertebrata of the phylum Chordata. There are five classes of vertebrates: fish, amphibians, reptiles, birds, and mammals. General characteristics of vertebrate animals include their comparatively large size, the high degree of specialization of parts they exhibit, their bilaterally symmetrical structure, and their wide distribution over the earth. In addition to an internal skeleton of bone and cartilage or of cartilage alone, vertebrates have a spinal cord, a brain enclosed in a cranium, a closed circulatory system, and a heart divided into two, three, or four chambers. Most have two pairs of appendages that are variously modified as fins, limbs, or wings in the different classes. All animals without backbones are called invertebrates; these do not form a homogeneous group as do vertebrates. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
First-order logic is symbolized reasoning in which each sentence, or statement, is broken down into a subject and a predicate. The predicate modifies or defines the properties of the subject. In first-order logic, a predicate can only refer to a single subject. First-order logic is also known as first-order predicate calculus or first-order functional calculus. A sentence in first-order logic is written in the form Px or P(x), where P is the predicate and x is the subject, represented as a variable. Complete sentences are logically combined and manipulated according to the same rules as those used in Boolean algebra.

In first-order logic, a sentence can be structured using the universal quantifier (symbolized ∀) or the existential quantifier (∃). Consider a subject that is a variable represented by x. Let A be a predicate "is an apple," F be a predicate "is a fruit," S be a predicate "is sour," and M be a predicate "is mushy." Then we can say

∀x : Ax ⇒ Fx

which translates to "For all x, if x is an apple, then x is a fruit." We can also say such things as

∃x : Fx ⇒ Ax
∃x : Ax ⇒ Sx
∃x : Ax ⇒ Mx

where the existential quantifier ∃ translates as "For some."

First-order logic can be useful in the creation of computer programs. It is also of interest to researchers in artificial intelligence (AI). There are more powerful forms of logic, but first-order logic is adequate for most everyday reasoning. It is, however, undecidable in general: as Church and Turing showed in 1936, no algorithm can determine, for an arbitrary first-order sentence, whether it is logically valid. (This is related to, but distinct from, Gödel's incompleteness theorems of 1931, which show that any sufficiently strong, consistent first-order theory of arithmetic contains statements that can be neither proved nor refuted from its axioms.) Also see Mathematical Symbols.
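Since the article notes that first-order logic can be useful in programming, here is a small, illustrative Java sketch (not part of the original article) that models the predicates A, F, S and M over a tiny finite domain and evaluates quantified statements with allMatch/anyMatch. Two caveats: quantifiers over a finite set are decidable by simple enumeration, unlike the general case discussed above, and the existential examples are written with conjunction ("some apple is sour"), the usual reading of such claims, rather than the implication form shown above. The fruit data is invented for the example.

```java
import java.util.List;
import java.util.function.Predicate;

public class FirstOrderDemo {
    record Thing(String name, boolean apple, boolean fruit, boolean sour, boolean mushy) {}

    public static void main(String[] args) {
        // A tiny domain of discourse (made up for illustration).
        List<Thing> domain = List.of(
            new Thing("granny smith",  true,  true,  true,  false),
            new Thing("red delicious", true,  true,  false, true),
            new Thing("lemon",         false, true,  true,  false));

        Predicate<Thing> A = t -> t.apple();   // "is an apple"
        Predicate<Thing> F = t -> t.fruit();   // "is a fruit"
        Predicate<Thing> S = t -> t.sour();    // "is sour"
        Predicate<Thing> M = t -> t.mushy();   // "is mushy"

        // Forall x : Ax => Fx   (an implication P => Q is evaluated as !P || Q)
        boolean allApplesAreFruit = domain.stream().allMatch(t -> !A.test(t) || F.test(t));

        // Exists x : Ax && Sx   (some apple is sour)
        boolean someAppleIsSour = domain.stream().anyMatch(t -> A.test(t) && S.test(t));

        // Exists x : Ax && Mx   (some apple is mushy)
        boolean someAppleIsMushy = domain.stream().anyMatch(t -> A.test(t) && M.test(t));

        System.out.println("Forall x: Ax => Fx : " + allApplesAreFruit);
        System.out.println("Exists x: Ax && Sx : " + someAppleIsSour);
        System.out.println("Exists x: Ax && Mx : " + someAppleIsMushy);
    }
}
```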
Biodiesel has become popular throughout the world as a like-for-like alternative to petroleum diesel. Since it is made from organic fats and oils – usually waste oils, animal fats, or restaurant grease – it is relatively clean and non-toxic. It is often touted as the 'clean' alternative to using petroleum, as it produces fewer greenhouse gases and other toxic pollutants. Since biodiesel can be produced locally and used in diesel engines with little or no modification, it has helped some countries reduce their dependence on foreign fuel imports¹.

Advantages of Biodiesel

It is Renewable – Since it is produced from organic materials – plants and animals – instead of from fossil fuels, biodiesel is much more renewable than its petroleum-based counterparts. This means that it could potentially become a longer-term replacement for fossil fuels if a viable alternative isn't found².

Its Manufacture is an Effective Recycling Method – Not only are biofuels produced from renewable, organic materials, but they can also be made from a huge range of these materials. They can be manufactured using anything from crop waste to feedlot manure, and are even an effective way of recycling used restaurant and cafe oils³.

Economic Stimulation – Biofuel production can provide hundreds or even thousands of jobs in rural or remote areas. Since biofuels are produced locally, it is the local community that benefits, as opposed to fossil fuels, which are usually produced offshore or in foreign countries by multinational corporations³.

They Burn Cleanly – One of the major characterising factors which separate biodiesels from fossil fuels is the way in which they burn. They produce less carbon emissions than traditional fuels, which makes them cleaner and more environmentally friendly. They also produce no sulfur (as long as 100% biodiesel is used), which improves air quality and actually increases the lifespan of diesel engines³.

Disadvantages of Biodiesel

Biofuel Crops Compete With Food Crops – Since biofuels are produced from organic products, often corn or soybeans, they can compete with food production. This can lead to increased food prices, and even food shortages in poorer areas of the world. To fully harness the potential of biofuels, we need to be able to grow crops for food, and use the waste products for biofuel production⁴.

Deforestation – One of the best biofuel sources in the world is palm oil. Yes, the nasty, environmentally destructive, palm oil. When the demand for biofuels began to increase at the end of the '90s, people began to realise that palm oil was a great material to use to produce biofuels. However, they didn't consider the environmental issues and drawbacks of producing palm oil in Indonesia and shipping it to Europe. Not only were forests cleared and burnt to make way for palm oil plantations, but a huge amount of fossil fuel was burnt in doing so – defeating the entire purpose of using biodiesels⁴.

They Can't Be Used in Cold Areas – This is one of the major drawbacks of biofuel use. If it gets too cold, then the fuel will begin to solidify inside the fuel tank and engine, meaning you won't be able to drive anywhere until it warms up. The temperature at which it solidifies will depend on the product that the biodiesel is made from, but can actually be relatively high. However, it can still be used in winter if you mix it with some sort of winterised diesel, which will help it remain a liquid¹.
Increased Nitrogen Oxide Emissions – While biodiesel is cleaner than fossil fuels on average, it does tend to produce slightly more nitrogen oxides (about 10% more). This adds to pollution around big cities and other centres of fuel use, and contributes to the formation of smog and acid rain¹.
A Unit on American Folklore, by Edward H. Fitzpatrick Guide Entry to 78.03.08: Through the study of folklore, students can gain understanding of one of life’s most painful processes—the gradual loss of independence on the part of the individual. This unit is unusual in that it does not focus on a single idea, but rather attempts to outline a year-long program of study in folklore (with emphasis on black folklore), where the common thread was and is tradition. Two segments of folklore are examined in detail in the narrative: material culture (what we can touch and feel) and music (what we can hear). James Thomas, a rural Mississippi musician, artist, and tale teller, a man with little education and no formal training, becomes the model for how tradition combines with talent and experience to create an individual of tremendous importance to his community. Three sources of influence (inspiration) for Thomas and, by extension, for students, serve as a link between the formal world and the folk world: memory/imagination, dreams and visions, and the media. The second half of the narrative focuses in on the background and significance of the ballad (especially Mississippi Delta Blues) in this country. An extensive bibliography includes resources available in prose, music, or film. (Recommended for students in grades 7-12; the unit is appropriate for English and Social Studies classes.)
2011 July 8 – Explanation: These tantalizing panoramas follow a remarkable giant storm encircling the northern hemisphere of ringed planet Saturn. Still active, the roiling storm clouds were captured in near-infrared images recorded by the Cassini spacecraft on February 26 and stitched into the high resolution, false-color mosaics. Seen late last year as a prominent bright spot by amateur astronomers when Saturn rose in predawn skies, the powerful storm has grown to enormous proportions. Its north-south extent is nearly 15,000 kilometers and it now stretches completely around the gas giant's northern hemisphere some 300,000 kilometers. Taken about one Saturn day (11 hours) apart, the panoramas show the head of the storm at the left and cover about 150 degrees in longitude. Also a source of radio noise from lightning, the intense storm may be related to seasonal changes as Saturn experiences northern hemisphere spring.
Presentation transcript: "Random Variables – Lesson chapter 16 part A: Expected value and standard deviation for random variables."

What is a random event? An event with a list of outcomes where the outcome of the event is random. Examples: picking a card from a deck of cards, tossing a coin, picking a student in a class and looking for January birthdays (1.4). Can you think of any others? (pg 1.2: list 3 others)

What is a random variable? Defined as a capital X that we never solve for, it is a function that maps a random event to numerical outcomes, where the total probability of all outcomes is 1 – or, a function that maps a sample space into real numbers. Examples: picking a card from a deck of cards (pg 1.3–1.4) – assign a value to the cards: $0.25 for face cards, $0.50 for aces, $0 for others; tossing a coin (1.5–1.6) – $2.00 for a head and −$1.00 for a tail; picking a student in a class and looking for January birthdays (1.7–1.8) – January birthday = 12 points versus non-January birthday = 5 points. Convert the random events you came up with to random variables (pg 1.9 – 3 questions).

Now let's look at the math behind random variables – finding the average outcome of a random variable, or what you can expect to win!

What is the average of a random variable? Take the set 9, 6, 5, 3, 3, 3, 6, 9, 2, 4. What is the average of this set of numbers? (pg 2.1) The formula for finding the average is (9+6+5+3+3+3+6+9+2+4)/10. Let's do it a different way (pg 2.2a): (2+3+3+3+4+5+6+6+9+9)/10. Use algebra (pg 2.2b): (1×2 + 3×3 + 1×4 + 1×5 + 2×6 + 2×9)/10. Break it apart (pg 2.3), rewrite it (pg 2.4), and change to percents: 10%(2) + 30%(3) + 10%(4) + 10%(5) + 20%(6) + 20%(9). This formula is called the Expected Value!!

Card game: if you select a face card you get $0.50; if you get any other card you get nothing. It costs $0.25 to play. Is it worth playing? (What is the average payout?) Create a probability model for the game:

Outcome: Face card – value x = $0.50 – P(X = x) = 12/52
Outcome: All others – value x = $0.00 – P(X = x) = 40/52
Cost to play: −$0.25

You can answer 3 different questions here: 1 – what is the average payout of the game? 2 – what is the average winning of the game? 3 – what is the average cost of the game? (pg 3.2–3.5: answer all 3 questions)

Expected Value (mean): over "the long haul," how much do you expect to pay/get? Can the expected value be modeled with BoB? Yes! Now you can find the probability of a random variable's payout.

Expected Value: credit unions often offer life insurance on their members. The general policy pays $1000 for a death and $500 for a disability. What is the expected value for the policy? The payout to a policyholder is the random variable, and the expected value is the average payout per policy. To find E(X), create a probability model (a table with all possible outcomes and their probabilities) (pg 4.1–4.2):

Outcome: Death – payout x = $1,000 – P(X = x) = 1/1000
Outcome: Disability – payout x = $500 – P(X = x) = 2/1000
Outcome: Neither – payout x = $0 – P(X = x) = 997/1000

Class Practice: on a multiple-choice test, a student is given five possible answers for each question. The student receives 1 point for a correct answer and loses ¼ point for an incorrect answer.
If the student has no idea of the correct answer for a particular question and merely guesses, what is the student’s expected gain or loss on the question? Suppose also that on one of the questions you can eliminate two of the five answers as being wrong. If you guess at one of the remaining three answers, what is your expected gain or loss on the question? (pg 5.1 - 5.5)
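As a quick check on the two worked examples above, here is a minimal Python sketch of the expected-value formula E(X) = Σ x·P(X = x). The payouts and probabilities are taken straight from the card-game and insurance tables in the transcript; the helper name expected_value is just for illustration.

```python
# Minimal sketch: computing the expected value E(X) = sum(x * P(X = x))
# for the two examples in the transcript above.

def expected_value(model):
    """model is a list of (payout, probability) pairs whose probabilities sum to 1."""
    return sum(x * p for x, p in model)

# Card game: $0.50 for a face card, $0.00 otherwise, $0.25 to play.
card_game = [(0.50, 12 / 52), (0.00, 40 / 52)]
avg_winnings = expected_value(card_game)     # about $0.115 per draw
avg_payout = avg_winnings - 0.25             # about -$0.135 after the $0.25 fee

# Insurance policy: $1000 for a death, $500 for a disability, $0 otherwise.
policy = [(1000, 1 / 1000), (500, 2 / 1000), (0, 997 / 1000)]
avg_policy_payout = expected_value(policy)   # $2.00 per policy

print(round(avg_winnings, 3), round(avg_payout, 3), avg_policy_payout)
```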
Welcome to Victorians.co.uk, your one-stop learning resource for all things Victorian! On this site you will find information spanning all topics and fun-filled facts about this important era in history. Whether you are studying at school for your GCSEs or A Levels, or simply looking to learn more about the Victorians, you will find all the information you need here on this site!

The Victorian Era

The English Victorian Age included the years 1837 – 1901. These years marked the rule of Queen Victoria, then England’s longest-reigning monarch. There were significant social and cultural problems at the beginning of this period, but many changes and improvements were made by the end of Queen Victoria’s reign. England’s Industrial Revolution took place during the 1800s. The invention of the steam-powered engine led to the growth of factories, mainly textile mills. At the beginning of Queen Victoria’s reign, England was largely a rural and agricultural society. With the emergence of factories in the larger towns, many people moved from the country to the cities in search of a better way of life. By 1851, around half of the British population lived in towns and cities. The working class and poor were exposed to unbearably crowded and unhealthy conditions. Entire families lived in one- or two-room dwellings. The towns were dirty, with garbage typically thrown into the streets and the air thick with coal smoke from the factories. Outbreaks of cholera, typhus, smallpox, dysentery, scarlet fever, measles and polio spread quickly. Factory owners built cheap, unsafe housing for their workers. The working-class residential areas were crowded, with narrow streets and houses that were built back to back. Workhouses, which sheltered homeless, impoverished families and orphaned children, were common. These families and children worked in exchange for housing and food.

Child labor was common at the beginning of the Industrial Revolution. Children as young as four and five years old worked in dangerous conditions in the factories, in coal mines and as chimney sweeps. Many children suffered from accidents involving factory machinery. In the coal mines, children commonly worked as coal bearers and trappers. A coal bearer carried heavy loads of coal. As trappers, the children sat in a dug-out hole and held a string tied to the coal mine door. When a coal wagon approached, they pulled the string to open the door.

Wealthy Victorian Age English children were taught by tutors and governesses, and older boys were sent to boarding schools. The children of working-class and poor families, however, attended Dame Schools and day or charity schools. Dame Schools were run by women in their homes. Most schools were poorly built, with bare walls and curtains as room dividers. Many of the teachers were in ill health and unable to work anywhere else. A large number of parents fought against their children attending school, wanting them to work instead.

Eventually, in response to the poor living conditions of the working-class and poor segment of the Victorian Age English population, Parliament passed laws in 1848 that allowed the city councils to improve the streets and housing of England. Proper sewers and drains were installed, and factory houses were required to meet set standards of construction. Streets were cleaned and paved, and gas lighting was added. In an attempt to rid the cities of slums, in 1875 these unsafe and unhealthy areas were torn down and more acceptable houses were built.
Although this did improve the overall living conditions of the towns, many of the slum residents could not afford the nicer housing and had nowhere to go once the slums were torn down. This caused an influx of people to the impoverished areas that still existed.
The Mayflower Compact set out the basic rules that helped the settlers live together. Its signers agreed to form a civil body politic and to make and obey just and equal laws for the general good of the colony. The Mayflower Compact also created a first step towards democracy, and it is often referred to as a prelude to the Constitution of the United States. Some of the people that signed the Mayflower Compact were John Alden, Governor William Bradford, Miles Standish and Edward Winslow.
Does inclusion include students with disabilities and teachers? The purpose of this study was to research and observe the implications of No Child Left Behind (NCLB) and its effect on middle school students. The question "Does inclusion include students with disabilities and teachers?" is very important because, as we attempt to implement the mandates of NCLB, we are faced with many challenges. These challenges affect the instruction of our students and the teacher's ability to help these students meet success in the inclusive setting.

The study came about after my school district decided to integrate resource students into an inclusive setting. This reorganized our special education department, our school, and the instructional strategies used in the classroom. I focused my study on students who were once a part of the resource team but are now in the inclusive setting. I interviewed these students, talked to their current teachers, and researched current and past grades in the core subjects. I also took a look at the teachers and their reaction to this change. Through interviews and observations, I learned of their years of experience and the different methods of instruction used in the classroom. As I collected and researched the data, I found that students' motivation and the teacher's ability to keep students actively engaged were the keys. Students who had succeeded in the resource setting had also succeeded in the inclusion setting. Their success was internal, and the teacher's ability to keep the students engaged throughout the lesson kept the students motivated. Teachers, on the other hand, were not really afraid to teach students with disabilities but were concerned that they were not adequately trained to handle the specialized needs of special education students.

Through the interviews with the students, I was able to learn more about their feelings about inclusion and special education in general. Some students were elated to be off the resource team because of the stigma that was attached to special education. Some of the students not succeeding were motivated but had home issues that inhibited their ability to meet success in the classroom.

No Child Left Behind has mandated that school districts and states that receive federal funds implement changes in order to increase the academic achievement of all students, but especially minorities and students with disabilities. In light of that mandate, I have recommended some steps that I believe need to be implemented in the schools:
- States and school districts must replace current proficiency targets with targets based on actual school achievement.
- Assessments must measure students' academic growth in relation to their previous academic progress.
- We must use multiple indicators to assess student achievement, and assessments should reflect the curriculum being taught in the classroom.
- Policy makers must ensure that assessments are aligned to state standards, are valid and reliable, and assess higher-order thinking.
- Schools and states should decrease the amount of assessment given to students.
National Education Association. Delaware Secondary Education Act. Nov./Dec. 2004, p. 15.
The Power of Language A teacher’s language is a powerful teaching tool. Our language can build children up or tear them down. It can model respectful and caring social interactions or just the opposite. Effective language encourages and supports students in their learning, rather than criticizing them for their mistakes. As child psychologist Rudolf Dreikurs writes, “Each child needs continuous encouragement just as a plant needs water.” Effective teacher language also: - Is clear, simple, and direct - Is genuine and respectful - Gives specific positive feedback rather than general praise - Focuses on the child’s action or behavior rather than generalizing about the child’s whole person - Avoids qualitative or personal judgment - Shows faith in children’s abilities and potential Improving teacher language is an ongoing challenge for many teachers, especially language related to classroom management and discipline issues. But teachers can successfully meet this challenge with time and practice. Here are some guidelines to keep in mind when working on improving your use of language in these areas: Be direct; don’t use praise to manipulate It’s easy to say we need to avoid manipulative language. It’s much harder to actually do it when we’re trying to keep a group of twenty-five students safe and orderly. To break out of the manipulation mode, be direct when you want children to do something. Instead of “I see Josh has finished cleaning up his table,” use the established signal to get everyone’s attention, then say “Time to finish cleaning up and get in line. One minute to go.” Even “I see four people ready . . . I see half of us ready . . . I see everyone ready” is better than “I see Josh is ready.” If you truly want to acknowledge Josh for being so efficient and thorough, make your comment to him at another time, directly and privately. (For example, “Josh, I noticed you cleaned up quickly and thoroughly after art today.”) Pay attention to the small things Students will be most receptive to your words when the classroom is calm and in control. So, at the first hint that the noise is beginning to rise above a productive level, ring the bell and remind students to use softer voices. Or when you notice a group about to get off task, step in and say “Remind us what you’re supposed to be doing right now.” If you wait until the noise level has become raucous or until the group has been off task for ten minutes, your words will have less impact. Keep it simple and clear Children are masterful at tuning out adults, especially those who go on and on. When we talk too much, children get confused and overwhelmed; eventually, they stop listening. The most effective teacher language is simple and clear. Say what you mean and say it concisely. If you know that students understand the rules, a single phrase or directive is all that’s needed as a reminder. Instead of “Class, remember how we talked about how hard it is to hear each other when everyone is calling out at once. It’s really important that you raise your hand if you have something to say. All of your ideas are important and I want everyone to be heard,” try “Meeting rules” or “Remember to raise your hand to speak.” Be firm when needed Too often we confuse being firm with being mean. And in an effort to avoid being mean, we shy away from being firm. As a result, students grow uncertain about limits, and we lose our authority to establish them. Students follow the rules when they feel like it; we enforce them when it’s easy to do so. 
Generally, this creates an atmosphere of confusion and anxiety. A simple guideline to keep in mind is “If you mean no, then say no.” No hedging, no beating around the bush. Say “No, you may not use the materials in that closet” instead of “I’d rather you didn’t use the materials in that closet, okay?” Also, it’s important not to ask a question when you mean to give a command. For instance, you want children to put their brushes down and look at you. Instead of “Could you please put your brushes down and look at me?” try “Put your brushes down and look at me.” The tone of voice is direct and firm, not harsh or sarcastic. It does not put children down, scold, or pass judgment. Instead, it lets children know exactly what you expect from them. Expect the best Research on the relationship between teachers’ expectations and children’s academic performance has shown that if a teacher believes a child will succeed, the child has a greater chance of doing so than if the teacher believes the child will fail. The same holds true for children’s behavior. Most children will try to live up to adult expectations. If we expect that children will be respectful and responsible, they will strive to be. If we expect that children will be disrespectful and irresponsible, then that’s what they most likely will be. Language is one key way we communicate our expectations. Through our language we let children know that we have confidence in their ability to meet high expectations. We tell them, even when things have gone awry, “I believe you can do this. Now show me.” Here are two examples of language that effectively communicates expectations: - Two children are arguing over who gets to use the hole puncher first. Rather than solving the problem for them or taking away the hole puncher, the teacher says, “I know the two of you can figure out a fair way to solve this problem. I’ll give you two minutes. Let me know what you decide.” - Students are waiting in line to go to lunch. There is a lot of poking, pushing, and cutting going on. The teacher rings the bell, then focuses on what the students can do right. She says, “You all know what to do when you’re waiting in line. When I ring the bell again I expect you to do it.” There are many situations in the course of a school day when inviting cooperation is what’s most appropriate. Teachers can do this by creating group challenges, offering choices, or just bringing a playful spirit to the task at hand. Here are some examples of inviting cooperation: - It’s time to clean up. The teacher rings the bell and says, “Here’s a challenge. Let’s see if we can do a thorough job of cleaning up the entire room in less than two minutes. If you finish your area early, you can help clean another area. The two minutes start now. - During writing period, there are many side conversations and several students are wandering about the room. The teacher rings the bell and says, “I see lots of people having a hard time concentrating. This writing work needs to get done. You can choose to focus on it for the next twenty minutes or you can do it this afternoon instead of choice time. Your decision.” Pay attention to tone, volume, and body language Consider the many different ways of saying “Come over here and sit down, Danny.” The tone could be neutral, loaded with exasperation, or sound more like a plea than a directive. It could be said in a whisper (for only Danny to hear), in a medium volume, or in an all-out scream. 
Most children are keenly aware of the subtle and not-so-subtle alterations in meaning caused by tone and volume. While we may not always be able to control the negative tone that slyly slips in or the raised volume that makes a directive sound more like a threat, we can continue to pay attention to our tone and volume and strive to match them to the message we want to send. Students also pay attention to other nonverbal cues such as that powerful language known as a teacher’s “looks.” We often need to use our eyes as silent reminders to children to stay on track—that “No, that’s not okay” look or “Come on, stay with us” look. But it’s important to be aware of how powerful these signals can be. There is often a fine line between the reminding or redirecting look and the “dirty” look. Keep your sense of humor A teacher might give literally hundreds of reminders and redirections in the course of a normal school day. If you’re beginning to feel like a broken record, it might be time to infuse some humor into the situation, as in the following example: A teacher has stepped out of the classroom for a few minutes to speak to the principal, leaving the class in the care of an instructional assistant. When the teacher returns, the classroom is noisy and chaotic. He turns off the light, signaling students to stop what they’re doing and look at him, then says, “This couldn’t possibly be the same class that I left a few minutes ago. I think we need some magic to get the real class back. I’m going to close my eyes for a minute. When I say ‘poof’ I want the classroom to magically change back to how it was when I left.” A conscious process Often teachers who want to change their language go through a conscious process. Some of the strategies that seem to help are tape recording and listening to yourself for a short period of time; having a colleague observe you for fifteen minutes and record the words and phrases you use most frequently; focusing on changing one phrase at a time; and pausing before speaking to give yourself a chance to think. Some teachers also post a list of desirable words and phrases in their classroom for easy reference. Through all of this, remember that change takes time. Be patient with yourself and celebrate the incremental improvements you make along the way. Adapted from Rules in School, a Responsive Classroom Strategies for Teachers series book - Rules in School Learn a positive approach for helping students become invested in creating and living by classroom rules. Includes information about effective teacher language and examples of teacher language at different grade levels. - Teaching Children to Care: Classroom Management for Ethical and Academic Growth, K–8 This definitive work about classroom management includes chapters on the importance of teacher language and tone.
Innovations in cancer therapy hold a remarkable potential to transform the treatment of the disease. According to the World Health Organization, cancer causes nearly one out of every six deaths globally. In the coming decades, new cancer diagnoses are expected to increase by approximately 70 percent. Fortunately, new developments in knowledge, technology, and precision medicine continue to materialize at lightning speed, paving the way for better prevention, detection, and disease management for patients. 1. Immunotherapy: Defense System Tumors have a knack for adapting to their surroundings and evading detection as abnormal objects. With immunotherapy, the body’s immune system is trained to better fight cancer. Laboratory scientists study a patient’s tumor to determine biomarkers that indicate whether a specific immunotherapy could succeed at waking the immune system’s attack response. 2. Radioactive Elements Brachytherapy involves implanting radioactive pellets at the location of a malignant tumor. This method is used to treat cancer of the cervix, prostate, brain, and other parts of the body. Traditionally, this procedure uses radioactive elements such as palladium or iodine. In one study, 13 patients received cesium-131 brachytherapy after previous radiation therapies failed to stop the spread of their brain cancer. Cesium is the 55th element on the periodic table. According to the April 2017 Journal of Neurosurgery study, the implants were able to control the tumor and limit the risk of side effects. Cesium-131 brachytherapy implants caused less damage to healthy brain tissue than other radiation treatments like Gamma Knife and CyberKnife, says study co-author Dr. Theodore Schwartz, a neurosurgery expert at NewYork-Presbyterian and Weill Cornell Medicine. Because of these promising results, the researchers are urging more trials to further analyze cesium-131’s efficacy. 3. Towards An AI Future In 2017, Microsoft announced a project to build a laboratory and assemble a team of computer scientists and researchers dedicated to solving cancer. Machine learning and artificial intelligence are at the core of its premise. The company aims to use natural language processing and machine learning to create individualized cancer treatments. Computers will help identify tumor progressions and create algorithms that scientists can use to understand how the disease develops – and how to fight it. With all the information they can gather, they might be able to find a way to program cells that fight cancer directly. From programming computers to programming biology, the team is working to unlock more applications for better treatments. 4. Blood Tests 2.0 Liquid biopsies check for mutations and other shifts in DNA shed from tumors into the blood. These give insights into the earliest signs of cancer. In their early 2018 report, Research and Markets estimated that the global liquid biopsy market would surpass the $5 billion mark by the end of 2023. North America currently has the biggest market for the liquid biopsy industry, and in 2016, the U.S. FDA approved the first liquid biopsy test for cancer testing. This revolutionary non-invasive procedure has great potential as an instrument for routine cancer testing and an alternative to tumor biopsy. 5. Intraoperative Radiation For Breast Cancer If you were to get treatment for breast cancer, would you prefer surgery and several weeks of daily hospital visits for radiation therapy, or a single dose of radiation administered during the surgery?
Intraoperative radiation treatment allows a properly selected woman to get all of her local-regional treatment with one trip to the operating room. However, the patient must meet tight criteria in terms of age, the size and type of the cancer, whether it is a single tumor or multiple tumors, and whether it can respond to hormone therapy drugs. 6. Nano Devices, Big Changes Nanoparticles are 100 to 10,000 times smaller than human cells. As such, these devices are able to explore many areas of the body and detect disease. They can also deliver medication and other types of treatment. In 2017, scientists at Rutgers University-New Brunswick designed a highly effective method to identify small tumors using light-emitting nanoscale instruments. This can help the medical community find cancer at its early stages and create more targeted treatments. This new technology involves microscopic optical devices called nanoprobes, which release short-wave infrared light as they move through the bloodstream. They are able to show clearer results than magnetic resonance imaging (MRI) and other cancer surveillance procedures, according to a 2017 report published in Nature Biomedical Engineering. Vidya Ganapathy, an assistant research professor at Rutgers’ Department of Biomedical Engineering, says that the probe follows cancer cells wherever they go, even into the smallest niches in the body. This helps doctors to treat tumors intelligently, because now they can see the address of the cancer. 7. At-Home Genetic Testing Consumers can now use home kits to learn whether they have any of three BRCA gene mutations linked to ovarian, breast, and possibly other types of cancer. The FDA has recently given the go-ahead for consumer BRCA testing, for which users provide a saliva sample to be analyzed by the company 23andMe. Since there are more than 1,000 known mutations, the results from these kits are not comprehensive and should not be used on their own to make treatment decisions. Hopefully, direct-to-consumer genetic testing will be able to start a discussion and help users explore their options for further screening. Consumers should always consult with their healthcare provider about whether to pursue more thorough testing. 8. Precision Cancer Medicine In contrast to one-size-fits-all treatment, precision medicine tackles personalization according to the individual needs of the patient. It takes the patient’s medical history, genetic makeup, test results, environment, and lifestyle into account. The ability to pinpoint genetic mutations in a tumor by genomic sequencing allows oncologists to use targeted therapies appropriately. Progress against cancer is a terrific trend. Even though cancer diagnoses are expected to rise, cancer mortality rates are dropping dramatically thanks to emerging technologies. Better prevention, earlier detection, and an explosion of improved treatments and genetic knowledge all play a part. At New Hope Unlimited, we take part in providing individualized treatment with the least side effects and empower people who are fighting cancer. Call us at (480) 473-9808 to learn about your options.
17.2.7 2D Kernel Density

The 2D Kernel Density plot is a smoothed color density representation of a scatter plot, based on kernel density estimation, a nonparametric technique for estimating probability density functions. The goal of density estimation is to take a finite sample of data and infer the underlying probability density function everywhere, including where no data points are present. In kernel density estimation, the contribution of each data point is smoothed out from a single point into its surrounding region. The resulting smoothed density plot shows the average trend of the scatter plot.

Creating a 2D Kernel Density Plot
To create a 2D Kernel Density plot:
- Highlight one Y column.
- Open the 2D Kernel Density plot dialog by clicking Plot > Contour: 2D Kernel Density.
- In the plot_kde2 dialog box, specify the Method, the Number of Grid Points in X/Y, the Number of Points to Display, and the Plot Type.
- Click OK to create the 2D Kernel Density plot.

The plot_kde2 Dialog
Specify the input data.
- Bandwidth Method - Specify the bandwidth calculation method of the 2D Kernel Density plot.
  - Bivariate Kernel Density Estimator
  - Rules of Thumb
- Density Method - Specify a method to calculate the kernel density for the defined XY grids.
  - Exact Estimation - Choose this option to calculate density values according to the ks2density equation. For a large dataset, exact estimation may require extensive calculation time.
  - Binned Approximate Estimation - Choose this option to calculate an approximation of the density values. This option is recommended for a large sample.
- Number of Points to Display - Specify the first N lowest-density points to be superimposed on the density image.
- Interpolate Density Points - Specify the calculation method used to decide which points are superimposed on the density image (see details in the Algorithm section below). If the number of source data points is large (i.e. > 50,000), we strongly recommend selecting this option to improve speed.
- Number of Grid Points in X/Y - Specify the number of equally spaced grid points for the density estimation.
- Number of Points to Display - Specify the first N lowest-density points to be superimposed on the density image when the All checkbox is unchecked. When the All checkbox is selected (the default), all points are displayed.
- Grid Range - As an interim step, a matrix of gridded values is generated from the X/Y data and the kernel density plot is created from the matrix values. By default, the Grid Range registers the minimum and maximum X and Y values in that matrix. Clear the Auto box to enter a value manually.
  - X Minimum
  - X Maximum
  - Y Minimum
  - Y Maximum
- Plot Type - Specify the plot type.
  - Use the density matrix to plot a contour
  - Use the density matrix to make an image plot
- Density Estimation Data - This determines where the calculated data for the graph is stored. A second output, available only when Number of Points to Display is not 0, determines where the data of the displayed scatter plot is stored.

Algorithm
Kernel density estimation is a nonparametric technique for estimating the density of scatter points. The goal of density estimation is to estimate the underlying probability density function everywhere, including where no data are observed, from the existing scatter points. A kernel function is created with each datum at its center – this ensures that the kernel is symmetric about the datum. Kernel density estimation smooths the contributions of the data points to give an overall picture of their density.
Density Calculation Method
Specify a method to calculate the kernel density for the defined XY grids.

Exact Estimation
Density values are calculated according to the equation

\hat f(x, y) = \frac{1}{2\pi n w_x w_y} \sum_{i=1}^{n} \exp\!\left( -\frac{(x - x_i)^2}{2 w_x^2} - \frac{(y - y_i)^2}{2 w_y^2} \right)

where n is the number of elements in vector vX or vY, x_i is the ith element in vector vX, y_i is the ith element in vector vY, and w_x and w_y are the optimal bandwidth values.

Binned Approximate Estimation
Speeds up the density calculation by approximating the exact 2D kernel density estimate. First, 2D binning is performed on the (x, y) points to obtain a matrix of bin counts. Then a 2D Fast Fourier Transform is used to perform the discrete convolutions that give the density value at each grid point. The 4th root of the density values is taken to map the density scale to the color scale.

Bandwidth Calculation Method

Bivariate Kernel Density Estimator
Calculates the bandwidths based on a linear diffusion process.

Rule of Thumb
The estimates of w_x and w_y can be calculated simply by

w_x = \hat\sigma_x\, n^{-1/6}, \qquad w_y = \hat\sigma_y\, n^{-1/6}

where n is the size of vector vX or vY, \hat\sigma_x is the sample standard deviation of dataset vX, and \hat\sigma_y that of dataset vY.

Interpolate Density Points
Specify the calculation method used to decide which points are superimposed on the density image. If this option is selected, the kernel density of the points is calculated by interpolation on the density matrix for the defined XY grids; if the number of source data points is very large, selecting this option can greatly improve speed. If the option is not selected, the density values are calculated by the Exact Estimation method.
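For readers who want to experiment outside Origin, the following is a minimal Python/NumPy sketch of the exact-estimation idea described above: a Gaussian product kernel evaluated on an XY grid with the rule-of-thumb bandwidths. It illustrates the method only and is not Origin's implementation; the function name kde2d and the sample data are made up.

```python
import numpy as np

def kde2d(vx, vy, grid_points=151):
    """Gaussian product-kernel density estimate on a regular XY grid."""
    vx, vy = np.asarray(vx, float), np.asarray(vy, float)
    n = vx.size
    # Rule-of-thumb bandwidths for two dimensions: w = sigma * n^(-1/6).
    wx = vx.std(ddof=1) * n ** (-1 / 6)
    wy = vy.std(ddof=1) * n ** (-1 / 6)
    gx = np.linspace(vx.min(), vx.max(), grid_points)
    gy = np.linspace(vy.min(), vy.max(), grid_points)
    X, Y = np.meshgrid(gx, gy)
    # Sum the kernel contribution of every data point at every grid node.
    Z = np.zeros_like(X)
    for xi, yi in zip(vx, vy):
        Z += np.exp(-((X - xi) ** 2 / (2 * wx ** 2) + (Y - yi) ** 2 / (2 * wy ** 2)))
    Z /= n * 2 * np.pi * wx * wy
    return gx, gy, Z

# Example: density of 2,000 correlated points.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 0.6 * x + rng.normal(scale=0.8, size=2000)
gx, gy, density = kde2d(x, y)
print(density.max())
```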
How do farmers prevent soil erosion? Planting vegetation as ground cover: farmers plant trees and grass to cover and bind the soil. Plants prevent wind and water erosion by covering the soil and binding it with their roots. The best choices of plants to prevent soil erosion are herbs, wild flowers and small trees.

What are three ways to prevent soil erosion? Methods to prevent soil erosion: plant trees and other plants; use mulch matting to reduce erosion on slopes; put down a series of fibre logs to prevent water or soil from washing away; build a wall at the base of the slope to help keep the soil from eroding.

What are the methods of controlling erosion? How to control soil erosion:
- COVER methods.
- Cover crops and green manures.
- Green manures – also usually legumes – are planted specially to improve soil fertility by returning fresh leafy material to the soil.
- Mixed cropping and inter-cropping.
- Early planting.
- Crop residues.

What are the five causes of erosion? 5 Common Causes of Soil Erosion:
- Water. Water is very effective at doing work.
- Wind. Although wind usually erodes soil more slowly than water, Florida does have an active hurricane season from June to November.
- Recreational Activities.

What are 5 ways to prevent erosion? 5 Steps for Erosion Control on Steep Slopes and Embankments:
- Plant Grass and Shrubs. Grass and shrubs are very effective at stopping soil erosion.
- Use Erosion Control Blankets to Add Vegetation to Slopes.
- Build Terraces.
- Create Diversions to Help Drainage.

How can we prevent soil erosion (Class 9)? Preventive methods of soil erosion: (i) Afforestation – planting more trees reduces soil erosion. (ii) Contour Ploughing – ploughing land in furrows across the natural slope of the land helps trap water and prevent the washing away of topsoil along with it.

How can soil erosion be prevented (5 points)? Follow the ways below to prevent soil erosion in your garden:
- Erosion Control Matting: the first thing is to investigate where soil erosion might occur.
- Erosion Control Ground Cover.

What is an example of erosion control? Examples of erosion control methods include the following: cellular confinement systems, crop rotation, conservation tillage.

What are the 4 types of erosion? The four main types of river erosion are abrasion, attrition, hydraulic action and solution.

What are the two methods of soil erosion? Different soil erosion causes:
- 1) Sheet erosion by water;
- 2) Wind erosion;
- 3) Rill erosion – happens with heavy rains and usually creates small rills over hillsides;
- 4) Gully erosion – when water runoff removes soil along drainage lines;
- 5) Ephemeral erosion that occurs in natural depressions.

What are 3 things that cause erosion? The three main forces that cause erosion are water, wind, and ice. Water is the main cause of erosion on Earth.

What are the 6 major causes of erosion?
Soil Erosion: 6 Main Causes of Soil Erosion:
- Soil Texture
- Ground Slope
- Intensity and amount of rainfall
- Mismanaged utilization of soil resources
- Distribution of rainfall and landscape

What are some examples of erosion? Some of the most famous examples of erosion include the Grand Canyon, which was worn away over the course of tens of millions of years by the Colorado River with the help of winds whipping through the formed canyon; the Rocky Mountains in Colorado have also been the subject of intense geological study, with some
SimSpace is a SimPhysics game-based learning resource where players take on the role of space scientists, scouting the skies for Near Earth Objects (NEOs) that may pose a risk to life on Earth. The premise is that planet Earth is overdue a major impact from an asteroid or comet, and players are leading the effort to detect any NEOs heading our way. SimSpace can be played in small groups, by individuals or used in a whole class setting and is accompanied by comprehensive teacher support resources, handouts and suggested lesson ideas. Teachers' notes are provided, which include advice on using the game in lessons. This resource is downloaded as a Zip file. Please note: From 2021, Adobe has discontinued support for Flash player and as a result some interactive files may no longer be playable. As an alternative method to accessing these files a group of volunteers passionate about the preservation of internet history have created project Ruffle (https://ruffle.rs/). Ruffle is an entirely open source project that you can download and run many interactive Flash resources. For further information regarding STEM Learning’s policy for website content, please visit our terms and conditions page.
Rationale and definition: The prevalence of harmful traditional practices, particularly the practice of female genital mutilation (FGM), is measured as the percentage of women aged 15-49 who respond positively to surveys asking if they themselves have been cut. FGM refers to all procedures involving partial or total removal of the external female genitalia or other injury to the female genital organs for non-medical reasons. FGM has no known health benefits, and is on the contrary painful and traumatic, with immediate and long-term health consequences. The practice reflects deep-rooted gender inequality and is an extreme form of discrimination against women.1 By age, ethnicity, region, and wealth quintile. WHO further distinguishes by four categories of FGM.2 Comments and limitations: Many countries’ household surveys do not include the necessary questions to estimate FGM/C prevalence, and/or do not report on the prevalence among girls aged 15-19. Preliminary assessment of current data availability by Friends of the Chair: Primary data source: Potential lead agency or agencies: World Health Organization (2008). Eliminating female genital mutilation: An interagency statement – OHCHR, UNAIDS, UNDP, UNECA, UNESCO, UNFPA, UNHCR, UNICEF, UNIFEM, WHO. See WHO website on Female Genital Mutilation (FGM).
Mosquitoes belong to the same group as the true flies, Diptera. As such, they have a single pair of wings. They typically have long, thin legs and a head featuring a prominent proboscis. Mosquito bodies and wings most often are covered in tiny scales. Adults may range in size from 3 to 9 mm. Mosquitoes are best known for the habits of the adult females, which often feed on blood to help generate their eggs. The lesser-known side is that adult mosquitoes, both males and females, also feed on nectar from flowers. Their immature stages usually are found in standing, preferably stagnant, water. The larvae feed on a variety of materials, depending on species. Most consume organic flotsam and tiny aquatic organisms. However, some species are predatory and will consume other mosquitoes. Adult mosquitoes are most active from dusk until dawn but can become active with sufficient cloud cover or in dark, shady areas. They avoid being active in the sunshine, since they may desiccate and die. Mosquito treatment is usually an integrated effort involving source reduction plus the use of chemical control products when needed. Since mosquitoes develop in water, source reduction targets and eliminates water sources favorable for mosquito breeding. While source reduction is the more effective long-term approach to mosquito treatment, the mosquito treatment plan may require using chemical products to supplement source reduction. The mosquito treatment plan begins with your pest management professional conducting a thorough property inspection and identifying the kind of mosquitoes causing problems. A pest control professional should be contacted for assistance.
During their school years, students face a lot of written assignments, so they have to be ready to complete them quite often and should not be surprised when they are asked to write a research paper. A research paper is a genre of academic writing that summarizes the research that was conducted and its results. Research papers are composed by students at different stages of study, by graduate students and by teachers, and they are later used for the appraisal of scientific work. The process of composing this type of paper is supervised by the teacher, who also usually notifies the student of the possibility of its publication. In contrast to entertaining articles, the research paper has the following features:
- It is written within the framework of the scientific style.
- It does not contain slang, colloquialisms, dialect, diminutives or other vocabulary inappropriate to the scientific style.
- All the arguments have a scientific basis.
- Facts are selected carefully.
- It uses theoretical information from authoritative sources.
- It contains the set phrases of scientific style and has a clear structure.
To write a good paper, it is necessary to comply with the standards for constructing the general plan of a scientific publication and the requirements of research paper style. The main features of the scientific style are consistency, unambiguity and objectivity. There are 7 main research paper types: persuasive or argumentative, analytical, definition, compare and contrast, cause and effect, report and interpretive essay. Each of them has its own objectives. You will find a detailed description of each type below. There are also several kinds of writing styles that can be used for composing research papers:
- Conversational style is the easiest and most relaxed one. As a rule, it contains jokes that the author adds to make the writing more accessible and understandable for different categories of readers. The vocabulary of conversational style is rich in slang (including professional slang). Such papers are also characterized by expressiveness and a free structure.
- Scientific style is the style of specialized resources and author monographs. Its main idea is "nothing personal": all the material concerns only the conducted research – highly specialized terminology and a structured presentation of information.
- Official is the only style in which clerical expressions look natural. This type of writing contains no emotionally colored words and expressions; it is characterized by a dry statement of facts. This style can be combined with other styles when designing a research paper.

Main Types of Research Papers
There are seven major types of papers. Each of them has different characteristics and aims.

Analytical research paper
This is a published research paper that analyzes the factors that allow a solution to a scientific problem to be found. In this paper, the author presents the initial data and analyzes the interdependence and consistency of the facts. In a word, you pose a question and work toward finding the answer to it. A distinctive feature of such a publication is the use of research methods generally accepted in science and a careful study of the topic. It is essential to be neutral when writing the text and to keep away from expressing your personal attitude.

Argumentative research paper
In this type of text a student should concentrate on the arguments presented in papers that are linked to the subject of the research.
Students are advised to use reliable scientific sources to support their opinion. Two points of view – positive and negative – on the subject of the research should be presented here. Generally, this research paper may seem rather simple; however, much depends on the task set by the teacher. A student is usually provided with instructions or a list of steps to follow when writing this paper.

Definition research paper
In this text a student does not need to present their point of view. Instead, they present bare facts and proofs from reliable sources. This kind of text is one of the most informative. Basically, a definition paper only shows the opinions and findings of other researchers, because it contains scientific information without any presentation of personal attitude.

Compare and contrast research paper
Normally, in this type of paper a student compares and analyzes two objects, phenomena, facts, people, styles, etc. Students are usually assigned this task in subjects such as literature, social science and philosophy.

Cause and effect research paper
This written assignment is something that first-year students begin with. Its aim is to help students learn to write different kinds of papers correctly. In this text the author usually describes the results of an action or phenomenon, and this progression should be made clear to the audience. To make it clearer, a student should ask and answer two questions: What and Why. Using the cause and effect method, it is possible not only to define the expected results but also to predict several different results that could occur under certain conditions.

Report
This is probably the easiest of all types of research papers. A report describes the field of study of a certain situation. This text would include a summary of a phenomenon, situation or event, a definition of the main problems, and recommendations for solving them.

Interpretive essay
Doing this assignment, a student has to show the knowledge they obtained from a case study of a particular situation. For designing an interpretive essay one should apply theoretical knowledge and use proofs from reliable scientific sources.

Classification by the target of the paper
Each paper is written for a specific purpose; from this point of view they are divided into the following types:
- general research papers are written to study trends and patterns in the development of science or a certain field of knowledge;
- practically oriented papers are aimed at analyzing situations in certain areas (scientific, social or cultural); such a situation is assessed, determining its causes, development prospects and ways to solve the problem;
- polemical papers are written for the purpose of discussing a specific problem stated by another author (a feature of this type is the presence of an opponent).
After writing a research paper, students may want to know where such papers are published and who reads them. The readership is quite wide: teachers, students, graduate students, people who deal with science and those whose profession requires a scientific grounding. Most often research papers are published in:
- scientific, methodological and theoretical journals;
- scientific journals issued by universities;
- popular science newspapers;
- collections of conference materials (student, intercollegiate, university, etc.).
We hope the information provided above will help you define the type of task you are given, see its characteristic features and develop a strategy for writing.
Inside Chernobyl’s Mega Tomb: Documentary which follows the construction of a trailblazing 36,000-tonne steel structure to entomb the ruins of the nuclear power plant destroyed in the 1986 Chernobyl disaster. It films close up with the team of international engineers as they race to build the new structure before Chernobyl’s original concrete sarcophagus – the hastily built structure that covers the reactor – collapses. Built to last just 30 years, the temporary sarcophagus is now crumbling, putting the world at risk of another release of radioactive dust. Radiation levels make it impossible for workers to build the new shelter directly over the old reactor, so engineers are erecting the new megastructure – taller than the tower of Big Ben and three times heavier than the Eiffel Tower – to one side and will then face the challenge of sliding the largest object ever moved on land into place over the old reactor. Inside Chernobyl’s Mega Tomb The Chernobyl disaster was caused by a nuclear accident that occurred on Saturday 26 April 1986, at the No. 4 reactor in the Chernobyl Nuclear Power Plant, near the city of Pripyat in the north of the Ukrainian SSR. It is considered the worst nuclear disaster in history and was caused by one of only two nuclear energy accidents rated at seven—the maximum severity—on the International Nuclear Event Scale, the other being the 2011 Fukushima Daiichi nuclear disaster in Japan. The accident started during a safety test on an RBMK-type nuclear reactor, which was commonly used throughout the Soviet Union. The test was a simulation of an electrical power outage to aid the development of a safety procedure for maintaining reactor cooling water circulation until the back-up electrical generators could provide power. This gap was about one minute and had been identified as a potential safety problem that could cause the nuclear reactor core to overheat. It was hoped to prove that the residual rotational energy in a turbine generator could provide enough power to cover the gap. Three such tests had been conducted since 1982, but they had failed to provide a solution. On this fourth attempt, an unexpected 10-hour delay meant that an unprepared operating shift was on duty. During the planned decrease of reactor power in preparation for the electrical test, the power unexpectedly dropped to a near-zero level. The operators were able to only partially restore the specified test power, which put the reactor in a potentially unstable condition. This risk was not made evident in the operating instructions, so the operators proceeded with the electrical test. Upon test completion, the operators triggered a reactor shutdown, but a combination of unstable conditions and reactor design flaws caused an uncontrolled nuclear chain reaction instead. Chernobyl recovery projects The Chernobyl Trust Fund was created in 1991 by the United Nations to help victims of the Chernobyl accident. It is administered by the United Nations Office for the Coordination of Humanitarian Affairs, which also manages strategy formulation, resources mobilization, and advocacy efforts. Beginning 2002, under the United Nations Development Programme, the fund shifted its focus from emergency assistance to long-term development. The Chernobyl Shelter Fund was established in 1997 at the Denver 23rd G8 summit to finance the Shelter Implementation Plan (SIP). 
The plan calls for transforming the site into an ecologically safe condition by means of stabilization of the sarcophagus followed by construction of a New Safe Confinement (NSC). While the original cost estimate for the SIP was US$768 million, the 2006 estimate was $1.2 billion. The SIP is being managed by a consortium of Bechtel, Battelle, and Électricité de France, and conceptual design for the NSC consists of a movable arch, constructed away from the shelter to avoid high radiation, to be slid over the sarcophagus. The NSC was moved into position in November 2016 and is expected to be completed in late-2017. In 2003, the United Nations Development Programme launched the Chernobyl Recovery and Development Programme (CRDP) for the recovery of the affected areas. The programme was initiated in February 2002 based on the recommendations in the report on Human Consequences of the Chernobyl Nuclear Accident. The main goal of the CRDP’s activities is supporting the Government of Ukraine in mitigating long-term social, economic, and ecological consequences of the Chernobyl catastrophe. CRDP works in the four most Chernobyl-affected areas in Ukraine: Kyivska, Zhytomyrska, Chernihivska and Rivnenska.
30/07/2018 - Inflation may be present in some parts of an economy but not others. Contributions to annual inflation show how much different product groups contribute to overall inflation in a given year. The measure is a useful tool to understand where inflation is occurring in different countries, analyse trends in inflation over time, and identify volatile and stable components of inflation. It may also help explain why consumers’ perceptions of inflation sometimes differ from official figures. This Statistical Insight uses figures for Germany, Japan and the United States (US) to illustrate the usefulness of data on contributions to inflation. In addition to aggregate national Consumer Price Indices (CPIs), the OECD provides data on the contributions to annual inflation of 12 standard product groups and special aggregates. Figure 1 shows that in Germany, Japan and the US, aggregate inflation hides wide variations in price movements across product groups. In Germany, while overall prices increased by 2.2% in the year to May 2018, food and housing prices increased by 3.4% and 1.6% respectively. In the US, energy prices increased by 11.7%, and gasoline prices by 21.6%, while overall prices only increased by 2.8%. The contribution of a given product group to overall inflation depends both on the price change of the relevant product group and on its share in consumers’ expenditures. The shares vary between countries. For example, households spend around 20% of their incomes on housing in Germany and Japan, but over 30% in the US. The high share of housing costs in US households’ budgets meant that price changes in those costs contributed most to overall US inflation in the year to May 2018, even though energy prices rose much faster than housing prices. In fact, energy prices shot up everywhere, but only in Japan was energy the largest contributor to overall inflation. It may also be the case that consumers are more sensitive to movements in the prices of items they purchase frequently. For example, they may feel that inflation is high if the prices of food items are rising quickly, even though food products and non-alcoholic beverages represent less than 10% of households’ expenditures in the US, around 10% in Germany, and less than 20% in Japan. Because food and energy make volatile contributions to inflation, economists often focus on a consumption basket that excludes them in order to better understand and forecast long-term developments in inflation. The resulting numbers are called underlying, or core, inflation. Figure 2 shows that energy contributed to the bulk of inflation fluctuations between 2012 and 2018. Changes in energy prices are dominated by movements in world crude oil prices, but exchange rate fluctuations also play a role because oil prices are usually fixed in US dollars. In 2015, for example, oil prices fell but at the same time the euro and the yen depreciated against the US dollar, so that oil prices in those currencies did not fall as much as they did in dollars. This meant that falling oil prices did not reduce inflation as much in Germany and Japan as in the US. Even after excluding volatile food and energy prices, core inflation rates vary significantly across countries. Figure 2 shows that core inflation in Japan has long been lower than in Germany and the US, except for a blip in 2014-15 caused by a hike in value-added tax. Since 2016, core inflation in the US has also been consistently higher than in Germany. 
The major contributor to these differences is housing prices, which have risen faster in the US than in Germany, and faster in Germany than in Japan. Note that housing prices here correspond to housing rentals (including imputed rentals for owner-occupied dwellings) and maintenance costs. This ignores the purchase prices of houses and apartments, which are considered investments rather than consumption and are covered by separate price indices.
Contributions to annual inflation represent the contributions, in percentage points, of different product groups to overall inflation. The contribution of each product group depends both on the price change in the relevant product group and on its weight in households' expenditures. The OECD calculates contributions to inflation based on national data for all countries except Austria, Chile, Finland, Mexico, the Netherlands, Poland, Sweden, and the United Kingdom, whose national statistical offices provide the data directly. For further information, please see the OECD CPI FAQs.
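To make the arithmetic concrete, the sketch below approximates each product group's contribution to annual inflation as its expenditure weight multiplied by its year-on-year price change, and derives a simple core measure by re-aggregating the basket without food and energy. This is an illustrative simplification rather than the OECD's exact methodology, and the group names, weights, and price changes are made-up numbers, not OECD figures.

```python
# Illustrative sketch of contributions to annual inflation (not OECD methodology or data).
# contribution_of_group ~= expenditure_weight * year-on-year price change of the group,
# so the contributions sum to the (weighted-average) headline rate.

# Hypothetical expenditure weights (shares of the consumption basket, summing to 1)
weights = {"food": 0.10, "energy": 0.07, "housing": 0.32, "other": 0.51}

# Hypothetical year-on-year price changes by group, in percent
price_changes = {"food": 1.2, "energy": 11.7, "housing": 3.4, "other": 1.6}


def contributions(weights, price_changes):
    """Contribution of each group to headline inflation, in percentage points."""
    return {group: weights[group] * price_changes[group] for group in weights}


def headline(weights, price_changes):
    """Headline inflation as the sum of all group contributions."""
    return sum(contributions(weights, price_changes).values())


def core(weights, price_changes, excluded=("food", "energy")):
    """Core inflation: re-aggregate the basket excluding volatile groups, re-scaling weights."""
    kept = {g: w for g, w in weights.items() if g not in excluded}
    total_weight = sum(kept.values())
    return sum(w / total_weight * price_changes[g] for g, w in kept.items())


if __name__ == "__main__":
    for group, points in contributions(weights, price_changes).items():
        print(f"{group:>8}: {points:.2f} percentage points")
    print(f"headline inflation: {headline(weights, price_changes):.2f}%")
    print(f"core inflation (ex food and energy): {core(weights, price_changes):.2f}%")
```

Summing the group contributions recovers the headline rate, which is what makes the decomposition useful for seeing which product groups drive overall inflation in a given country and year.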
Beaker with Raptors - western Iran - 14th - 13th century B.C. - H-12.1 D-7
The shape and proportions and even the reinforced rim of this beaker resemble those of the vessel discussed in catalogue number 21. This beaker is also made in two pieces, the cylindrical part and the bottom. The outside of the beaker is decorated with two rows of raptors, or birds of prey: those in the upper register face toward the viewer's right, and those in the lower row, to our left. The forms are rendered in low relief, raised only slightly from the plain ground. Each bird has a hooked beak as well as a large, round eye set amid a scale-like feather pattern that covers its head, body, and upper legs; the square tail and the tapering wings are carefully rendered with long, slender feathers. Roughly crosshatched areas indicate the featherless lower legs. The species represented remains unidentified, but the short, curving beak and prominent round eye are more parrot-like than predatory. The surface of the bottom of the beaker is filled with a compass-drawn rosette and is decorated with an overall pattern of circular punch marks. These strutting birds find few parallels in Mesopotamian art of the later second millennium B.C., in which raptors generally are depicted in heraldically symmetrical compositions with domestic animals. Striding birds do occur, however, on a crushed gold bowl excavated from Tomb 32 at Marlik, in the southwestern region of the Caspian watershed.1 Our raptors share their stance and substantial proportions with the Marlik birds, which are more advanced technically, their heads worked in the round and turned outward toward the viewer. The Shumei beaker is simpler and less sophisticated, and thus probably predates the Marlik bowl. Unfortunately, the fifty-three Marlik tombs, which have been dated from the fourteenth to the late eighth century B.C.,2 have as yet no secure sequence. Thus, the date of this beaker with birds remains somewhat uncertain.
1. See Negahban 1983, pp. 26-27, 40-41; see Metropolitan Museum 1996, p. 31, fig. 1.
2. See Muscarella 1984, pp. 416-17.
Help kids plan out their story before they start their next creative writing activity.
How to use the worksheet
Follow the Super Easy Storytelling formula to choose a main character (a WHO) and a plot (WHAT + WHY NOT). Kids can create their own ideas, or use these creative writing prompts for kids. Then add describing words, like adjectives and better verbs, to add dimension to the main character. Use the better describing words lists for ideas.
Download printable worksheet PDF
This PDF can be printed, or students can type into the fillable fields in the PDF for remote learning.
- What happens to a forest after a fire?
- What are positive effects of wildfires?
- Why do we need forest fires?
- Are wildfires getting worse?
- How do forest fires affect humans?
- Why don’t trees burn in wildfires?
- How do forest fires help the ecosystem?
- How can we prevent forest fires?
- Why are forest fires bad for the environment?
- What are negative effects of wildfires?
- What is the biggest cause of wildfires?
- Do forest fires help the environment?
- Is there a difference between a wildfire and a forest fire?
- What are the advantages and disadvantages of forest fires?
- What are the pros and cons of forest fires?
- Why are forest fires bad?
- How does a forest fire start?
- Are forest fires natural disasters?

What happens to a forest after a fire?
The forest floor is exposed to more sunlight, allowing seedlings released by the fire to sprout and grow. After fires, the charred remnants of burned trees provide habitats for insects and small wildlife, like the black-backed woodpecker and the threatened spotted owl, which make their homes in dry, hollow bark.

What are positive effects of wildfires?
Fire removes low-growing underbrush, cleans the forest floor of debris, opens it up to sunlight, and nourishes the soil. Reducing this competition for nutrients allows established trees to grow stronger and healthier. History teaches us that hundreds of years ago forests had fewer, yet larger and healthier, trees.

Why do we need forest fires?
It is vital for the forests that they burn periodically, because the flames burn leaf litter and understory plants, preventing a build-up of forest-floor vegetation. … “The plants and animals that inhabit this ecosystem are generally not well-adapted to this change in fire regime,” says Ingalsbee.

Are wildfires getting worse?
Climate change is making wildfires even worse. Not only is the average wildfire season three and a half months longer than it was a few decades back, but the number of annual large fires in the West has tripled, burning twice as many acres.

How do forest fires affect humans?
Wildfires threaten lives directly, and wildfire smoke can affect us all. They spread air pollution not only nearby but thousands of miles away, causing breathing difficulties in even healthy individuals, not to mention children, older adults, and those with heart disease, diabetes, asthma, COPD, and other lung diseases.

Why don’t trees burn in wildfires?
Trees in fire-prone areas develop thicker bark, in part, because thick bark does not catch fire or burn easily. … The species also drops lower branches as the trees grow older, which helps prevent fire from climbing up and burning the green needles higher up the tree.

How do forest fires help the ecosystem?
Forest fires are an efficient, natural way for a forest to rid itself of dead or dying plant matter. And the decomposed organic matter enriches the soil with minerals that help new plants sprout up quickly.

How can we prevent forest fires?
Forest fire prevention tips:
- Obey local laws regarding open fires, including campfires;
- Keep all flammable objects away from fire;
- Have firefighting tools nearby and handy;
- Carefully dispose of hot charcoal;
- Drown all fires;
- Carefully extinguish smoking materials.

Why are forest fires bad for the environment?
Fire plays a key role in shaping ecosystems by serving as an agent of renewal and change. But fire can be deadly, destroying homes, wildlife habitat, and timber, and polluting the air with emissions harmful to human health. Fire also releases carbon dioxide, a key greenhouse gas, into the atmosphere.

What are negative effects of wildfires?
Increases in uncharacteristically large wildfires can exacerbate impacts on both ecosystems and human communities. Expanded areas of high-severity fire can affect tree regeneration, soil erosion, and water quality.

What is the biggest cause of wildfires?
Naturally occurring wildfires are most frequently caused by lightning. There are also volcanic, meteor, and coal seam fires, depending on the circumstances.

Do forest fires help the environment?
Fire is a natural phenomenon, and nature has evolved with its presence. Many ecosystems benefit from periodic fires, because they clear out dead organic material, and some plant and animal populations require the benefits fire brings to survive and reproduce.

Is there a difference between a wildfire and a forest fire?
In the world of the professional firefighter, the term “wildfire” has replaced the term “forest fire.” “Wildfire” is more descriptive of the wild, uncontrolled fires which occur in fields, grass, and brush as well as in the forest itself. … Once started, grass and brush fires can spread to adjacent forested land.

What are the advantages and disadvantages of forest fires?
The disadvantages of wildfires are that they can destroy homes, lives, and millions of acres of forest. The aftermath of a fire can sometimes be worse than the fire itself. Fires burn trees and plants that prevented erosion.

What are the pros and cons of forest fires?
Here are the pros (and cons) of forest fires:
- Forest fires help to kill disease. …
- They provide nutrients for new generations of growth. …
- They refresh the habitat zones. …
- Low-intensity fires don’t usually harm trees. …
- A forest fire sets up the potential for soil erosion to occur. …
- Forest fires always bring death in some form.

Why are forest fires bad?
Slash-and-burn fires are set every day to destroy large sections of forests. Of course, these fires don’t just remove trees; they kill and displace wildlife, alter water cycles and soil fertility, and endanger the lives and livelihoods of local communities. They also can rage out of control.

How does a forest fire start?
A fire needs three things: fuel, oxygen, and heat. … Sometimes, fires occur naturally, ignited by heat from the sun or a lightning strike. However, most wildfires are caused by human carelessness such as arson, campfires, discarding lit cigarettes, not burning debris properly, or playing with matches or fireworks.

Are forest fires natural disasters?
Though they are classified by the Environmental Protection Agency as natural disasters, only 10 to 15 percent of wildfires occur on their own in nature. The other 85 to 90 percent result from human causes, including unattended camp and debris fires, discarded cigarettes, and arson.
My favorite spelling curriculum suggests moving to root word instruction after mastering phonics and the spelling rules. But how do you teach root words? I was sent WordBuild: Foundations, Level 1 by Dynamic Literacy for a review with the Schoolhouse Crew this month, and I am really excited to be using it for the “next step” in our English instruction.
Why Teach Root Words?
Root words unlock the English language. Phonics provides the building blocks for putting words together, and root words give kids the tools they need to understand what those words mean. It makes sense that you would follow phonics and spelling with word roots to really give kids a full picture of how the English language works. With root words, you can teach your kids prefixes, suffixes, and roots, and just by learning these parts, they’ll be able to define dozens more words. While teaching your kids Latin probably would be an awesome way to go about it, many of us mere mortals just don’t have the time and know-how to pull that off . . . which is where root word programs come in.
WordBuild Foundations Level 1 is intended to be taught in 15 minutes a day. Foundations Level 1 covers compound words, prefixes, and suffixes. The lessons all follow a similar pattern. First, you discuss the new prefix or suffix and talk about words that include it. There is a “prefix (or suffix) square” where kids practice forming new words. Then the kids do a written assignment where they add the root to base words, then define them and write sample sentences. The next couple of days are puzzles. There is a magic square and a word search, and then the kids do a fill-in-the-blank activity with their new words.
Bug loves to read and has a very good vocabulary going into this program. He is able to read at a level much higher than his grade, but I wasn’t sure he was understanding everything he read. At first Bug balked at the assignments in the first book, because he knew many of the words, but he loves puzzles and games, and he came around quickly to the lessons.
This program is written for a classroom instead of for one-on-one teaching. I would love to see the company create a homeschool line from these books, but even without homeschool-specific instruction, I found it easy to use. The instructions will say something like “have the students call out sample sentences using the words” or “have the class discuss which word best fits each context.” You don’t actually need a classroom full of kids to complete any of these activities (siblings can discuss together, or parents can chat with students about their ideas), but you will need to read the instructions and adapt them to your home in this way. Because of this, I didn’t always use the teacher’s manual when working with Bug. Many days, I would have him get out the workbook and sit down at the kitchen table while I was doing household tasks (like cooking or mopping the floor), and he would talk to me about the root covered in his worksheet, and then he would complete the task for the day. The teacher’s manual was very valuable for the first few lessons, but we both soon learned that the lessons follow the same repeating pattern, so I didn’t always need to consult it.
This is a curriculum that I plan on leaving in our homeschool rotation for the rest of the school year. Bug really enjoys the simple puzzle-based lessons, and I love that it doesn’t take a huge amount of time for me to teach.
I feel like it has a good “bang for the buck” in that a small investment of time really is helping him learn a huge number of words. Dynamic Literacy also has an online program, and if you have a full teaching schedule, it may be a wonderful fit for your family, because it takes a lot of the content covered here and makes it fully independent for your child. Click on the banner below to read more reviews of the program shown here, and of the online version of the curriculum.
What does a child's brain need for optimal development? For an organ that takes up only two percent of the body's mass, the brain uses up to 20 percent of its energy. The brain needs a complex mixture of proteins, good fats, carbohydrates, vitamins, and minerals to function and grow. Better nutrition, therefore, translates to better brain development and academic performance in a child. Some foods are extremely healthy for the brain, which needs them for its proper development.
Carbohydrates
- Carbohydrates are the main sources of glucose, which provides energy to the body.
- Starchy foods, whole-grain bread, and the fiber in vegetables and fruits are the best sources of complex carbohydrates. These release energy slowly and maintain optimal brain functioning.
- Choosing whole-grain foods, such as whole-grain bread, pasta, or oatmeal, instead of white bread, and avoiding sugary food is advisable.
Fats
- They are extremely important since the brain is high in fat. Omega-3 fatty acids are found in fish (such as salmon and cod) and flaxseed. Omega-6 is found in poultry, eggs, and avocado.
- It’s better to avoid trans fats or hydrogenated fats, such as those found in cakes and biscuits, since they can obstruct the functioning of the essential fatty acids.
Amino acids
- Amino acids constitute the neurotransmitters in the brain, which help regulate moods, sleep-wake cycles, and memory.
- Milk and oats contain the amino acid tryptophan, which produces serotonin. Serotonin is a neurotransmitter that controls sleep and happiness.
Vitamins and minerals
- They are important for the proper functioning of the body. A lack of vitamins and minerals can affect brain functions and mood.
- Vitamins such as folate and B12 are important for the proper functioning of the nervous system. Folate is found in leafy vegetables such as kale, spinach, and broccoli; B12 is found in eggs, fish, and whole grains.
- A deficit in these vitamins can lead to memory problems, fatigue, weak muscles, mouth ulcers, and psychological problems.
Protein
- Protein is vital to building the cells that make up the body. Children need protein, especially during the growth years. It also is essential for brain cell development.
- High-quality protein sources include milk, eggs, meat, chicken, and fish.
Iron
- Iron is important for blood cell formation and healthy brain development. Children must get enough of it every day.
- The main sources of iron are red meat, tuna, salmon, eggs, legumes, dried fruits (such as raisins and dates), green leafy vegetables (such as spinach and broccoli), and whole grains (such as wheat).
Zinc
- Zinc deficiency can lead to slower mental development. Most children do not get the right amount of zinc.
- Zinc-rich foods include meat, fish, eggs, cheese, nuts, and grains.
Lutein and zeaxanthin
- Technically two different nutrients, both lutein and zeaxanthin are carotenoids (plant pigments with strong antioxidant properties). These have been found to support memory, improve processing speed and efficiency, and perhaps even promote academic performance, especially when consumed together.
- Dark green leafy vegetables, such as spinach and kale, are great sources of lutein. Eggs, corn, kiwi, grapes, oranges, and zucchini pack plenty of both lutein and zeaxanthin as well.
Choline
- This nutrient is especially important for the proper functioning of a child’s brain because it acts as a precursor for the neurotransmitter acetylcholine.
- Choline is also a component of phospholipids and plays a major role in the development of cell membranes.
- Though choline can be manufactured by the body, the quantity is not sufficient. Therefore, including food sources rich in choline in children’s diets is necessary.
- Some of the rich sources of choline are eggs (egg yolk), beans, broccoli, sprouts, yogurt, and cauliflower.
What are the best brain foods for children?
Research shows that the following are common healthy foods for a child's brain:
- Vegetables: Tomatoes, broccoli, spinach, onions, carrots, Brussels sprouts, cucumber, and kale
- Fruits: Apples, bananas, grapes, strawberries, oranges, dates, and melons
- Nuts and seeds: Almonds, walnuts, macadamia nuts, hazelnuts, cashew nuts, sunflower seeds, and pumpkin seeds
- Legumes: Beans, lentils, peas, pulses, and chickpeas
- Tubers: Sweet potatoes, potatoes, turnips, and yams
- Whole grains: Whole oats, brown rice, rye, barley, corn, whole-grain bread, and pasta
- Fish and seafood: Fish such as sardines, tuna, trout, and mackerel; seafood such as oysters, shrimp, crabs, and mussels
- Poultry: Chicken, turkey, and duck
- Eggs: Duck, quail, and chicken eggs
- Dairy: Greek yogurt and cheese
- Herbs and spices: Garlic, basil, mint, sage, rosemary, nutmeg, and cinnamon
- Healthy fats: Extra virgin olive oil, avocados, olives, and avocado oil
- Others: Blueberries, peanuts, oatmeal, and regular water intake
A variety of nutrients, vitamins, and minerals contribute to the brain development of children. So try to provide children with balanced meals that include items from all the major food groups. Consulting a pediatrician or nutritionist to chart customized meal plans for children can help ensure proper brain development.
Perennials are growing all around us—in fields, forests, and grasslands. These plants regenerate themselves each year and survive through a hardy network of roots. Unfortunately, many farmers in the industrialized world rely on monocultures of annual crops that need to be planted from season to season and can place a heavy toll on soil, water sources, and biodiversity. Sixty-nine percent of global croplands are composed of cereals, oilseeds, and legumes, all crops that need to be planted annually. But organizations like The Land Institute are asking why perennial crops aren't a bigger part of modern farming. The Land Institute is aiming to reshape agriculture by creating perennial plant varieties that regenerate year after year and have a range of environmental and nutritional benefits.
The United States currently loses 1.7 billion tons of topsoil a year. According to Wes Jackson, director of The Land Institute, “the plow has destroyed more options for future generations than the sword.” Developing perennial varieties of grains, legumes, and vegetables can help save precious soil. While plant ground cover prevents soil erosion, the main difference lies in the roots. Jerry Glover, agroecologist and Senior Sustainable Agriculture Advisor to the U.S. Agency for International Development, explains, “perennial roots go deep — some as deep as 10 feet — and they will sustain the plant for many years. Way down there, the roots can capture more groundwater. Those deep, better-established roots also help cycle nutrients in the soil and make them more available to plants.”
Researchers at The Land Institute are currently working to develop perennial grain varieties that produce substantial yields and more resilient food systems. Perennial crops are developed either by selecting wild perennial plants with the best crop potential (domestication) or by crossing annual grains with a related perennial species (hybridization). Varieties currently being developed include kernza (a wheat-wheatgrass hybrid), sorghum, sunflower, and wheat. Sorghum, a food staple in some African countries, can account for up to 40 percent of African diets. According to Andrew Paterson, research professor and director of the University of Georgia Plant Genome Mapping Laboratory, developing a perennial sorghum crop would meet challenges of food security, climate change, and energy supply. By providing multiple harvests, the perennial crop would increase farmers’ incomes, and by preventing erosion, conserving nutrients, and providing ground cover, perennial sorghum would conserve African soils.
And a lot of perennials are hiding in plain sight. Common perennial vegetables include asparagus, rhubarb, sunchokes, and ramps. Legumes like pigeon peas also grow perennially. These species can be integrated into edible landscapes and gardens as low-maintenance crops that can also stabilize soil and provide a natural source of fertilizer to other crops. Perennial agriculture isn’t limited to grains and a handful of vegetables—fruit orchards are the ultimate example of perennial agriculture, and food forests may be the next great idea to maximize the productivity of tree crops and improve food security. A concept widely celebrated in permaculture, food forests maximize vertical space by planting layers of trees and bush crops. For example, less than an acre of land could sustain a canopy of bananas, a middle tier of fruit and nut trees, and an understory of berry bushes, climbing beans, and vegetables.
In Seattle, the newly established Beacon Food Forest covers seven acres of public land and is open to the community for foraging. Developed by permaculture students, the food forest contains walnuts, plums, apples, berries, vegetables, and herbs, with more varieties to be added in the coming years. And researchers are just discovering food forests that have fed communities from around the world for centuries. A food forest located on a Moroccan oasis is estimated to have been in production for 2,000 years, and still bears fruit from its fertile soils. With minimal inputs, the tiered forest produces dates, bananas, olives, figs, pomegranates, guavas, citrus, mulberries, tamarinds, carobs, quince, and grapes. Similar food forest systems containing tropical plants have been discovered in Vietnam. On Wednesday, Jerry Glover will be joining Food Tank to host an exclusive webinar on perennial agriculture, entitled “Farms of the Future.” The webinar will take place at 12pm EST, and will include a question period for listeners. Interested participants can register here. It’s time to rethink current agricultural practices that are harming soils and future food security. Sometimes it’s necessary to take a few steps back in order to move forward.
It’s no secret that college students struggle to get sleep. Between juggling midterms, homework, and 8:30 a.m. classes, it isn’t just a stereotype that students are sleep-deprived. At times, this sleep deprivation is merely the symptom of a busy week. It’s momentary and doesn’t last forever. However, when this sleeplessness persists, it may be a sign of a sleep disorder that makes it difficult to achieve rest, regardless of having the time to do so. Insomnia is broadly characterized by difficulty initiating and maintaining sleep, along with unfulfilling or “nonrestorative” sleep. Those diagnosed with the disorder experience these symptoms for a period of at least three months, even with ample opportunity for sleep. A recent review of studies published from 2000 to 2014 found that 18.5% of college students have sleeping habits consistent with insomnia. This is an astoundingly high rate, especially when compared to that of the general adult population, which is about 7.5%. What is it about college students that makes them so susceptible to sleep disorders like insomnia?
The Two-Process Model: An Overview of Sleep
Since the 1980s, scientists have considered sleep to work by a two-process model. The two-process model describes the interaction between the circadian rhythm and various homeostatic factors. The circadian rhythm, or “Process C,” is dictated by a structure in the hypothalamus called the suprachiasmatic nucleus (SCN). The SCN receives signals prompted by the presence or absence of light and projects signals elsewhere in the brain to promote either sleepiness or wakefulness. In this way, the SCN effectively times sleep-wake cycles in terms of daylight. Sleep cycles are also regulated by homeostatic factors through “Process S,” which refers to the body’s rising inclination toward sleep the longer an individual is awake. (A toy numerical sketch of this interaction appears at the end of this passage.) Disruption to the circadian rhythm acts as a primary component in insomnia development. This may be problematic for students, who have a longer circadian period than most adults: an average of 24.27 hours in teenagers and young adults versus 24.1 hours in adults. A longer circadian period causes a natural delay in the onset of sleep, making it difficult to fall asleep at the same time every night. While a disparity of only 0.17 hours between adolescents’ and adults’ circadian periods seems meager, the weekly accumulation of this shift in bedtime can lead to an overall loss of up to three hours of sleep on a given school night. Light exposure moderates this shift in Process C. Individuals with lengthier circadian rhythms require light exposure earlier in the morning in order to advance the circadian oscillation. While this earlier light exposure is necessary for students to maintain their sleep timing, it isn’t always a realistic option because adolescents tend to have later sleep and wake cycles. Sleep cycles geared toward waking and falling asleep later in the day can make rising early and getting the necessary light exposure challenging. This is just the tip of the iceberg when it comes to students’ susceptibility to insomnia-promoting sleep patterns. There are currently a variety of possible models to explain insomnia, from biological mechanisms concerning sleep-related hormones to psychological theories about stress. One of these psychological theories is the diathesis-stress model.
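Here is the toy numerical sketch of the two-process interaction mentioned above; it is illustrative only and not part of the original article. The time constants, thresholds, and circadian amplitude are assumed values chosen to reproduce the qualitative behavior described (sleep pressure, Process S, builds while awake and dissipates during sleep, while a roughly 24-hour oscillation, Process C, modulates the thresholds for falling asleep and waking); only the 24.27-hour period echoes the figure quoted above.

```python
# Toy simulation of the two-process model of sleep regulation (illustrative only).
# All parameter values are assumptions for illustration, not values from the literature.
import math

TAU_RISE = 18.2     # hours; how quickly sleep pressure builds while awake (assumed)
TAU_DECAY = 4.2     # hours; how quickly sleep pressure dissipates during sleep (assumed)
PERIOD_H = 24.27    # hours; the longer circadian period reported for young adults
C_AMPLITUDE = 0.10  # size of the circadian modulation of the thresholds (assumed)
UPPER, LOWER = 0.67, 0.17  # mean thresholds for sleep onset and waking (assumed)
DT = 0.1            # simulation step in hours


def circadian(t_hours):
    """Process C: a simple sinusoid with the assumed circadian period."""
    return C_AMPLITUDE * math.sin(2 * math.pi * t_hours / PERIOD_H)


def simulate(days=4):
    """Step Process S forward in time and record sleep-onset and wake events."""
    s, awake, t = 0.5, True, 0.0
    events = []
    for _ in range(int(days * 24 / DT)):
        c = circadian(t)
        if awake:
            # Sleep pressure saturates toward 1 while awake.
            s += (1.0 - s) * DT / TAU_RISE
            if s > UPPER + c:   # pressure crosses the circadian-modulated upper threshold
                awake = False
                events.append((round(t % 24, 1), "fall asleep"))
        else:
            # Sleep pressure decays toward 0 during sleep.
            s -= s * DT / TAU_DECAY
            if s < LOWER + c:   # pressure falls below the lower threshold: wake up
                awake = True
                events.append((round(t % 24, 1), "wake up"))
        t += DT
    return events


if __name__ == "__main__":
    for clock_time, event in simulate():
        print(f"~{clock_time:5.1f} h after start (mod 24): {event}")
```

With the slightly-longer-than-24-hour period assumed here, the printed sleep-onset and wake times tend to drift later from day to day, which mirrors the natural delay described above. The diathesis-stress model, introduced next, approaches the same problem from a psychological angle.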
The term “diathesis-stress” indicates that individuals with certain predisposing factors are more likely to develop insomnia when stress is placed on their sleep schedule. The model proposes that insomnia may be derived from three main determinants: predisposing, precipitating, and perpetuating factors. Predisposing factors refer to genetic or physiological characteristics that may put an individual at risk for developing the sleep disorder. For example, a predisposing factor might be a family history of insomnia, implying a hereditary susceptibility to the condition. Precipitating factors, on the other hand, are environmental or psychological stressors that might influence a person to develop abnormal sleeping habits. Perpetuating factors encompass any psychological, biological, or environmental circumstances that interrupt an individual’s ability to return to a normal, healthy sleep schedule. These determinants combine to explain how an individual may experience insomnia for a long period of time, even when the initial precipitating factors have been eliminated. This model identifies how common habits in the student lifestyle may contribute to the psychological factors behind student insomnia. For example, a common precipitating factor for students is their varying class schedules throughout the week. This environmental circumstance may cause students to shift their bedtimes from day to day, resulting in difficulty adjusting toward sleepiness each night. In terms of perpetuating factors, college students under academic pressure may experience enough psychological stress to keep them awake at night. This part of the student lifestyle preserves the disorder, as students may continue to struggle with falling asleep even if a precipitating factor, such as an irregular sleep schedule, were to cease. While the diathesis-stress model is useful in assessing the psychological perspective of insomnia, it lacks the neurobiological explanations that may be necessary for understanding the development and progression of the disorder in students.
The Neurocognitive Model & Hyperarousal
The neurocognitive model for insomnia combines the diathesis-stress model with components of neurophysiology. The model suggests that insomnia is associated with hyperactive neural activity, or hyperarousal. Hyperarousal is one of the most widely accepted conditions known to contribute to insomnia. [3, 4] This term refers to the overstimulation of both the central and autonomic nervous systems, meaning it affects the brain and spinal cord as well as certain peripheral functions of the nervous system, such as metabolic rate and the fight-or-flight response. Hyperarousal is typically assessed via physiological measures of stress response, such as variations in heart rate, heightened levels of the stress hormone cortisol, or the rhythm of neural activity recorded with electroencephalogram scans. The neurocognitive model also claims that thinking about having insomnia can lead to worse symptoms, as insomniacs may unintentionally develop sleep-preventing habits in response to this reflection. These habits are evident in certain maladaptive behaviors (daytime naps, alcohol consumption, etc.) that reinforce sleepless behavior, making them perpetuating factors in the progression of the disorder. College students are particularly susceptible to hyperarousal. According to an article published in the Journal of Adolescent Health, students are prone to this state of arousal because of developmental changes in the HPA axis.
The HPA axis, or hypothalamic-pituitary-adrenal axis, is the central stress response system. The axis works via neuroendocrine feedback loops that promote the production of stress-related hormones, such as cortisol. Adolescents become more vulnerable to the effects of this stress near sleep onset as their capacity for deep sleep diminishes past childhood. This is due to adolescents’ transition toward a reduced threshold for cortisol during deep, slow-wave sleep. Being vulnerable to such effects of cortisol perpetuates these hyperarousal states among adolescents.
A more recent theory called “social jetlag” may also provide insight into insomnia in students. This theory is based on the idea that individuals have their own 24-hour cycles of high to low sleep propensity. The term “social jetlag” describes the discrepancy between sleep timing on “work” days and “free” days, during which the individual’s circadian rhythm becomes out of sync with the environment. [8, 9] The theory articulates that this misalignment of alertness may augment hyperarousal at night, displacing sleep time and making it difficult for the person to initiate sleep. Social jetlag can be particularly difficult for students with late chronotypes. Late chronotypes refer to those with high sleep propensity at relatively later times than others. People with late chronotypes experience the largest adjustment from free days back to work days, as Western education schedules are typically geared toward earlier chronotypes. This adjustment results in the individual accumulating “sleep debt,” which is the sleep deficit a person may amass due to waking early after acclimating to a late bedtime or by otherwise shortening natural sleep. This phenomenon is highly relevant to college students, who tend to sleep late on weekends because of social gatherings and then attend early classes the following week. It is important to note that while this particular theory may be applicable to students, it is only one component in understanding what derails college students’ sleep toward insomnia. While it is not a direct cause, social jetlag can be thought of as a perpetuating factor of insomnia, contributing to the chronicity of the sleep disorder by further disrupting students’ circadian rhythms.
It’s easy to neglect sleep with a stressful academic schedule. At times, it might seem better to prioritize precious study hours or downtime with friends. Yet it’s the little things that can make a big difference in avoiding the pitfalls of sleep patterns associated with insomnia. This introduces the importance of sleep hygiene, which can be defined as any measure taken to improve an individual’s sleep quality or lessen behaviors that may derail sleep quality. Some examples of sleep hygiene practices include establishing quiet sleep environments and adhering to regular wake and sleep times. Missteps in these sleep hygiene behaviors are capable of predicting the severity of insomnia. These missteps involve arousal before bedtime, improper sleep scheduling, or inhabiting a disruptive sleeping environment, all of which are found to be positively associated with the severity of insomnia in college students. So, though it’s tempting to scroll through Twitter before drifting off or to stay up late to chat with roommates, it isn’t a great idea to practice these habits before bed. Changes that minimize these behaviors, big or small, may be a good first step toward avoiding student insomnia.
- Diagnostic and statistical manual of mental disorders: DSM-5. (2013). Arlington, VA: American Psychiatric Association.
- Jiang, X., Zheng, X., Yang, J., Ye, C., Chen, Y., Zhang, Z., & Xiao, Z. (2015). A systematic review of studies on the prevalence of insomnia in university students. Public Health, 129(12), 1579-1584. doi:10.1016/j.puhe.2015.07.030
- Mai, E., & Buysse, D. J. (2008). Insomnia: Prevalence, impact, pathogenesis, differential diagnosis, and evaluation. Sleep Medicine Clinics, 3(2), 167-174. http://doi.org/10.1016/j.jsmc.2008.02.001
- Levenson, J. C., Kay, D. B., & Buysse, D. J. (2015). The pathophysiology of insomnia. Chest, 147(4), 1179-1192. http://doi.org/10.1378/chest.14-1617
- Forbes, E. E., Williamson, D. E., Ryan, N. D., Birmaher, B., Axelson, D. A., & Dahl, R. E. (2006). Peri-sleep-onset cortisol levels in children and adolescents with affective disorders. Biological Psychiatry, 59(1), 24-30. doi:10.1016/j.biopsych.2005.06.002
- Borbély, A. A., Daan, S., Wirz-Justice, A., & Deboer, T. (2016). The two-process model of sleep regulation: A reappraisal. Journal of Sleep Research, 25(2), 131-143. doi:10.1111/jsr.12371
- Hershner, S., & Chervin, R. (2014). Causes and consequences of sleepiness among college students. Nature and Science of Sleep, 73. doi:10.2147/nss.s62907
- Smarr, B. L., & Schirmer, A. E. (2018). 3.4 million real-world learning management system logins reveal the majority of students experience social jet lag correlated with decreased performance. Scientific Reports, 8(1). doi:10.1038/s41598-018-23044-8
- Wittmann, M., Dinich, J., Merrow, M., & Roenneberg, T. (2006). Social jetlag: Misalignment of biological and social time. Chronobiology International, 23(1-2), 497-509. doi:10.1080/07420520500545979
- Gellis, L. A., Park, A., Stotsky, M. T., & Taylor, D. J. (2014). Associations between sleep hygiene and insomnia severity in college students: Cross-sectional and prospective analyses. Behavior Therapy, 45(6), 806-816. doi:10.1016/j.beth.2014.05.002
Tasman Sea, section of the southwestern Pacific Ocean, between the southeastern coast of Australia and Tasmania on the west and New Zealand on the east; it merges with the Coral Sea to the north and encloses a body of water about 1,400 miles (2,250 km) wide and 900,000 square miles (2,300,000 square km) in area. Bass Strait (between Tasmania and Australia) leads southwest to the Indian Ocean, and Cook Strait (between North and South islands, New Zealand) leads east to the Pacific. The sea was named for the Dutch navigator Abel Tasman, who navigated it in 1642. Its New Zealand and Australian shorelines were explored in the 1770s by the British mariner Captain James Cook and others. The sea reaches a maximum depth exceeding 17,000 feet (5,200 m), and the seafloor’s most distinctive feature is the Tasman Basin. The South Equatorial Current and trade wind drift feed the southerly moving East Australian Current, which is the dominant influence along the Australian coast. From July to December its effect is minimal, and colder waters from the south may penetrate as far north as latitude 32° S. Lord Howe Island, situated at this parallel, represents the most southerly development of a modern coral reef. In the eastern Tasman Sea, surface circulation is controlled by a stream from the western Pacific from January to June and by colder sub-Antarctic water moving north through Cook Strait from July to December. These various currents tend to make the southern Tasman Sea generally temperate in climate and the northern part subtropical. Lying in the belt of westerly winds known as the “roaring forties,” the sea is noted for its storminess. The sea is crossed by shipping lanes between New Zealand and southeastern Australia and Tasmania, and its economic resources include fisheries and petroleum deposits in the Gippsland Basin at the eastern end of Bass Strait.
Summary and Keywords
Phonetics is the branch of linguistics that deals with the physical realization of meaningful distinctions in spoken language. Phoneticians study the anatomy and physics of sound generation, acoustic properties of the sounds of the world’s languages, the features of the signal that listeners use to perceive the message, and the brain mechanisms involved in both production and perception. Therefore, phonetics connects most directly to phonology and psycholinguistics, but it also engages a range of disciplines that are not unique to linguistics, including acoustics, physiology, biomechanics, hearing, evolution, and many others. Early theorists assumed that phonetic implementation of phonological features was universal, but it has become clear that languages differ in their phonetic spaces for phonological elements, with systematic differences in acoustics and articulation. Such language-specific details place phonetics solidly in the domain of linguistics; any complete description of a language must include its specific phonetic realization patterns. The description of what phonetic realizations are possible in human language continues to expand as more languages are described; many of the under-documented languages are endangered, lending urgency to the phonetic study of the world’s languages. Phonetic analysis can consist of transcription, acoustic analysis, measurement of speech articulators, and perceptual tests, with recent advances in brain imaging adding detail at the level of neural control and processing. Because of its dual nature as a component of a linguistic system and a set of actions in the physical world, phonetics has connections to many other branches of linguistics, including not only phonology but syntax, semantics, sociolinguistics, and clinical linguistics as well. Speech perception has been shown to integrate information from both vision and tactile sensation, indicating an embodied system. Sign language, though primarily visual, has adopted the term “phonetics” to represent the realization component, highlighting the linguistic nature both of phonetics and of sign language. Such diversity offers many avenues for studying phonetics, but it presents challenges to forming a comprehensive account of any language’s phonetic system.
1. History and Development of Phonetics
Much of phonetic structure is available to direct inspection or introspection, allowing a long tradition in phonetics (see also articles in Asher & Henderson, 1981). The first true phoneticians were the Indian grammarians of about the 8th or 7th century BCE. In their works, called Prātiśākhyas, they organized the sounds of Sanskrit according to places of articulation, and they also described all the physiological gestures that were required in the articulation of each sound. Every writing system, even those not explicitly phonetic or phonological, includes elements of the phonetic systems of the languages denoted. Early Semitic writing (from Phoenician onward) primarily encoded consonants, while the Greek system added vowels explicitly. The Chinese writing system includes phonetic elements in many, if not most, characters (DeFrancis, 1989), and modern readers access phonology while reading Mandarin (Zhou & Marslen-Wilson, 1999). The Mayan orthography was based largely on syllables (Coe, 1992). All of this required some level of awareness of phonetics.
Attempts to describe phonetics universally are more recent in origin, and they fall into the two domains of transcription and measurement. For transcription, the main development was the creation of the International Phonetic Alphabet (IPA) (e.g., International Phonetic Association, 1989). Initiated in 1886 as a tool for improving language teaching and, relatedly, reading (Macmahon, 2009), the IPA was modified and extended both in terms of the languages covered and the theoretical underpinnings (Ladefoged, 1990). This system is intended to provide a symbol for every distinctive sound in the world’s languages. The first versions addressed languages familiar to the European scholars primarily responsible for its development, but new sounds were added as more languages were described. The 79 consonantal and 28 vowel characters can be modified by an array of diacritics, allowing greater or lesser detail in the transcription. There are diacritics for suprasegmentals, both prosodic and tonal, as well. Additions have been made for the description of pathological speech (Duckworth, Allen, Hardcastle, & Ball, 1990). It is often the case that transcriptions for two languages using the same symbol nonetheless have perceptible differences in realization. Although additional diacritics can be used in such cases, it is more often useful to ignore such differences for most analysis purposes. Despite some limitations, the IPA continues to be a valuable tool in the analysis of languages, language use, and language disorders throughout the world. For measurement, there are two main signals to record, the acoustic and the articulatory. Although articulation is inherently more complex and difficult to capture completely, it was more accessible to early techniques than were the acoustics. Various ingenious devices were created by Abbé Rousselot (1897–1908) and E. W. Scripture (1902). Rousselot’s devices for measuring the velum (Figure 1) and the tongue (Figure 2) were not, unfortunately, terribly successful. Pliny Earl Goddard (1905) used more successful devices and was ambitious enough to take his equipment into the field to record dynamic air pressure and static palatographs of such languages as Hupa [ISO 639-3 code hup] and Chipewyan [ISO 639-3 code chp]. Despite these early successes, relatively little physiological work was done until the second half of the 20th century. Technological advances have made it possible to examine muscle activity, airflow, tongue-palate contact, and location and movement of the tongue and other articulators via electromagnetic articulometry, ultrasound, and real-time magnetic resonance imaging (see Huffman, 2016). These measurements have advanced our theories of speech production and have addressed both phonetic and phonological issues. Acoustic recordings became possible with the Edison disks, but the ability to measure and analyze these recordings was much longer in coming. Some aspects of the signal could be somewhat reasonably rendered via flame recordings, in which photographs were taken of flames flickering in response to various frequencies (König, 1873). These records were of limited value, because of the limitations of the waveform itself and the difficulties of the recordings, including the time and expense of making them. Further, the ability to see the spectral properties in detail was greatly enhanced by the declassification (after World War II) of the spectrograph (Koenig, Dunn, & Lacy, 1946; Potter, Kopp, & Green, 1947). 
New methods of analysis are constantly being explored, with greater accuracy and refinement of data categories being the result. Sound is the most obvious carrier of language (and is etymologically embedded in “phonetics”), and the recognition that vision also plays a role in understanding speech came relatively late. Not only do those with typical hearing use vision when confronted with noisy speech (Sumby & Pollack, 1954), they can even be misled by vision with speech that is clearly audible (McGurk & MacDonald, 1976). Although the lips and jaw are the most salient carriers of speech information, areas of the face outside the lip region co-vary with speech segments (Yehia, Kuratate, & Vatikiotis-Bateson, 2002). Audiovisual integration continues as an active area of research in phonetics. Sign language, a modality largely devoid of sound, has also adopted the term “phonetics” to describe the system of realization of the message (Goldin-Meadow & Brentari, 2017; Goldstein, Whalen, & Best, 2006). Similarities between reduction of speech articulators and American Sign Language (ASL) indicate that both systems allow for (indeed, may require) reduction in articulation when content is relatively predictable (Tyrone & Mauk, 2010). There is evidence that unrelated sign languages use the same realization of telicity, that is, whether an action has an inherent (“telic”) endpoint (e.g., “decide”) or not (“atelic”, e.g., “think”) (Strickland et al., 2015). Phonetic constraints, such as maximum contrastiveness of hand shapes, have been explored in an emerging sign language, Al-Sayyid Bedouin Sign Language (Sandler, Aronoff, Meir, & Padden, 2011). As further studies are completed, we can expect to see more insights into the aspects of language realization that are shared across modalities, and to be challenged by those that differ.
2. Phonetics in Relation to Phonology
Just as phonetics describes the realization of words in a language, so phonology describes the patterns of elements that make meaningful distinctions in a language. The relation between the two has been, and continues to be, a topic for theoretical debate (e.g., Gouskova, Zsiga, & Boyer, 2011; Keating, 1988; Romero & Riera, 2015). Positions range from a strict separation in which the phonology completes its operations before the phonetics becomes involved (e.g., Chomsky & Halle, 1968) to a complete dissolution of the distinction (e.g., Flemming, 2001; Ohala, 1990). Many intermediate positions are proposed as well. Regardless of the degree of separation, the early assumption that phonetic implementation was merely physical and universal (e.g., Chomsky & Halle, 1968; Halliday, 1961), which may invoke the mind-body problem (e.g., Fodor, 1981), has proven to be inadequate. Keating (1985) examined three phonetic effects—intrinsic vowel duration, extrinsic vowel duration, and voicing timing—and found that they were neither universal nor physiologically necessary. Further examples of language- and dialect-specific effects have been found in the fine-grained detail in Voice Onset Time (Cho & Ladefoged, 1999), realization of focus (Peters, Hanssen, & Gussenhoven, 2014), and even the positions of speech articulators before speaking (Gick, Wilson, Koch, & Cook, 2004). Whatever the interface between phonetics and phonology may be, there exist language-specific phonetic patterns, thus ensuring the place of phonetics within linguistics proper.
The overwhelming evidence for “language-specific phonetics” has prompted a reconsideration of the second traditionally assumed distinction between phonology and phonetics: phonetics is continuous, phonology is discrete. This issue has been raised relatively less often in discussions of the phonetics-phonology interface. An approach to phonology that has addressed the need to bring purely representational phonological elements into logical compliance with their physical realization is Articulatory Phonology, whose elements have been claimed to be available for “public” (phonetic) use and yet categorical for making linguistic distinctions (Goldstein & Fowler, 2003). Iskarous (2017) shows how dynamical systems analysis unites discrete phonological contrast and continuous phonetic movement into one non-dualistic description. Gafos and his colleagues provide the formal mechanisms that use such a system to address a range of phonological processes (Gafos, 2002; Gafos & Beňuš, 2006; Gafos, Roeser, Sotiropoulou, Hoole, & Zeroual, 2019). Relating categorical, meaningful distinctions to continuous physical realizations is a problem that will continue to be worked on and, one hopes, ultimately be resolved.
3. Phonetics in Relation to Other Aspects of Language
Phonetic research has had far-reaching effects, many of which are outlined in individual articles in this encyclopedia. Here are several issues of particular interest.
Perception: Until the advent of an easily manipulated acoustic signal, it was very difficult to determine which aspects of the speech signal are taken into account perceptually. The Pattern Playback (Cooper, 1953) was an early machine that allowed the synthesis of speech from acoustic parameters. The resulting acoustic patterns did not sound completely natural, but they elicited speech percepts that allowed discoveries to be made, ones that have been replicated in many other studies (cf. Shankweiler & Fowler, 2015). These findings have led to a sizable range of research in linguistics and psychology (see Beddor, 2017). Many studies of brain function also take these results as a starting point.
Acquisition: Learning to speak is natural for neurologically typical infants, with no formal instruction necessary for the process. Just how this process takes place depends on phonetic findings, so that the acoustic output of early productions can be compared with the target values in the adult language. Whether or not linguistic categories are “innate,” the development of links between what the learner hears and what s/he speaks is a matter of ongoing debate that would not be possible without phonetic analysis. Much if not most of the world’s population is bi- or multilingual, and the phonetic effects of second language learning have received a great deal of attention (Flege, 2003). The phonetic character of a first language (L1) usually has a great influence on the production and perception of a second one (L2). The effects are smaller when L2 is acquired earlier in life than later, and there is a great deal of individual variability. Degree of L2 accent has been shown to be amenable to improvement via biofeedback (d’Apolito, Sisinni, Grimaldi, & Gili Fivela, 2017; Suemitsu, Dang, Ito, & Tiede, 2015).
Sociolinguistics: Phonetic variation within a language is a strong indicator of community membership (Campbell-Kibler, 2010; Foulkes & Docherty, 2006). From the biblical story of the shibboleth to modern everyday experience, speech indicates origin.
Perception of an accent can thus lead to stereotypical judgements based on origin, such as assigning less intelligence to speakers who use “-in” rather than “-ing” (Campbell-Kibler, 2007). Accents can work in two directions at once, as when an African American dialect is simultaneously recognized as disfavored by mainstream society yet valued as a marker of social identity (Wolfram & Schilling, 2015, p. 238). The level of detail that is available to and used by speakers and listeners is massive, requiring large studies with many variables. This makes sociophonetics both exciting and challenging (Hay & Drager, 2007).
Speech therapy: Not every instance of language acquisition is a smooth one, and some individuals face challenges in speaking their language. The tools that are developed in phonetics help with assessment of the differences and, in some cases, provide a means of remediation. One particularly exciting development that depends on articulation rather than acoustics is the use of ultrasound biofeedback (using images of the speaker’s tongue) to improve production (e.g., Bernhardt, Gick, Bacsfalvi, & Ashdown, 2003; Preston et al., 2017).
Speech technology: Speech synthesis was an early goal of phonetic research (e.g., Holmes, Mattingly, & Shearme, 1964), and research continues to the present. Automatic speech recognition made use of phonetic results, though modern systems rely on more global treatments of the acoustic signal (e.g., Furui, Deng, Gales, Ney, & Tokuda, 2012). Man-machine interactions have benefited greatly from phonetic findings, helping to shape the modern world. Further advances may begin, once again, to make less use of machine learning and more of phonetic knowledge.
4. Future Directions
Phonetics as a field of study began with the exceptional discriminative power of the human ear, but recent developments have been increasingly tied to technology. As our ability to record and analyze speech increases, our use of larger and larger data sets increases as well. Many of those data sets consist of acoustic recordings, which are excellent but incomplete records for phonetic analysis. Greater attention is being paid to variability in the signal, both in terms of covariation (Chodroff & Wilson, 2017; Kawahara, 2017) and intrinsic lack of consistency (Tilsen, 2015; Whalen, Chen, Tiede, & Nam, 2018). Assessing variability depends on the accuracy of individual measurements, and our current automatic formant analyses are known to be inaccurate (Shadle, Nam, & Whalen, 2016). Future developments in this domain are needed, but current studies that use current techniques must be appropriately limited in their interpretation. Large data sets are rare for physiological data, though there are some exceptions (Narayanan et al., 2014; Tiede, 2017; Westbury, 1994). Quantification of articulator movement is easier than in the past, but it remains challenging in both collection and analysis. Mathematical tools for image processing and pattern detection are being adapted to the problem, and the future understanding of speech production should be enhanced. Although many techniques are too demanding for some populations, ultrasound has been found to allow investigations of young children (Noiray, Abakarova, Rubertus, Krüger, & Tiede, 2018), speakers in remote areas (Gick, Bird, & Wilson, 2005), and disordered populations (Preston et al., 2017). Thus the amount of data and the range of populations that can be measured can be expected to increase significantly in the coming years.
Our understanding of the brain mechanisms that underlie the phonetic effects studied by other means will continue to expand. Improvements in the specificity of brain imaging techniques will allow narrower questions to be addressed. Techniques such as electrocorticographic (ECoG) signals (Hill et al., 2012), functional near-infrared spectroscopy (Yücel, Selb, Huppert, Franceschini, & Boas, 2017), and the combination of multiple modalities will allow more direct assessments of phonetic control in production and effects in perception. As with other levels of linguistic structure, theories will be both challenged and enhanced by evidence of brain activation in response to language. Addressing more data allows a deeper investigation of the speech process, and technological advances will continue to play a major role. The study does, ultimately, return to the human perception and production ability, as each newly born speaker/hearer begins to acquire speech and the language it makes possible.

References

Fant, G. (1960). Acoustic theory of speech production. The Hague, The Netherlands: Mouton.
Hardcastle, W. J., & Hewlett, N. (Eds.). (1999). Coarticulation models in recent speech production theories. Cambridge, UK: Cambridge University Press.
Ladefoged, P. (2001). A course in phonetics (4th ed.). Fort Worth, TX: Harcourt College Publishers.
Ladefoged, P., & Maddieson, I. (1996). The sounds of the world’s languages. Oxford, UK: Blackwell.
Liberman, A. M. (1996). Speech: A special code. Cambridge, MA: MIT Press.
Lisker, L., & Abramson, A. S. (1964). A cross-language study of voicing in initial stops: Acoustical measurements. Word, 20, 384–422. doi:10.1080/00437956.1964.11659830
Ohala, J. J. (1981). The listener as a source of sound change. In M. F. Miller (Ed.), Papers from the parasession on language behavior (pp. 178–203). Chicago, IL: Chicago Linguistic Association.
Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24, 175–184.
Stevens, K. N. (1998). Acoustic phonetics. Cambridge, MA: MIT Press.
Asher, R. E., & Henderson, J. A. (Eds.). (1981). Towards a history of phonetics. Edinburgh, UK: Edinburgh University Press.
Beddor, P. S. (2017). Speech perception in phonetics. In M. Aronoff (Ed.), Oxford research encyclopedia of linguistics. Oxford University Press. doi:10.1093/acrefore/9780199384655.013.62
Bernhardt, B. M., Gick, B., Bacsfalvi, P., & Ashdown, J. (2003). Speech habilitation of hard of hearing adolescents using electropalatography and ultrasound as evaluated by trained listeners. Clinical Linguistics and Phonetics, 17, 199–216.
Campbell-Kibler, K. (2007). Accent, (ing), and the social logic of listener perceptions. American Speech, 82(1), 32–64. doi:10.1215/00031283-2007-002
Campbell-Kibler, K. (2010). Sociolinguistics and perception. Language and Linguistics Compass, 4(6), 377–389. doi:10.1111/j.1749-818X.2010.00201.x
Cho, T., & Ladefoged, P. (1999). Variation and universals in VOT: Evidence from 18 languages. Journal of Phonetics, 27, 207–229.
Chodroff, E., & Wilson, C. (2017). Structure in talker-specific phonetic realization: Covariation of stop consonant VOT in American English. Journal of Phonetics, 61, 30–47. doi:10.1016/j.wocn.2017.01.001
Chomsky, N., & Halle, M. (1968). The sound pattern of English. New York, NY: Harper and Row.
Coe, M. D. (1992). Breaking the Maya code. London, UK: Thames and Hudson.
Cooper, F. S. (1953). Some instrumental aids to research on speech. In A. A. Hill (Ed.), Fourth Annual Round Table Meeting on Linguistics and Language Teaching (pp. 46–53). Washington, DC: Georgetown University.
d’Apolito, I. S., Sisinni, B., Grimaldi, M., & Gili Fivela, B. (2017). Perceptual and ultrasound articulatory training effects on English L2 vowels production by Italian learners. International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, 11(8), 2159–2167.
DeFrancis, J. (1989). Visible speech: The diverse oneness of writing systems. Honolulu: University of Hawai‘i Press.
Duckworth, M., Allen, G., Hardcastle, W., & Ball, M. (1990). Extensions to the International Phonetic Alphabet for the transcription of atypical speech. Clinical Linguistics and Phonetics, 4, 273–280. doi:10.3109/02699209008985489
Flege, J. E. (2003). Assessing constraints on second-language segmental production and perception. In N. O. Schiller & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 319–355). Berlin, Germany: Mouton de Gruyter.
Flemming, E. (2001). Scalar and categorical phenomena in a unified model of phonetics and phonology. Phonology, 18, 7–44.
Fodor, J. A. (1981). The mind-body problem. Scientific American, 244, 114–123.
Foulkes, P., & Docherty, G. (2006). The social life of phonetics and phonology. Journal of Phonetics, 34, 409–438. doi:10.1016/j.wocn.2005.08.002
Furui, S., Deng, L., Gales, M., Ney, H., & Tokuda, K. (2012). Fundamental technologies in modern speech recognition. IEEE Signal Processing Magazine, 29(6), 16–17.
Gafos, A. I. (2002). A grammar of gestural coordination. Natural Language and Linguistic Theory, 20, 269–337.
Gafos, A. I., & Beňuš, Š. (2006). Dynamics of phonological cognition. Cognitive Science, 30, 905–943. doi:10.1207/s15516709cog0000_80
Gafos, A. I., Roeser, J., Sotiropoulou, S., Hoole, P., & Zeroual, C. (2019). Structure in mind, structure in vocal tract. Natural Language and Linguistic Theory. doi:10.1007/s11049-019-09445-y
Gick, B., Bird, S., & Wilson, I. (2005). Techniques for field application of lingual ultrasound imaging. Clinical Linguistics and Phonetics, 19, 503–514.
Gick, B., Wilson, I., Koch, K., & Cook, C. (2004). Language-specific articulatory settings: Evidence from inter-utterance rest position. Phonetica, 61, 220–233.
Goddard, P. E. (1905). Mechanical aids to the study and recording of language. American Anthropologist, 7, 613–619. doi:10.1525/aa.1905.7.4.02a00050
Goldin-Meadow, S., & Brentari, D. (2017). Gesture, sign, and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences, 40, e46. doi:10.1017/S0140525X15001247
Goldstein, L. M., & Fowler, C. A. (2003). Articulatory phonology: A phonology for public language use. In N. Schiller & A. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 159–207). Berlin, Germany: Mouton de Gruyter.
Goldstein, L. M., Whalen, D. H., & Best, C. T. (Eds.). (2006). Papers in laboratory phonology 8. Berlin, Germany: Mouton de Gruyter.
Gouskova, M., Zsiga, E., & Boyer, O. T. (2011). Grounded constraints and the consonants of Setswana. Lingua, 121(15), 2120–2152. doi:10.1016/j.lingua.2011.09.003
Halliday, M. A. K. (1961). Categories of the theory of grammar. Word, 17, 241–292. doi:10.1080/00437956.1961.11659756
Hay, J., & Drager, K. (2007). Sociophonetics. Annual Review of Anthropology, 36(1), 89–103. doi:10.1146/annurev.anthro.34.081804.120633
Hill, N. J., Gupta, D., Brunner, P., Gunduz, A., Adamo, M. A., Ritaccio, A., & Schalk, G. (2012). Recording human electrocorticographic (ECoG) signals for neuroscientific research and real-time functional cortical mapping. Journal of Visualized Experiments, (64), 3993. doi:10.3791/3993
Holmes, J. N., Mattingly, I. G., & Shearme, J. N. (1964). Speech synthesis by rule. Language and Speech, 7(3), 127–143.
Huffman, M. K. (2016). Articulatory phonetics. In M. Aronoff (Ed.), Oxford research encyclopedia of linguistics. Oxford University Press.
International Phonetic Association. (1989). Report on the 1989 Kiel Convention. Journal of the International Phonetic Association, 19, 67–80.
Iskarous, K. (2017). The relation between the continuous and the discrete: A note on the first principles of speech dynamics. Journal of Phonetics, 64, 8–20. doi:10.1016/j.wocn.2017.05.003
Kawahara, S. (2017). Durational compensation within a CV mora in spontaneous Japanese: Evidence from the Corpus of Spontaneous Japanese. Journal of the Acoustical Society of America, 142, EL143–EL149. doi:10.1121/1.4994674
Keating, P. A. (1985). Universal phonetics and the organization of grammars. In V. A. Fromkin (Ed.), Phonetic linguistics: Essays in honor of Peter Ladefoged (pp. 115–132). New York, NY: Academic Press.
Keating, P. A. (1988). The phonology-phonetics interface. In F. Newmeyer (Ed.), Linguistics: The Cambridge survey: Vol. 1. Grammatical theory (pp. 281–302). Cambridge, UK: Cambridge University Press.
Koenig, W., Dunn, H. K., & Lacy, L. Y. (1946). The sound spectrograph. Journal of the Acoustical Society of America, 18, 19–49.
König, R. (1873). I. On manometric flames. London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 45(297), 1–18.
Ladefoged, P. (1990). The Revised International Phonetic Alphabet. Language, 66, 550–552. doi:10.2307/414611
Macmahon, M. K. C. (2009). The International Phonetic Association: The first 100 years. Journal of the International Phonetic Association, 16, 30–38. doi:10.1017/S002510030000308X
McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.
Narayanan, S., Toutios, A., Ramanarayanan, V., Lammert, A., Kim, J., Lee, S., . . . Proctor, M. (2014). Real-time magnetic resonance imaging and electromagnetic articulography database for speech production research (TC). Journal of the Acoustical Society of America, 136, 1307–1311. doi:10.1121/1.4890284
Noiray, A., Abakarova, D., Rubertus, E., Krüger, S., & Tiede, M. K. (2018). How do children organize their speech in the first years of life? Insight from ultrasound imaging. Journal of Speech, Language, and Hearing Research, 61, 1355–1368.
Ohala, J. J. (1990). There is no interface between phonology and phonetics: A personal view. Journal of Phonetics, 18, 153–172.
Peters, J., Hanssen, J., & Gussenhoven, C. (2014). The phonetic realization of focus in West Frisian, Low Saxon, High German, and three varieties of Dutch. Journal of Phonetics, 46, 185–209. doi:10.1016/j.wocn.2014.07.004
Potter, R. K., Kopp, G. A., & Green, H. G. (1947). Visible speech. New York, NY: Van Nostrand.
Preston, J. L., McAllister Byun, T., Boyce, S. E., Hamilton, S., Tiede, M. K., Phillips, E., . . . Whalen, D. H. (2017). Ultrasound images of the tongue: A tutorial for assessment and remediation of speech sound errors. Journal of Visualized Experiments, 119, e55123. doi:10.3791/55123
Romero, J., & Riera, M. (Eds.). (2015). The phonetics–phonology interface: Representations and methodologies. Amsterdam, The Netherlands: John Benjamins.
Rousselot, P.-J. (1897–1908). Principes de phonétique expérimentale. Paris, France: H. Welter.
Sandler, W., Aronoff, M., Meir, I., & Padden, C. (2011). The gradual emergence of phonological form in a new language. Natural Language and Linguistic Theory, 29, 503–543. doi:10.1007/s11049-011-9128-2
Scripture, E. W. (1902). The elements of experimental phonetics. New York, NY: Charles Scribner’s Sons.
Shadle, C. H., Nam, H., & Whalen, D. H. (2016). Comparing measurement errors for formants in synthetic and natural vowels. Journal of the Acoustical Society of America, 139, 713–727.
Shankweiler, D., & Fowler, C. A. (2015). Seeking a reading machine for the blind and discovering the speech code. History of Psychology, 18, 78–99.
Strickland, B., Geraci, C., Chemla, E., Schlenker, P., Kelepir, M., & Pfau, R. (2015). Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences, 112(19), 5968–5973. doi:10.1073/pnas.1423080112
Suemitsu, A., Dang, J., Ito, T., & Tiede, M. K. (2015). A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning. Journal of the Acoustical Society of America, 138, EL382–EL387. doi:10.1121/1.4931827
Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212–215.
Tiede, M. K. (2017). Haskins_IEEE_Rate_Comparison_DB.
Tilsen, S. (2015). Structured nonstationarity in articulatory timing. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (Paper number 78, pp. 1–5). Glasgow, UK: University of Glasgow.
Tyrone, M. E., & Mauk, C. E. (2010). Sign lowering and phonetic reduction in American Sign Language. Journal of Phonetics, 38, 317–328. doi:10.1016/j.wocn.2010.02.003
Westbury, J. R. (1994). X-ray microbeam speech production database user’s handbook. Madison: Waisman Center, University of Wisconsin.
Whalen, D. H., Chen, W.-R., Tiede, M. K., & Nam, H. (2018). Variability of articulator positions and formants across nine English vowels. Journal of Phonetics, 68, 1–14.
Wolfram, W., & Schilling, N. (2015). American English: Dialects and variation. Malden, MA: John Wiley & Sons.
Yehia, H. C., Kuratate, T., & Vatikiotis-Bateson, E. S. (2002). Linking facial animation, head motion and speech acoustics. Journal of Phonetics, 30, 555–568. doi:10.1006/jpho.2002.0165
Yücel, M. A., Selb, J. J., Huppert, T. J., Franceschini, M. A., & Boas, D. A. (2017). Functional near infrared spectroscopy: Enabling routine functional brain imaging. Current Opinion in Biomedical Engineering, 4, 78–86. doi:10.1016/j.cobme.2017.09.011
Zhou, X., & Marslen-Wilson, W. D. (1999). Phonology, orthography, and semantic activation in reading Chinese. Journal of Memory and Language, 41, 579–606.
Math and Literacy centers for kindergarten! These May center activities have EDITABLE RESPONSE PAGES. In this way, you can target the exact skills you want your classroom OR individual student to focus on during their center time. HERE ARE THE SKILLS COVERED IN THIS MONTH:
- beginning digraphs roll and write literacy center activity
- ending digraphs roll and write literacy center activity
- long vowels roll and write literacy center activity x 2 versions
- sentence writing roll and write literacy center activity
- EDITABLE sight word roll and write literacy center activity
- numbers: write the number or numerical order option roll and write math center activity
- ten frames: write the number, count on, or subtraction option roll and write math center activity
- tally marks: write the number, addition, or more or less option roll and write math center activity
- addition or making 10 option roll and color math center activity
TOTALLY CUSTOMIZE THESE CENTER ACTIVITIES! YOU decide the skills you want your students to focus on. Editable response pages and interchangeable cards allow you to differentiate and focus your students’ practice work. YOU can select which letters, sounds, words, or numeracy skills you want your students to focus on during math centers or literacy centers. Many provide a variety of levels for differentiation. These kindergarten center activities are intentionally predictable. No surprises! In this way, you will free up instructional time so you can meet with your small groups. Meanwhile, your students will be working in a meaningful way during center time. **SAVE 30% BY PURCHASING THE BUNDLE** NOTE: YOU WILL NEED TO PURCHASE THE CUBES. THEY ARE NOT INCLUDED IN THIS DOWNLOAD. You can purchase the cubes at teacher stores or you can buy them on Amazon. If you love these cubes, you will love the Roll a Story cubes found in The Kinderhearted Classroom’s store. Click HERE to see them!
Binding Data to DOM Elements

In this example, you will use D3.js to bind data to DOM elements of a basic webpage. Start with a basic HTML webpage that includes D3.js, then add the following JavaScript:

var theData = [ 1, 2, 3 ]

var p = d3.select("body").selectAll("p")
  .data(theData)
  .enter()
  .append("p")
  .text("hello ");

This will give you a page with three paragraphs, each reading "hello ". Congratulations - you have bound some data to some DOM elements using D3.js!

D3.js SelectAll Method

The D3.js SelectAll method uses CSS3 selectors to grab DOM elements. Unlike the Select method (where the first matching element is selected), the SelectAll method selects all the elements that match the specific selector string. But wait! The basic HTML page doesn't contain any <p> yet. What is it actually doing? What it is doing is selecting all of the <p> elements available on the page - which in this case is none - so it returns an empty selection. The later use of .data(theData) and .enter() will allow us to bind the data to this empty selection.

D3.js Data Operator

The data operator joins an array of data (which can be numbers, objects, or other arrays) with the current selection. In this example, there is no key provided, so each element of the theData array is assigned to each element of the current selection: the first element 1 is assigned to the first <p> element, the second to the second, and so on. But wait! The basic page doesn't contain any <p> yet. What is it actually doing?

D3.js Virtual Selections (Thinking with Joins)

The D3.js data operator returns three virtual selections rather than just the regular one like other methods. The three virtual selections are enter, update, and exit. The enter selection contains placeholders for any missing elements. The update selection contains existing elements, bound to data. Any remaining elements end up in the exit selection for removal. Since our selection of <p> elements was empty, the virtual enter selection now contains placeholders for our <p> elements. We will come back to the power of the virtual selections enter, update, and exit in later sections. For now we will concentrate on the enter virtual selection. To learn more, please visit the classic article by Mike Bostock, "Thinking with Joins".

D3.js Enter Method

The D3.js enter method returns the virtual enter selection from the data operator. This method only works on the data operator because the data operator is the only one that returns three virtual selections. In this case:

var p = d3.select("body").selectAll("p")
  .data(theData)
  .enter()

This will return a reference to the placeholder elements (nodes) for each data element that did not have a corresponding existing DOM element. Once we have this reference we can then operate on this selection. However, it is important to note that this reference only allows chaining of the append, insert, and select operators. After these operators have been chained to the .enter() selection, we can treat the selection just like any other selection to modify the content.

D3.js Append Operator Revisited

Looking at the code again:

var p = d3.select("body").selectAll("p")
  .data(theData)
  .enter()
  .append("p")

We .append("p") to the .enter() selection. For each placeholder element created in the previous step, a p element is inserted. Because we had three data points in our data array and no <p> elements on the webpage, the .append("p") creates and adds three HTML paragraph elements. In the example, after the append operator has operated on the selection, it returns a selection of three HTML paragraph elements.
D3.js Text Operator

If we wrote the code so that it was missing the last text operator, like this:

var theData = [ 1, 2, 3 ]

var p = d3.select("body").selectAll("p")
  .data(theData)
  .enter()
  .append("p");

then none of the paragraphs would contain any text, unlike in the previous example. The text operator sets the textContent of the node to the specified value on all selected elements. In this example, .text("hello "), the value is "hello ". Since the selection was three <p> elements, each element gets a "hello " inserted into it.

Where did the Data go?

We started with var theData = [ 1, 2, 3 ], and somehow we end up with three paragraphs that say "hello ". What happened to our numbers 1, 2, and 3?

D3.js Data Operator Revisited

If you inspect the body element in the console and click through the down arrows to see the properties of "body", you see something like this: beneath the line that reads 0: <body>, you can see the properties of this HTML body element, and our data appears as a property named __data__. When data is assigned to an element, it is stored in the property __data__. This makes the data "sticky", as the __data__ property is available on re-selection. This is what we mean when we talk about binding data to DOM elements.

Basic Example Revisited

Going back to our basic example at the top of the page, we can now see where the data was bound by using console.log():

var theData = [ 1, 2, 3 ]

var p = d3.select("body").selectAll("p")
  .data(theData)
  .enter()
  .append("p")
  .text("hello ");

console.log(p);

From the logged selection you can see the three paragraph elements that were appended. You can also see that the last (third) paragraph element has the __data__ property with a value of 3, which came from our data set!
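To actually show the bound data on the page rather than a fixed string, you can pass a function to the text operator; D3 calls it once per selected element, with that element's bound datum as the first argument and its index as the second. This is a minimal sketch building on the example above (the label text is just an illustration):

var theData = [ 1, 2, 3 ]

// Each paragraph now renders its own __data__ value.
// "d" is the bound datum, "i" is the element's index in the selection.
var p = d3.select("body").selectAll("p")
  .data(theData)
  .enter()
  .append("p")
  .text(function (d, i) {
    return "Paragraph " + i + " is bound to the value " + d;
  });

Running this on an empty page produces three paragraphs reading "Paragraph 0 is bound to the value 1", and so on, which makes the data binding visible directly in the rendered output.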
Sustainability in terms of the environment implies a natural resource balance. The core principle of ‘sustainability’ is described as ‘meeting the needs of the current generation without compromising the ability of future generations to meet their needs’, indicating a precautionary approach to activities that affect the environment, in order to prevent irreparable damage. However, construction is not an environmentally friendly process by nature, and it has become a central part of the sustainability debate. Resource utilisation, the material manufacturing process, material transportation, and the disposal of waste materials can all potentially cause environmental problems. Greenhouse gas emissions, energy consumption, and the depletion of resources are important factors that influence our built environment. Sustainable construction must meet the goals of reducing energy consumption and greenhouse gas emissions and using more renewable materials. Since the 1990s, the issues of sustainable buildings and sustainable building materials have received much attention from scholars and governments around the world, and several green policies for green buildings and green materials have been proposed to address them. Within these policies, wood building structures are strongly encouraged as green building structures, as wood is regarded as an ecological green material that requires minimal treatment and minimal energy consumption over the life cycle from production to final disposal. There are several benefits of using wood, one of which is carbon storage. Wood is roughly 50 percent carbon, absorbed as carbon dioxide from the air during growth. Therefore, the more wood is used, the more carbon is stored, thus reducing the global warming effect. In addition, the substitution of wood biomass for fossil fuels can decrease greenhouse gas emissions. It has been suggested that ‘exchanging coal for biomass wastes and residues is one of the lowest-cost, nearest-term options for reducing fossil carbon dioxide emissions at existing power plants’. Research into fuel substitution in Sweden found that ‘the highest cross-price elasticities can be found between wood fuel and non-gaseous fossil fuels (oil and coal), reflecting a relatively large substitution possibility’. In consequence, the use of wood has a great advantage for reducing environmental problems. However, wood resources are not sufficient in many land-limited countries or regions such as Japan, South Korea, and Taiwan. The domestic raw materials and products of wood can hardly meet the local demands of the construction industry, which requires a massive volume of wood. Therefore, the building construction industry in these countries or regions seeks to import wood from overseas. For instance, it is reported that approximately eight million to ten million cubic metres of wood are consumed in Taiwan every year, while the domestic supply of wood in Taiwan is only around fifty thousand cubic metres. The degree of wood self-sufficiency in Taiwan is less than one percent, and almost 99 percent of the wood used in the construction industry in Taiwan is imported from foreign regions, such as the US, Canada, Sweden, New Zealand, Australia, Brazil, China, and Malaysia. Hence, the identification of sustainable wood importing sources for these countries or regions plays a critical role in pursuing the sustainability of wood used in building construction.
Significant research has been conducted to identify sustainable building material resources in the construction industry worldwide. For example, Koch analysed national data from the 1970s and reported that the use of wood in the US lowered environmental impacts; in New Zealand, Buchanan and Honey indicated that the energy use and carbon dioxide emissions of wood structures are both lower than those of concrete structures. Borjesson and Gustavsson reported similar findings in Sweden by evaluating the effects of land use and end-of-life changes of materials. Upton et al. further suggested that greenhouse gas emissions associated with wood-based houses were 20–50 percent lower than those associated with comparable houses employing steel-based building systems. Peterson and Solberg indicated that the sustainability of wood used in construction depends on how material waste has been managed and how forest carbon flows have been considered. The architects Guardigli, Monari, and Bragadin developed a design model of mid-size green buildings in wood and concrete and found, using a European LCA database, that the wood design is much more environmentally friendly. Wood is traditionally associated with low-rise building construction, but in recent research on wood construction, Skullestad, Bohne, and Lohne investigated the greenhouse gas emissions of high-rise timber buildings from three to 21 storeys compared with reinforced concrete structures. The results showed that the per-square-metre reduction in carbon dioxide emissions obtained by substituting a timber structure for an RC structure varies with the number of storeys. In Taiwan, the sustainability assessment of wood and wood products has also attracted research effort. Li and Xie investigated building professionals’ attitudes towards the use of wood in building design in Taiwan and found that professionals including architects, engineers, and interior designers held positive views of wood construction; they considered wood-framed buildings to be the future trend of sustainable construction for low-rise buildings. As for the environmental impacts of wood structures, Tu examined 57 reference houses of wooden platform construction in Taiwan and estimated the average amounts of materials in the buildings. He found that the carbon dioxide released by reinforced concrete structures and steel structures is 4.2 times and 3.6 times more than in wood structures, respectively. Previous studies mainly focus on the sustainability assessment of wood structures, or of wood used in construction, by comparing them with other structural forms or materials such as concrete and steel. However, these studies overlooked the significance of the sources of imported wood in their sustainability assessments. The sustainability performance of wood imported from different sources can vary dramatically due to differences in the wood production process, energy structure, and transportation distance among these sources. This being the case, the influence of importing wood from different regions should be taken into consideration when conducting sustainability assessments for wood used in construction in import-dominated cases. As wood importation is a cross-regional materials mobility issue, it involves various energy supply systems, equipment, manufacturing processes, transportation systems, and different levels of operational efficiency.
A specialised assessment process is needed to address this issue, combining a literature review, field investigation, and reasonable assumptions. Taiwan, a typical region where the wood used in the construction industry relies largely on the import market, is chosen as the case study. In this study, wood consumed in construction in Taiwan is manufactured overseas and then transported to Taiwan. The import regions include the US (Pacific Northwest, Southeast, Inland Northwest, and Northeast), Canada (West and East), China (Northeast and Southwest), Malaysia, Sweden, Russia, Brazil, Australia, and New Zealand. Five main lifecycle stages of wood are considered in the assessment:

• Wood harvesting. The first step in the wood industry is wood harvesting. This step is generally called pre-processing and includes felling the wood, skidding it to the landing area, and debarking. Only a small amount of energy is needed, because wood harvesting (logging) requires simple mechanical tools such as electric saws.

• Transporting wood from forest region to sawmill. The harvested wood is transported from the forest region to the sawmill. In order to reduce transportation-related costs, most sawmills are built close to the forest logging area. However, different regions have their own conditions, and assumptions will therefore be made case by case in the following analysis.

• Manufacturing of wood in the sawmill. This stage covers how the wood is manufactured and processed in the sawmill. In terms of energy use, the predominant use of electricity reflects the proportionally larger share of mechanical processes, such as sawmilling, chipping, planing, and peeling, compared with processes that need heat, such as drying, gluing, and pressing, for which fuel oil is the major source of thermal energy.

• Transporting wood from sawmill to marine port. Road transportation is the most common way to move processed wood from the sawmill to a marine port. Transportation from a local sawmill or factory to an international port may cover a long distance, and there is insufficient information for many local regions, so the environmental impact of this stage is difficult to estimate; the necessary assumptions about road transportation will be made for the analysis.

• Transporting wood from marine port to Taiwan. Marine transportation is an important part of Taiwan's economy because of its trade with other countries. Wood is typically transported by cargo ship from foreign regions to Taiwan's international port. The environmental impact of marine transportation is complicated and difficult to analyse because of the difficulty of collecting data in Taiwan. Researchers from other countries have developed a database of international marine environmental impacts, which can be used to calculate the energy consumption and carbon dioxide emissions of long ship journeys when the vessel is loaded with goods.

Harvesting & Manufacturing Of Wood

As analysed before, wood harvesting requires simple mechanical tools such as the electric saw, which consume only a small amount of energy. Electric power and fuel oil are the two major energy sources for the mechanical forest industry. Investigations have shown that electricity accounts for 40–50 percent of the industry's energy needs across the mechanical processes of sawmilling, chipping, planing, and peeling.
Heat is also needed for processes such as drying, gluing, and pressing, for which fuel oil is the major source of thermal energy. The manufacturing process for wood varies considerably from region to region, which makes evaluating the environmental impact of this stage complex, so practical estimation measures are needed. Energy consumption in wood manufacturing is strongly related to regional conditions, so local data for each region is the most essential information required for the evaluation. In this study, data on energy consumption and carbon dioxide emissions during wood harvesting and manufacturing are collected from the literature of countries such as the US, Canada, Sweden, Australia, and New Zealand. Since comparable data are not available for China, Malaysia, Russia, and Brazil, estimated values from a Food and Agriculture Organization (FAO) report are applied in the evaluation.

Inland road transportation includes two parts: transporting wood from the forest region to the sawmill, and transporting wood from the sawmill to the marine port. Inland transportation is an important contributor to energy consumption and carbon dioxide emissions over the life cycle of wood. There is no global average estimate of the environmental impacts of road transportation, and the data for each region differ, so information from the Transport Canada Database is taken as the estimated values in this study. Based on the author's interviews with truck companies, the heavy duty vehicle (HDV, 33,000 lbs) is identified as the main vehicle for transporting wood inland; lighter vehicles are not suitable for carrying wood because of its large size and heavy weight. In order to reduce transportation costs, most sawmills are built close to the forest logging area. Sawmill locations are based on information from the Sawmill Database, which provides details about the major sawmills around the world. The distance from sawmill to marine port differs greatly, so the average distance is taken as the estimated value for road transportation.

The environmental impact of marine transportation cannot be ignored in the assessment. In order to estimate the energy consumption and carbon dioxide emissions of marine transport, the methods of the Network for Transport & Environment (NTM) are adopted in this study. NTM is a non-profit organisation, initiated in 1993, that aims to establish a common base of values for calculating the environmental performance of various modes of transport. Every ship is individual, with different characteristics; the data provided by NTM are not exact for any given ship but comprise values measured and calculated over a great number of ships and engines. The boundary of marine transport is limited to port-to-port routes. The destination of the marine transportation of wood is the port of Kaohsiung, the largest international port in Taiwan. Thousands of vessels sail from foreign countries to Taiwan, and exact vessel data are difficult to obtain; one of the largest shipping companies, Evergreen Marine Corp (EMC), which operates cargo container ships, provides marine information for the analysis. In addition, marine routes and transportation distances can be estimated from Marine Traffic.
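The per-cubic-metre totals reported in the next section are, in effect, the sum of the five lifecycle stages described above. As a rough illustration of that bookkeeping - the stage values below are hypothetical placeholders, not figures from this study - the aggregation can be sketched as follows:

// Illustrative only: per-stage values (MJ and kg CO2 per cubic metre of wood)
// are made-up placeholders for one hypothetical import region.
var stages = {
  harvesting:        { energyMJ: 50,   co2kg: 4 },
  forestToSawmill:   { energyMJ: 150,  co2kg: 11 },
  sawmillProcessing: { energyMJ: 1500, co2kg: 120 },
  sawmillToPort:     { energyMJ: 200,  co2kg: 15 },
  marineToKaohsiung: { energyMJ: 900,  co2kg: 70 }
};

// Total embodied energy and embodied carbon per cubic metre are simply
// the sums over the five stages.
function totals(stageTable) {
  var energy = 0, co2 = 0;
  Object.keys(stageTable).forEach(function (name) {
    energy += stageTable[name].energyMJ;
    co2 += stageTable[name].co2kg;
  });
  return { energyMJ: energy, co2kg: co2 };
}

console.log(totals(stages)); // -> { energyMJ: 2800, co2kg: 220 }

Comparing such totals across import regions, and looking at each stage's share of the total, is the basis of the results discussed below.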
Sustainability Performance Of Importing Wood

Importing wood from Canada, Australia, and New Zealand to Taiwan demands a relatively lower amount of energy than importing from other regions. Specifically, importing wood from Canada (West) demands the lowest amount of energy (2095 MJ/cubic metre), while importing wood from Brazil consumes the highest amount (5356 MJ/cubic metre), because wood from Brazil involves longer road and marine transportation routes than other wood sources. Compared with wood from Canada, wood from the US has relatively higher energy consumption, such as wood from the Southeast region (4824 MJ/cubic metre) and the Pacific Northwest region (4343 MJ/cubic metre), because energy consumption in manufacturing wood in the US is much higher than in Canada. On the other hand, there is no great difference in embodied energy when wood is imported from Sweden, China (Northeast and Southwest), or Malaysia, all at around 3500 MJ/cubic metre. Therefore, Canada, Australia, and New Zealand are the most sustainable importing sources for wood used in Taiwan's construction sector in terms of energy consumption.

Figure: Total embodied energy consumption of imported wood to Taiwan (MJ/cubic metre).

It is interesting to note that the carbon dioxide emissions generated from importing wood from Sweden are significantly lower than those from other regions, although the energy consumed during the importing process is relatively high. This is because more renewable energy is used in electricity production in Sweden, which mitigates carbon emissions in the industrial manufacturing process. Wood from the US (Southeast) contributes the highest amount of carbon dioxide emissions (396 kg/cubic metre), followed by wood from Brazil (337 kg/cubic metre). The carbon emissions of wood imported from the US to Taiwan vary by region, ranging from 260 kg/cubic metre to 396 kg/cubic metre. The carbon dioxide emissions of importing wood are quite similar across regions such as Canada, Australia, and New Zealand, owing to similar energy consumption. Among the Asian regions, wood from China contributes higher carbon dioxide emissions than wood from Malaysia, because fossil fuels account for the major part of electricity production in China, leading to higher carbon dioxide emissions. If the construction sector in Taiwan seeks to import wood with lower carbon dioxide emissions, Sweden, Canada, Australia, and New Zealand would be the most sustainable importing sources based on the above analysis.

Figure: Total embodied carbon dioxide emissions of imported wood to Taiwan (kg/cubic metre).

Relative Distributions Of Sustainability Performances

As for energy consumption, wood harvesting and inland transportation from forest to sawmill contribute little in most regions, except Brazil, where inland transportation from the forest region to the sawmill accounts for 21 percent of total energy use due to the very long trip (around 700 km). In most regions (US, Australia, New Zealand, Sweden, China, Malaysia, and Russia), the wood manufacturing process accounts for the major share of energy consumption (from 57 percent up to 82 percent).

Figure: Relative performance of embodied energy of imported wood to Taiwan.

Figure: Relative performance of embodied carbon dioxide emissions of imported wood to Taiwan.

A long marine transportation leg also contributes significantly to energy consumption.
When wood is imported from the US to Taiwan, the energy used in marine transportation from the four American regions (PSW, SE, INW, and NE/NC) varies from 14 percent to 23 percent of the total. However, if the wood is transported from Canada, the share of energy consumed in marine transportation is much higher than in the US case, because the manufacturing process in Canada consumes relatively less energy while the marine transportation is quite similar. For example, for wood from the east of Canada, marine transportation reaches 45 percent of total energy consumption. Comparatively, the shares of energy consumed in marine transportation are less significant when importing from nearby Asian regions such as Malaysia and China, accounting for less than 10 percent.

In terms of carbon dioxide emissions, the manufacturing process releases the greatest amount of emissions in most of the regions studied, except Sweden, where manufacturing accounts for only 16 percent of emissions; conventional fossil fuels made up only nine percent of Sweden's primary energy distribution in 2014, releasing only a small portion of carbon dioxide emissions. When wood is imported from Sweden, marine transportation accounts for 66 percent of total emissions, due to the very long transportation distance (20,804 km) and the lower emission level during manufacturing. Wood harvesting and road transportation from forest source to sawmill make a smaller contribution to carbon dioxide emissions (less than 11 percent) in most of the regions in this study; this does not hold for Brazil, which requires long-distance road transportation. Marine transportation is another major contributor to carbon dioxide emissions: for example, marine transportation of wood from North America (the US and Canada) contributes 14 percent to 45 percent of total carbon dioxide emissions. By contrast, the carbon dioxide emissions of marine transportation of wood from Asian regions (Malaysia and China) are relatively less significant.

Based on the previous analysis, the most crucial factors in the sustainability performance of importing wood to Taiwan have been identified. Although many efforts were made to collect data in this study, uncertainties in the data could not be avoided due to the changing reality. Therefore, the author would suggest that, among the five factors, only those with a relative share of more than 10 percent be considered essential in the analysis. This means that if wood is imported to Taiwan for use, the collection of these crucial data becomes necessary for analysing the environmental impact. Manufacturing is a critical factor for all regions, while harvesting is not significant for any; marine transport is a crucial factor for most regions except China and Malaysia. Embodied energy consumption and carbon dioxide emissions are two important indicators for assessing sustainability performance. The analysis of the sustainability performance of wood imported from different regions can provide scientific information for building construction professionals in wood-limited countries or regions such as Japan, South Korea, and Taiwan, enabling them to select more sustainable wood resources. The sustainability performance of importing wood from different sources can be influenced by multiple factors.
For energy consumption, importing wood from Brazil consumes the highest amount of energy, because wood from Brazil involves longer road and marine transportation routes than other wood sources. Compared with wood from Canada, however, wood from the US has relatively higher energy consumption, even though the distance between Canada and Taiwan is longer than that between the US and Taiwan; this is because the energy consumed in manufacturing wood in the US is much higher than in Canada. For carbon dioxide emission performance, the energy mix of the wood importing source can have a significant influence on sustainability performance. The results show that the carbon dioxide emissions generated by importing wood from Sweden are significantly lower than those from regions nearer to Taiwan, such as China and Malaysia. This is because more renewable energy is used in electricity production in Sweden, while fossil fuels account for the major part of electricity production in these Asian regions. It is therefore necessary to consider multiple factors systematically when conducting sustainability performance studies. Although the final results of sustainability performance are quite complex, the environmental impacts of wood used in the construction sector could generally be minimised by selecting appropriate import regions with shorter transportation distances, as marine transportation contributes a large part of the environmental impact over the lifecycle of wood. These results could also help determine whether it is possible for these regions to use some local wood resources and so avoid the environmental impacts generated by long-distance transportation. This implication could have multiple benefits. Firstly, the environmental impacts of long journeys can obviously be avoided if local wood is harvested and used in construction. Secondly, this solution can meet the requirements of sustainable resource utilisation in a self-sufficient environment and, in turn, reduce the consumption of wood in other regions, if wood used in construction is available locally. Finally, it could help increase the domestic supply of wood if the possibility of reusing and recycling wood products in the domestic market is also taken into consideration. In consequence, the possibility of using local wood resources in a sustainable manner should be reconsidered by the authorities in regions with insufficient wood resources. Although this study has considered embodied energy consumption and carbon dioxide emissions in the sustainability assessment, more indicators could be developed in future studies to assess sustainable wood management in the construction industry. These include the extent of forest resources, biological diversity, forest health and vitality, the productive functions of forest resources, the protective functions of forest resources, and socio-economic functions. These may help to identify whether wood used in the construction sector can fulfil the goal of sustainable construction.
1st Grade students just finished a project based on the book "Owl Moon" by Jane Yolen and John Schoenherr. Students learned how to draw an owl using basic shapes and used watercolor paints to finish the owls. The night sky behind the owl was made by flicking white paint onto black paper with a large paint brush. Students finished the project by gluing on branches and a moon, then cutting out their owl and gluing it onto their nighttime background.
From small drills to powering a jet’s engine, air compressors are everywhere. You can find massive air compressor units as well as the smallest ones, because these devices come in every shape and size. They are rapidly replacing the belt-and-pulley and other mechanical systems that were once in high demand for powering equipment. Since the end of the 18th century, these devices have been making their way into every sector of the world. Working on a pneumatic system, they are powerful enough to drive even the most massive equipment. Air compressors can be found in homes, workplaces, manufacturing industries, and automobile workshops. They are so widely used because of the benefits they offer. The most significant advantage is that air compressors use air from the atmosphere and do not require gas from a gas station to operate. They can power heavy equipment and also work perfectly for small tools such as drills, nail guns, and HVAC systems. There are two basic types of compressors: positive displacement air compressors and dynamic air compressors. These are further sub-categorized into reciprocating, rotary screw, and centrifugal air compressors. Each offers a unique set of benefits and suits different applications. So, how does an air compressor work?

Working of an Air Compressor

To understand how an air compressor works, you must understand two operations: compression and release. These devices work on a pneumatic system and are very efficient at powering a wide variety of tools and appliances. Simply stated, an air compressor works on the principle that when a quantity of air is compressed, it occupies a smaller volume and becomes highly pressurized. A compressor may use a simple piston-and-cylinder system (a reciprocating air compressor) to power a tool, or it may use a rotary screw or a rotating impeller to compress the air, converting atmospheric air into high-pressure, low-volume air. An air compressor works very efficiently, but it requires a power source to operate. In terms of power source, there are two categories of air compressors: gasoline-powered and electric-powered. Gasoline-powered air compressors are well suited to applications in rural areas where electricity is not readily available.

We can understand the working of an air compressor by considering the simplest type, the reciprocating air compressor, which uses a simple piston-and-cylinder mechanism. The basic components of a reciprocating air compressor include a crankshaft, connecting rod, cylinder, piston, and valve head. There is an inlet at one end of the compressor, and the discharge valve is at the opposite end. The inlet valve’s function is to draw air in from the atmosphere, and the discharge valve provides a way for compressed air to exit the compressor. A piston moves back and forth in the cylinder, creating a vacuum in the compressor. When the piston retracts, air from the atmosphere is sucked into the cylinder through the inlet. When the piston extends, the drawn-in air is compressed and then passes through the discharge valve; at this stage, the inlet valve is shut to prevent the compressed air from escaping through it.

How does an air compressor with oil work?
It is crucial to understand how the lubrication system works, as it helps in maintaining the air compressor. Two types of lubrication pumps are found in air compressors: oil-lubricated pumps and oil-free pumps. In oil-lubricated pumps, the lubrication system works on the concept of an oil bath, splashing oil onto the cylinder and bearings to lubricate the compressor. It is also known as oil-flooded lubrication. In this system, a small amount of oil does get mixed with the compressed air, and these pumps require detailed maintenance. The second type is oil-free lubrication: the bearings in the air compressor are treated with a long-lasting lubricant. These require light maintenance but are much noisier than the oil-lubricated system.

The Safety of Air Compressors

Safety is an essential part of air compressor operation. To understand how an air compressor works, we need to understand its safety functions as well. The most significant elements that make an air compressor safe to use are pressure switches, the overload (relief) valve, and air regulators. The pressure switches in air compressors are an essential part of the system: they allow the motor powering the compressor to turn on or off based on the pressure, so the air pressure will not exceed the cut-out point. As soon as the pressure reaches that specific point, the cut-out point, the pressure switch sends a signal to the motor and it turns off. This saves the air compressor from overloading, and it is what makes compressors safe to use.

Compressor Relief Valves

Air compressors have one drawback, and it can turn into a deadly situation: an air compressor explosion is massive and can turn into a complete disaster. The safety valve, also known as the compressor relief valve, is a backup for the pressure switches. When the pressure switches fail to turn off the motor, the compressor relief valve opens and relieves the compressor of excess air. It releases air until the pressure reaches a safe threshold; once it reaches a safe value, the relief valve closes.

You will find thousands of air compressor models by hundreds of different manufacturers, and you must choose the one that is suitable for your use. Choosing the wrong air compressor can increase the risk of explosions and compressor failures! When investing in an air compressor, make sure the money does not go to waste!
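To make the pressure-switch and relief-valve behaviour described above concrete, here is a minimal sketch of the control logic. The cut-in, cut-out, and relief thresholds are made-up example values for illustration, not any manufacturer's specification, and a real compressor implements this mechanically or electromechanically rather than in software.

// Hypothetical thresholds in psi, for illustration only.
var CUT_IN = 90;    // motor restarts once tank pressure drops to this level
var CUT_OUT = 125;  // pressure switch signals the motor to stop at this level
var RELIEF = 140;   // relief valve opens here if the pressure switch fails

function controlStep(state) {
  // Pressure switch: stop the motor at the cut-out point,
  // restart it when pressure falls back to the cut-in point.
  if (state.pressure >= CUT_OUT) {
    state.motorOn = false;
  } else if (state.pressure <= CUT_IN) {
    state.motorOn = true;
  }

  // Relief valve: a backup that vents air whenever pressure reaches
  // the relief threshold, independently of the pressure switch.
  state.reliefOpen = state.pressure >= RELIEF;

  return state;
}

console.log(controlStep({ pressure: 130, motorOn: true, reliefOpen: false }));
// -> { pressure: 130, motorOn: false, reliefOpen: false }

The key design point is the two independent safeguards: the pressure switch handles normal cycling between cut-in and cut-out, while the relief valve only acts at a higher threshold that should never be reached unless the switch has failed.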
Before Extreme Heat
To prepare for extreme heat, you should:
- Build an emergency kit and make a family communications plan.
- Install window air conditioners snugly; insulate if necessary.
- Check air-conditioning ducts for proper insulation.
- Install temporary window reflectors (for use between windows and drapes), such as aluminum foil-covered cardboard, to reflect heat back outside.
- Weather-strip doors and sills to keep cool air in.
- Cover windows that receive morning or afternoon sun with drapes, shades, awnings, or louvers. (Outdoor awnings or louvers can reduce the heat that enters a home by up to 80 percent.)
- Keep storm windows up all year.
- Listen to local weather forecasts and stay aware of upcoming temperature changes.
- Know those in your neighborhood who are elderly, young, sick, or overweight. They are more likely to become victims of excessive heat and may need help.
- Be aware that people living in urban areas may be at greater risk from the effects of a prolonged heat wave than people living in rural areas.
- Get trained in first aid to learn how to treat heat-related emergencies.
Key Stage 1
During Key Stage 1, Art and Design is about expanding children’s creativity and imagination through providing art, craft and design activities relating to the children’s own identity and experiences, to natural and manufactured objects and materials with which they are familiar, and the locality in which they live.
• Children will explore the visual, tactile and sensory qualities of materials and processes and begin to understand and use colour, shape and space, pattern and texture, to represent their own ideas and feelings.
• Children will focus on the work of artists, craftspeople and designers by asking and answering questions, such as: ‘What is it like?’ ‘What do I think about it?’
Two invertebrates, a roundworm and an arthropod, share the spotlight in this box. Both have the characteristics common to all model organisms: small size, easy cultivation in the lab, and rapid life cycles. Each has provided crucial insights into life’s workings. This roundworm, a soil inhabitant, is arguably the best understood of all animals. This was the first animal to have its genome sequenced (in 1998), revealing about 18,000 genes. A small sampling of the contributions derived from research on C. elegans includes the following:

An adult C. elegans consists of only about 1000 cells. Because the worm is transparent, biologists can watch each organ form, cell by cell, as the animal develops from a zygote into an adult. Eventually, researchers hope to understand every gene’s contribution to the development of this worm.

Programmed cell death, or apoptosis, is the planned “suicide” of cells as a normal part of development. Researchers already know which cells die at each stage. Learning about genes that promote apoptosis may help researchers better understand cancer, a family of diseases in which cell division is unregulated.

The first C. elegans gene to be cloned revealed the amino acid sequence of one part of myosin, a protein required for muscle contraction.

Nematodes provide a good forum for preliminary testing of new pharmaceutical drugs. For example, researchers might identify a C. elegans mutant lacking a functional insulin gene, then test new diabetes drugs for the ability to replace the function of the missing gene.

Worms with mutations in some genes have life spans that are twice as long as normal. Insights into aging in C. elegans may eventually help increase the human life span.

Origin of sex: C. elegans is a hermaphrodite, so the same individuals produce both sperm and eggs. They can also reproduce asexually. Section 9.9 describes how hermaphroditic nematodes have helped biologists understand the evolution of sexual reproduction.
A-level Biology/Central Concepts/Photosynthesis

Photosynthesis is the process by which plants and other photoautotrophs use light energy to produce ATP via photophosphorylation in order to anabolise sugars. It is an energy transfer process, and almost all energy transferred to ATP in all organisms is derived from light energy trapped by autotrophs. The equation for it is listed below:
- 6 CO2(g) + 6 H2O(l) + photons → C6H12O6(aq) + 6 O2(g)
- carbon dioxide + water + light energy → glucose + oxygen
Two reactions are involved in photosynthesis: the light-dependent and the light-independent reactions.
- 1 Light Energy
- 2 Light Dependent
- 3 Light Independent
- 4 Leaf Structure/Function
- 5 Rate of Photosynthesis

Light Energy

Light energy is used to split H2O into H and O in a process called photolysis, and is trapped by photosynthetic pigments. These pigments fall into two categories: chlorophylls and carotenoids. Chlorophylls absorb mainly red and blue-violet light, reflecting green light, whilst carotenoids absorb mainly blue-violet light. These spectra, where the pigments can absorb light energy, are known as absorption spectra (singular: spectrum). An action spectrum is a graph displaying the rates of photosynthesis at different wavelengths of light. The shorter the wavelength, the greater the energy contained. Photosynthesis converts light energy to chemical energy through exciting electrons within the pigments. The photosynthetic pigments also fall into two sub-categories: i) primary pigments and ii) accessory pigments. The primary pigments comprise two types of chlorophyll a (with slightly different absorption peaks), whereas accessory pigments consist of other types of chlorophyll a, chlorophyll b and carotenoids. All the pigments are arranged in photosystems, and several hundred accessory pigments surround a primary pigment, so that the light energy absorbed by accessory pigments can be transmitted to the primary pigment. The primary pigment is known as the reaction centre.

Light Dependent

The light-dependent reactions are the synthesis of ATP from ADP + Pi and the breakdown of H2O using light energy to give protons. Photophosphorylation is the process by which ATP is synthesised from ADP + Pi using light energy, and it can be either cyclic or non-cyclic.

Cyclic photophosphorylation involves only photosystem I. When light is absorbed by photosystem I and passed to chlorophyll a (P700), an electron in this chlorophyll molecule is excited to a higher energy level and then captured by an electron acceptor. It is then passed back to a chlorophyll a molecule through a cycle of electron carriers (the electron transport chain, or ETC), releasing energy along the way to synthesise ATP from ADP + Pi (phosphorylation) by a mechanism known as chemiosmosis. This ATP later enters the light-independent stage.

Non-cyclic photophosphorylation utilises both photosystems in a "Z-scheme". Light is absorbed by both photosystems I and II, and excited electrons are passed from both primary pigments to electron acceptors and along electron transport chains before exiting the photosystems positively charged. Photosystem I receives electrons from photosystem II, which instead replenishes its electrons from the photolysis of water. In this chain, as in the cyclic pathway, ATP is synthesised using the energy released as electrons pass along the electron transport chain.

Photolysis of water

Photosystem II has a water-splitting enzyme which catalyses the breakdown of H2O, producing O2 as a waste product.
The H+ combine with e- from photosystem I and the carrier molecule NADP to give reduced NADP. This then passes to the light-independent reactions and is used in the synthesis of carbohydrates.

Light Independent

In the light-independent stage, RuBP (5-C) combines with one CO2 molecule and then splits into 2 glycerate-3-phosphate (GP) molecules (3-C), which are finally reduced to 2 triose phosphate molecules (3-C). One triose phosphate (3-C) feeds back into the cycle to regenerate RuBP (5-C), and one is polymerised into starch. The products of this cycle are used to form glucose, amino acids or lipids.

Leaf Structure/Function

The leaf is the main site of photosynthesis in most plants - it has a broad, thin lamina and an extensive network of veins. The functions of a leaf are best achieved by containing photosynthetic pigments, absorbing carbon dioxide (and disposing of oxygen) and having a water and solute supply/transport route. The leaf itself has a large surface area and an arrangement such that it can absorb as much light as possible. The green colour is from chlorophyll, which absorbs light energy to drive the synthesis of organic molecules in the chloroplast. Chlorophyll’s pigments absorb visible light. The cuticle, on the upper epidermis, provides a watertight layer for the top of the plant, and together with the epidermis (thin, flat, transparent cells) allows light through to the mesophyll below and protects the leaf. The palisade cells are the main site of photosynthesis, as they have many more chloroplasts than the spongy mesophyll cells, and they also have several adaptations to maximise photosynthetic efficiency:
- Large vacuole - restricts the chloroplasts to a layer near the outside of the cell where they can be reached by light more easily.
- Cylindrical arrangement - they are arranged at right angles to the upper epidermis, reducing the number of light-absorbing cross walls preventing light from reaching the chloroplasts. This also allows long, narrow air spaces between them, providing a large surface area for gaseous exchange.
- Movement of chloroplasts - proteins can move the chloroplasts within cells to absorb maximum light.
- Thin cell walls - to allow gases to diffuse through them easily.
The spongy mesophyll cells are the main site for gaseous exchange; they contain fewer chloroplasts and will only photosynthesise at high light intensities. The irregular packing of the cells provides a large surface area for gaseous exchange. CO2 enters and O2 exits the leaf through stomata. The lower epidermis is basically the same as the upper, except that it contains many stomata, which are pores in the epidermis through which gaseous exchange occurs. Each stoma is bounded by two guard cells, and changes in the turgidity of these guard cells cause them to change shape so that they open and close the pore. If the guard cells gain water, the pore is open, and vice versa. Osmosis controls how much water is in the guard cells: for more water to enter, the water potential of the guard cells must be lowered via the active removal of hydrogen ions, an active transport process.

The actual photosynthetic organelle is the chloroplast - an image of a chloroplast is on the right. As you can see, 1, 2 and 3 are the envelope of two phospholipid membranes. The stroma (4) runs through the chloroplast and provides space for the thylakoids, a series of flattened fluid-filled sacs (5, 6), which form stacks called grana (7).
This membrane system of the grana provides a large surface area for reactions, and, as described above, the pigment molecules are arranged in light-harvesting clusters of primary and accessory pigments. Chloroplasts are found in the cells of the mesophyll, the interior tissue of the leaf. The chlorophyll is located in the thylakoid membranes, and the thylakoids are stacked into grana. The stroma is the site of the light-independent reactions and contains the Calvin cycle enzymes, sugars and organic acids. Chloroplasts also contain ribosomes, DNA and lipid droplets.

Rate of Photosynthesis

The main factors that affect the rate of photosynthesis are light intensity, temperature and carbon dioxide concentration. At constant temperature, the rate of photosynthesis varies with light intensity, increasing at first but levelling off at higher light intensities. At constant light intensity and varying temperature, the pattern depends on the light level: at high light intensities the rate of photosynthesis increases with temperature (over a limited range), but at low light intensities temperature makes little difference.

Dehydration is one of the most common problems for plants, and dealing with it sometimes requires trade-offs with other metabolic processes, such as photosynthesis. On hot, dry days plants close their stomata to conserve water, but this limits photosynthesis: closing the stomata reduces access to CO2 and causes O2 to build up. Plants have developed several mechanisms to deal with this problem.

- In most plants (C3 plants), CO2 is initially fixed by rubisco, forming a three-carbon compound. When O2 builds up, rubisco adds O2 instead of CO2 in the Calvin cycle, a process called photorespiration. Photorespiration consumes O2 and releases CO2 without producing ATP or carbohydrate.
- C4 plants minimize the cost of photorespiration by incorporating CO2 into four-carbon compounds in mesophyll cells. The enzyme PEP carboxylase is required for this process. PEP carboxylase has a higher affinity for CO2 than rubisco, so it can fix CO2 even when CO2 concentrations are low.
- CAM plants fix carbon using crassulacean acid metabolism (CAM). They open their stomata at night, incorporating CO2 into organic acids; during the day they close their stomata to reduce the chance of dehydration, and CO2 is released from the organic acids into the Calvin cycle.

Reference: Neil A. Campbell and Jane B. Reece, "Biology", 8th edition.
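The limiting-factor behaviour described above can be summarised in a tiny numerical model. The sketch below is purely illustrative (the functional forms and parameter values are assumptions, not drawn from the text); it simply reproduces the qualitative pattern of light saturation and a temperature effect that only matters when light is plentiful.

```python
# Illustrative (not from the source): a toy model of photosynthetic rate as a
# function of light intensity and temperature, reproducing the qualitative
# behaviour described above -- light saturation at high intensities, and a
# temperature effect that only matters when light is not limiting.

def photosynthesis_rate(light, temp_c, k_light=200.0, t_opt=25.0, t_width=10.0):
    """Relative rate (0..1). All parameter values are arbitrary illustrations."""
    light_term = light / (light + k_light)                         # saturating light response
    temp_term = max(0.0, 1 - ((temp_c - t_opt) / t_width) ** 2)    # optimum temperature curve
    return light_term * temp_term

for light in (50, 200, 800):                                       # low, medium, high intensity
    rates = [photosynthesis_rate(light, t) for t in (10, 20, 25, 30)]
    print(f"light={light:4d}  rates at 10/20/25/30 C:",
          " ".join(f"{r:.2f}" for r in rates))
```

Running it shows that raising the temperature barely changes the rate at low light, but noticeably increases it near light saturation, which is the pattern the text describes.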
Finding and Selecting Articles Now it's time to put your searching knowledge to work. Imagine that you are preparing for a debate on your topic. You need to find information that will support your arguments as well as inform you about the topic as a whole. So... use your mad search skills to find the best articles possible! In each of the following databases, find one full-text article on your topic: Go to NoodleBib and begin a bibliography you will share with a class called “Databases.” Write a citation for each article in APA format. Under “Online retrieval information,” enter a DOI if one is provided. If not, use the database entry URL (example: http://web.ebscohost.com). Skim the articles and, for each citation, write 3-4 sentence annotations that: - summarize the content of the article - describe how the article will be useful to you in preparing for your debate - Describe one new thing you learned about this TOPIC just from skimming the articles and writing the annotations. - Which database do you prefer using? ______ MAS Ultra School Edition (from Ebsco) ______ Academic OneFile (from Infotrac/Gale) ______ Lexis-Nexis Academic Universe Why do you prefer it? Be specific! Don't just say "it pulled up the best results" - talk about how the search process worked for you or the nature of the sources, etc. 5. Use the A-Z list of UIUC online journals and newspapers link to find the database that contains the full text of the Los Angeles Times for the year 1970. The name of the company that produces the database is: Select an article about Earth Day from the April 22, 1970 issue and create a citation on your NoodleBib bibliography. No annotation necessary for this one!
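If you find yourself writing many citations by hand, a small helper can keep the pieces in a consistent order. This is only a rough illustration of the general APA journal-article pattern used in this exercise (NoodleBib handles the real formatting rules), and every field in the example is made up.

```python
# A rough illustrative helper (not NoodleBib) that assembles an APA-style
# journal citation of the general form used in this exercise. Real APA rules
# cover many more cases (issue numbers, multiple authors, retrieval dates).

def apa_citation(author, year, title, journal, volume, pages, doi=None, url=None):
    source = f"doi:{doi}" if doi else f"Retrieved from {url}"
    return f"{author} ({year}). {title}. {journal}, {volume}, {pages}. {source}"

print(apa_citation(
    author="Smith, J.", year=2009, title="Earth Day at forty",
    journal="Journal of Environmental History", volume=14, pages="12-29",
    url="http://web.ebscohost.com"))
```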
Natural pearls are nearly 100% calcium carbonate and conchiolin. It is thought that natural pearls form under a set of accidental conditions when a microscopic intruder or parasite enters a bivalve mollusk and settles inside the shell. The mollusk, irritated by the intruder, forms a pearl sac of external mantle tissue cells and secretes the calcium carbonate and conchiolin to cover the irritant. This secretion process is repeated many times, thus producing a pearl. Natural pearls come in many shapes, with perfectly round ones being comparatively rare.

Typically, the build-up of a natural pearl consists of a brown central zone formed by columnar calcium carbonate (usually calcite, sometimes columnar aragonite) and a yellowish to white outer zone consisting of nacre (tabular aragonite). In a cross-section of such a pearl, these two different materials can be seen. The presence of columnar calcium carbonate rich in organic material indicates juvenile mantle tissue that formed during the early stage of pearl development.

Displaced living cells with a well-defined task may continue to perform their function in their new location, often resulting in a cyst. Such displacement may occur via an injury. The fragile rim of the shell is exposed and is prone to damage and injury. Crabs, other predators and parasites such as worm larvae may produce traumatic attacks and cause injuries in which some external mantle tissue cells are disconnected from their layer. Embedded in the conjunctive tissue of the mantle, these cells may survive and form a small pocket in which they continue to secrete calcium carbonate, their natural product. The pocket is called a pearl sac, and grows with time by cell division. The juvenile mantle tissue cells, according to their stage of growth, secrete columnar calcium carbonate from the pearl sac's inner surface. In time, the pearl sac's external mantle cells proceed to the formation of tabular aragonite. When the transition to nacre secretion occurs, the brown pebble becomes covered with a nacreous coating. During this process, the pearl sac seems to travel into the shell; however, the sac actually stays in its original position relative to the mantle tissue while the shell itself grows. After a couple of years, a pearl forms and the shell may be found by a lucky pearl fisher.
High heat flow under Deccan Traps causes uplift of Western India

doi:10.1038/nindia.2009.87 Published online 23 March 2009

The Earth's crust under western India is unusually hot and is in 'a state of continuous uplift', according to geologists of the National Geophysical Research Institute (NGRI) in Hyderabad. They believe this discovery may help explain the occurrence of the devastating earthquakes witnessed in peninsular India in recent years.

Major earthquakes in the Indian subcontinent are the result of stresses built up due to the collision of the Indian plate with the Asian plate that gave rise to the Himalayas. These earthquakes had taken place along the plate boundary. The peninsular shield is supposed to be relatively seismically quiet. However, it has witnessed several damaging earthquakes in Koyna (1967), Latur-Killari (1993), Jabalpur (1997) and Bhuj (2001). Despite a large number of investigations, the cause of their occurrence is unclear. Some scientists believe that the Koyna earthquakes are induced by a nearby reservoir, while others believe them to be of tectonic origin.

Now a detailed study including the characterization of the rocks sandwiched between the 'Deccan Traps' and the earth's crust in the Latur and Koyna regions has shed new light on the origin of 'intraplate' earthquakes, the NGRI scientists report [1]. The Deccan Traps refer to the pile of volcanic rocks, 1 to 2 kilometres thick, that covered an area of nearly half a million square kilometres of central and western India following one of the largest eruptions on Earth, 65 million years ago.

What underlies this volcanic rock cover had remained an enigma, and the NGRI team set out to crack it in the hope of finding clues to the earthquakes of peninsular India. "It has been assumed all these years that these volcanics are resting over granitic-gneissic crust but our study has overthrown this notion," Gopalakrishnarao Parthasarathy of NGRI, one of the authors of the paper, told Nature India. Granitic gneiss (pronounced 'nise') is derived from granite, an igneous rock formed from water-rich magma. But the NGRI findings indicate that the basement rock is not granitic but 'granulite' (a deeper rock) with very low water content and rich in carbon dioxide (CO2), containing as much as two grams of CO2 for every 100 grams of the mineral grains. Granulites are formed at temperatures around 700°C and in the pressure range 6-12 kilobars, corresponding to a depth of 30 km in the earth's interior.

The NGRI study was an attempt to examine and compare the prevailing geologic and tectonic structure of the Latur and Koyna regions by analyzing active and passive seismic, gravity, heat flow and magneto-telluric (MT) data. (MT is an electromagnetic method used to map the spatial variation of the Earth's resistivity by measuring naturally occurring electric and magnetic fields at the Earth's surface.) This multi-disciplinary study revealed, for the first time, the presence of high density 'granulite' rocks just below the Deccan volcanic cover, the researchers claim. "This implies that almost the entire thickness of the granitic-gneissic upper crust in this part of India has been eroded even before the eruption of Deccan volcanism took place," the authors say. "This inference strengthens our belief that this region may have been in a state of continuous uplift and erosion during the pre-Deccan Trap eruptive period, and the uplift probably continues even now," they say.
Such an evolutionary process alone can bring the earth's crust below the Deccan Traps to significantly shallower levels, the scientists report. This has been confirmed by borehole investigations, in which a sediment thickness of only 6–8 metres was found between the crystalline basement rock (the earth's crust) and the overlying Deccan volcanics.

What can cause such massive uplifts of the earth's crust? The researchers say their calculations indicate a high input of heat flow (29–36 milliwatts per square metre) from the mantle below both Latur and Koyna. "Such high input of heat from the mantle is unheard of," says Om Prakash Pandey, a heat flow expert and first author. "High input of subcrustal heat signifies that the overlying crustal column is subjected to a much higher temperature than comparable terrains elsewhere."

According to the researchers, stress caused by the ongoing uplift and the high input of heat flow from the mantle is continuously accumulating in this part of the Earth's crust beneath the Deccan Traps, within which earthquakes tend to nucleate. Such localised stresses act over and above those generated by the collision of the Indian subcontinent with Eurasia, and thus "have added an entirely new dimension to our understanding of the occurrence of intraplate earthquakes within a stable continental region."

1. Pandey, O. P. et al. Upwarped high velocity mafic crust, subsurface tectonics and causes of intraplate Latur-Killari (M 6.2) and Koyna (M 6.3) earthquakes, India – a comparative study. J. Asian Earth Sci. 34, 781-795 (2009)
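For a sense of scale, the subcrustal heat-flow figures quoted above can be turned into a rough temperature estimate with Fourier's law. This is only an illustrative back-of-the-envelope sketch, not a calculation from the paper: it assumes steady one-dimensional conduction, an assumed crustal conductivity of about 2.5 W/(m·K), and it ignores heat generated within the crust itself.

```python
# A rough back-of-the-envelope sketch (not from the paper): temperature added
# across the crust by the mantle ("subcrustal") heat flow quoted above,
# assuming steady one-dimensional conduction, a uniform assumed thermal
# conductivity, and no heat production within the crust.

def conductive_temp_increase(heat_flow_mw_m2, depth_km, conductivity_w_mk=2.5):
    """Temperature rise (deg C) across `depth_km` of crust for a given basal heat flow."""
    q = heat_flow_mw_m2 * 1e-3            # mW/m^2 -> W/m^2
    depth_m = depth_km * 1e3
    return q * depth_m / conductivity_w_mk   # Fourier's law: dT = q * z / k

for q in (29, 36):                        # mantle heat-flow range reported for Latur/Koyna
    print(f"q = {q} mW/m^2 -> ~{conductive_temp_increase(q, 30):.0f} C added over 30 km")
```

Even under these simplifying assumptions, the mantle contribution alone adds several hundred degrees across a 30 km crustal column, which is why the authors describe the crust here as unusually hot.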
According to the Wikipedia entry: A grain is a unit of measurement of mass, and in the troy weight, avoirdupois, and apothecaries' systems, equal to exactly 64.79891 milligrams. It is nominally based upon the mass of a single virtual ideal seed of a cereal. From the Bronze Age into the Renaissance the average masses of wheat and barley grains were part of the legal definitions of units of mass. It is unlikely, however, that units were ever actually fixed by weighing out individual seeds; rather, expressions such as "thirty-two grains of wheat, taken from the middle of the ear" appear to have been ritualistic formulas, essentially the premodern equivalent of legal boilerplate.

Another source states that it was defined as the weight needed for 252.458 units to balance a cubic inch of distilled water at 30 inches of mercury pressure and 62 degrees Fahrenheit for both the air and water. Another book states that Captain Henry Kater, of the British Standards Commission, arrived at this value experimentally.

The grain was the legal foundation of traditional English weight systems, and is the only unit that is equal throughout the troy, avoirdupois, and apothecaries' systems of mass. The unit was based on the weight of a single grain of barley, considered equivalent to 1 1⁄3 grains of wheat. The fundamental unit of the pre-1527 English weight system known as Tower weights was a different sort of grain known as the "wheat grain". The Tower wheat grain was defined as exactly 45⁄64 of a troy grain.

My powder measure is calibrated in grains based on the mass of that volume of 2fg GOEX Black Powder.
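The figures quoted above are easy to cross-check from the exact modern definition of the grain. The short sketch below does the arithmetic; its only inputs are the numbers already given in the text plus the exact metric definitions of the grain and the inch.

```python
# A quick illustrative check of the figures quoted above, using the exact
# modern definitions of the grain and the inch.

GRAIN_MG = 64.79891                 # milligrams per grain, exact by definition
CUBIC_INCH_CM3 = 16.387064          # cubic centimetres per cubic inch, exact

grains_per_cubic_inch = 252.458     # Kater's experimental figure quoted above
water_mass_g = grains_per_cubic_inch * GRAIN_MG / 1000
print(f"252.458 grains = {water_mass_g:.3f} g")                                # ~16.36 g
print(f"implied water density = {water_mass_g / CUBIC_INCH_CM3:.4f} g/cm^3")   # ~0.998

tower_wheat_grain_mg = GRAIN_MG * 45 / 64                                      # Tower 'wheat grain'
print(f"Tower wheat grain = {tower_wheat_grain_mg:.3f} mg")
```

The implied density of roughly 0.998 g/cm^3 is consistent with distilled water weighed in air near 62 degrees Fahrenheit, so the 252.458-grain figure and the modern 64.79891 mg definition agree closely.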
Did you know that a rich brown pigment used in 16th and 17th century Europe was actually made from mummies? Art historians join in for a strange, fun lesson! The Mayan peoples sure loved the color blue! In this fascinating art history lesson, discover the origin of Maya Blue and learn why it resists the elements. Are you curious about the deadly history behind one of the oldest pigments in the world? This art history lesson discusses the uses of Flake White paint! The history of one of the most expensive pigments that’s been prized since antiquity—called lapis lazuli—is covered in this art history lesson! The most expensive purple dye was reserved for royals, and was worth its weight in silver! As a result, Tyrian Purple is famous among art historians.
14.1 Wireless Concepts

- GSM: Global System for Mobile Communications, the universal standard used for mobile data transmission in wireless networks worldwide.
- Bandwidth: Describes the amount of information that may be broadcast over a connection.
- BSSID: The MAC address of an access point that has set up a Basic Service Set (BSS).
- ISM band: A set of frequency bands reserved for the international Industrial, Scientific, and Medical communities.
- Access Point: Used to connect wireless devices to a wireless network.
- Hotspot: A place where a wireless network is available for public use.
- Association: The process of connecting a wireless device to an access point.
- Orthogonal Frequency-division Multiplexing (OFDM): Method of encoding digital data on multiple carrier frequencies.
- Direct-sequence Spread Spectrum (DSSS): The original data signal is multiplied by a pseudo-random noise spreading code.
- Frequency-hopping Spread Spectrum (FHSS): Method of transmitting radio signals by rapidly switching a carrier among many frequency channels.

- Wi-Fi refers to wireless local area networks (WLANs) based on the IEEE 802.11 standard.
- It is a widely used technology for wireless communication across a radio channel.
- Devices such as a personal computer, video-game console, smartphone, etc. use Wi-Fi to connect to a network resource such as the Internet via a wireless network access point.

Advantages

- Installation is fast and easy and eliminates wiring through walls and ceilings.
- It is easier to provide connectivity in areas where it is difficult to lay cable.
- Access to the network can be from anywhere within range of an access point.
- Public places like airports, libraries, schools or even coffee shops offer you constant Internet connections using wireless LAN.

Disadvantages

- Security is a big issue and may not meet expectations.
- As the number of computers on the network increases, the bandwidth suffers.
- Wi-Fi enhancements can require new wireless cards and/or access points.
- Some electronic equipment can interfere with the Wi-Fi networks.

Wi-Fi Networks at Home and Public Places

- Wi-Fi at Home: Wi-Fi networks at home allow you to be wherever you want with your laptop, iPad, or handheld device, and not have to make holes for or hide Ethernet cables.
- Wi-Fi at Public Places: You can find free/paid Wi-Fi access available in coffee shops, shopping malls, bookstores, offices, airport terminals, schools, hotels, and other public places.

Wireless Technology Statistics (Why Wireless Technology Matters)

- More than half of all open Wi-Fi networks are susceptible to abuse.
- There will be more than 7 billion new Wi-Fi enabled devices in the next 3 years.
- 71% of all mobile communications flows over Wi-Fi.
- By 2017, 60% of carrier network traffic will be offloaded to Wi-Fi.
- A Wi-Fi attack on an open network can take less than 2 seconds.
- 90% of all smartphones are equipped with Wi-Fi capabilities.

Types of Wireless Networks

|Amendment|Freq. (GHz)|Modulation|Speed (Mbps)|Range|
|802.11i|Defines WPA2-Enterprise/WPA2-Personal security for Wi-Fi| | | |
|802.16 (WiMAX)|10-66| |70-1000|30 miles|

Service Set Identifier (SSID)

- SSID is a token to identify an 802.11 (Wi-Fi) network; by default it is part of the frame header sent over a wireless local area network (WLAN).
- It acts as a single shared identifier between the access points and clients.
- Access points continuously broadcast the SSID, if enabled, so that client machines can identify the presence of the wireless network.
- SSID is a human-readable text string with a maximum length of 32 bytes.
- If the SSID of the network is changed, the SSID must be reconfigured on every host, as every user of the network configures the SSID into their system.
- A non-secure access mode allows clients to connect to the access point using the configured SSID, a blank SSID, or an SSID configured as "any".
- Security concerns arise when the default values are not changed, as these units can be compromised.
- The SSID remains secret only on closed networks with no activity, which is inconvenient for legitimate users.

Wi-Fi Authentication Modes

Wi-Fi Authentication Process Using a Centralized Authentication Server

- WarWalking: Attackers walk around with Wi-Fi enabled laptops to detect open wireless networks.
- WarChalking: A method used to draw symbols in public places to advertise open Wi-Fi networks.
- WarFlying: In this technique, attackers use drones to detect open wireless networks.
- WarDriving: Attackers drive around with Wi-Fi enabled laptops to detect open wireless networks.

Wi-Fi Chalking Symbols

Types of Wireless Antennas

- Directional Antenna: Used to broadcast and obtain radio waves from a single direction.
- Omnidirectional Antenna: Provides a 360 degree horizontal radiation pattern. It is used in wireless base stations.
- Parabolic Grid Antenna: Based on the principle of a satellite dish, but without a solid backing. It can pick up Wi-Fi signals from ten miles away or more.
- Yagi Antenna: A unidirectional antenna commonly used in communications for a frequency band of 10 MHz to VHF and UHF.
- Dipole Antenna: Bidirectional antenna, used to support client connections rather than site-to-site applications.

Parabolic Grid Antenna

- Parabolic grid antennas enable attackers to get better signal quality, resulting in more data to eavesdrop on, more bandwidth to abuse and higher power output that is essential in Layer 1 DoS and man-in-the-middle attacks.
- Grid parabolic antennas can pick up Wi-Fi signals from a distance of ten miles.
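A rough link-budget calculation shows why a high-gain parabolic grid antenna can pick up Wi-Fi signals from miles away. The sketch below uses the standard free-space path-loss formula; the transmit power and antenna gains are assumed, typical values rather than figures from the text.

```python
import math

# Illustrative link-budget sketch (assumed figures, not from the text) showing
# why a high-gain parabolic grid antenna can receive Wi-Fi from miles away.
# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44

def fspl_db(distance_km, freq_mhz=2437.0):           # 2.4 GHz band, channel 6
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_dbm(distance_km, tx_dbm=20, tx_gain_dbi=2, rx_gain_dbi=24):
    # 20 dBm AP, 2 dBi AP antenna, 24 dBi grid antenna -- all assumed values
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km)

for miles in (1, 5, 10):
    km = miles * 1.609344
    print(f"{miles:2d} mi: FSPL ~{fspl_db(km):5.1f} dB, received ~{received_dbm(km):6.1f} dBm")
```

With these assumed gains, the received level at ten miles is still around -78 dBm, above the sensitivity of typical Wi-Fi radios at low data rates, which is consistent with the ten-mile claim above.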
Graphene often gets discussed in gee-whiz applications like post-CMOS electronics, or solar cells that can provide extremely high light-to-electricity conversion ratios. However, it is perhaps in the more mundane aspects of our world that graphene is providing an important impact. The perfect example of this is graphene in concrete—a material that has been with us since the ancient Romans.

In a feature at the beginning of the year, The Graphene Council reported on how the addition of graphene oxide is providing concrete with greater compressive and tensile strength. Now researchers at the University of Exeter in the UK have developed a technique for adding graphene to concrete that provides such a wide gamut of new and improved properties that some are predicting it could revolutionize the construction industry.

In research described in the journal Advanced Functional Materials, the University of Exeter researchers demonstrated that the addition of graphene to concrete could improve the material's compressive strength by 149 percent. This compressive strength increase was accompanied by a 79 percent increase in flexural strength, a 400 percent decrease in water permeability, and improved electrical and thermal performance.

The key to this development is that it is completely compatible with today's large-scale production of concrete. It simply involves suspending atomically thin graphene in water. The resulting process keeps costs low and results in very few defects in the end product.

"This ground-breaking research is important as it can be applied to large-scale manufacturing and construction," said Dimitar Dimov, a PhD student at the University of Exeter and the lead author of the research. "The industry has to be modernized by incorporating not only off-site manufacturing, but innovative new materials as well."

What may grab the headlines beyond its improved properties is that the graphene-enabled concrete appeals to so-called green manufacturing. "By including graphene we can reduce the amount of materials required to make concrete by around 50 per cent — leading to a significant reduction of 446 kilograms per ton of the carbon emissions," said Monica Craciun, professor at Exeter and co-author of the research. "This unprecedented range of functionalities and properties uncovered are an important step in encouraging a more sustainable, environmentally-friendly construction industry worldwide."
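As a purely arithmetical illustration of the quoted figures, the sketch below applies the reported percentage improvements to an assumed baseline mix; the 30 MPa and 4 MPa baseline strengths are typical textbook values for ordinary concrete, not numbers from the Exeter study.

```python
# Illustrative arithmetic only: apply the percentage improvements quoted above
# to an assumed baseline mix. The baseline strengths are typical textbook
# values for ordinary concrete, not figures from the study.

baseline = {"compressive_mpa": 30.0, "flexural_mpa": 4.0}   # assumed baseline

improved = {
    "compressive_mpa": round(baseline["compressive_mpa"] * (1 + 1.49), 1),  # +149 %
    "flexural_mpa":    round(baseline["flexural_mpa"] * (1 + 0.79), 1),     # +79 %
}

co2_saving_kg_per_tonne = 446      # reduction quoted in the article
print("with graphene:", improved)
print("CO2 saved per tonne of concrete:", co2_saving_kg_per_tonne, "kg")
```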
Figuring out function from bacteria's bewildering forms The constellation of shapes and sizes among bacteria is as remarkable as it is mysterious. Why should Spirochaeta halophila resemble a bedspring coil, Stella a star and Clostridium cocleatum a partly eaten donut? No one really knows. A new report in the Proceedings of the National Academy of Sciences by Indiana University Bloomington scientists answers that form-and-function question for one bacterium, the aquatic Caulobacter crescentus, whose cells are anchored to solid objects by conspicuous and distinctive stalks. "We've found the bacteria can take up nutrients with their stalks," said microbiologist Yves Brun, who led the study. "This is the first example that we know of in which a major feature of a bacterium's shape can be tied to a specific function." Despite their tiny size and readiness for laboratory study, far less is known about the physiological utility of bacterial shapes than, say, the streamlined forms of fish, sharks and dolphins, or the elongated spires of pine and redwood trees. Brun said C. crescentus' stalk acts as a sort of antenna that amplifies the uptake of organic phosphate from the surrounding environment. The narrow stalk adds little volume to the cell, and incoming nutrients diffuse toward the cell's main body, where nutrients are quickly assimilated by metabolic processes. Phosphate is an important molecule to all organisms. It is involved in DNA repair and duplication, the expression of DNA, the regulation of protein action, membrane synthesis and the transfer of energy within cells. The scientists used fluorescence microscopy to see where organic phosphate enters C. crescentus cells. As a gram negative bacterium, C. crescentus has two membranes -- an outer membrane and an inner membrane, with a space called the "periplasm" in between. Experiments demonstrated initial entry of organic phosphate across the entire cell surface, including the stalk. Once across the outer membrane, the organic phosphate is converted to inorganic phosphate and diffuses from the stalk toward the periplasm. When the phosphate reaches the cell body periplasm, the phosphate is taken across the inner membrane and into the central part (cytoplasm) of the cell. "The stalk essentially increases the cell's reach into the environment but without the cost of increasing the cell's volume and surface area, which would be expensive from an energetic standpoint," Brun said. Using mathematical models, the scientists showed that absorption of a nutrient using an antenna was a far more efficient morphology for nutrient uptake than alternate cell shapes in which the stalk plays no special role. The models assume the bacteria encounter nutrients via diffusion from their surrounding medium. "Our report makes the point that in calm aquatic environments where there is no mixing of the liquid and therefore the motion of nutrient molecules is dominated by diffusion, it is the cell's length that is the most important parameter for nutrient uptake," Brun said. "Imagine the nutrient molecule as a tiny tennis ball undergoing diffusion, that is bouncing back and forth off other molecules in random directions. It is easy to imagine that the tennis ball will be just as likely to make contact with a baseball bat as it will a tennis racket. And the longer the baseball bat, the larger the number of diffusing tennis balls that will make contact. That's why the stalk seems to be so advantageous for the cell. 
This is in contrast to cases where there is mixing of the liquid and where total surface area -- not length -- becomes more important. The stalk shape is advantageous in both situations because it increases surface area with minimal increase in volume, and at the same time it can be 15 or more times longer than the cell body." The implications of the group's discovery are two-fold, Brun said. If stalks improve the efficiency of the uptake of other nutrients, the structures and appropriate transport proteins could be added to bacteria commonly used in drug production and toxic spill clean-ups. Bacteria are often used as workhorses in the mass-conversion of one molecule to another. Improving the speed of uptake of a substrate molecule by the bacteria could hypothetically speed drug production. "If we could figure out how to get the bacteria used in bioremediation to make stalks, we could improve their ability to take up pollutants and up their efficiency," he said. But Brun also says the discovery has ecological significance. "Bacteria with stalks and other prostheses are ubiquitous in all the earth's aquatic environments," he said. "Phosphorus is a limiting nutrient in determining the productivity of lakes and oceans. The stalked bacteria are central players in scavenging phosphorus in oceans and lakes, and reintroducing it into the food chain." C. crescentus is an unusual bacterium whose lifespan encompasses two phases: a mobile "swarmer" phase, in which the cells have a single flagellum, and a sedentary "stalked" phase in which the cells shed their flagella, affix themselves to rocks or pebbles (or the sides of water pipes) with the help of a very sticky adhesive, and then grow a stalk. In April, Brun and colleagues from Brown University reported in the Proceedings of the National Academy of Sciences that the polysaccharide adhesive C. crescentus uses to affix itself to solid objects appears to be the strongest glue produced in the natural world. IU Bloomington biologists Jennifer Wagner, IUB physicist Sima Setayeshgar and IUB chemists James Reilly and Laura Sharon also contributed to the current PNAS report. Source: Indiana University
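The "tennis ball" argument above can be made semi-quantitative with a standard diffusion result. The sketch below is not the authors' model; it uses the textbook analogy between diffusion-limited capture and electrostatic capacitance, with made-up but biologically plausible cell dimensions, to show why a long thin stalk is such a cheap way to boost uptake.

```python
import math

# Illustrative sketch (not the authors' model): in a still medium, the
# diffusion-limited capture rate of an absorbing body scales with its
# "electrostatic capacitance" C (flux = 4*pi*D*C*c). For a sphere C equals
# the radius; for a long thin rod of length L and radius r, C ~ L / (2*ln(L/r)).
# Dimensions below are assumed, plausible values in micrometres.

def rod_capacitance(length, radius):
    return length / (2 * math.log(length / radius))

def sphere_volume(radius):
    return 4 / 3 * math.pi * radius ** 3

def rod_volume(length, radius):
    return math.pi * radius ** 2 * length

body_c, body_v = 1.0, sphere_volume(1.0)                            # 1-um spherical cell body
stalk_c, stalk_v = rod_capacitance(15, 0.1), rod_volume(15, 0.1)    # 15-um thin stalk

print(f"capture rate, stalk vs body:            {stalk_c / body_c:.1f}x")
print(f"capture per unit volume, stalk vs body: {(stalk_c / stalk_v) / (body_c / body_v):.0f}x")
```

The stalk in this toy example captures somewhat more nutrient than the cell body outright, but per unit of cell volume it is roughly an order of magnitude more efficient, which is the trade-off the researchers describe.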
The common stereotype of Plains Indians sees them as horse-mounted buffalo hunters. The reality is, of course, that Plains Indians did not adopt the horse and its equestrian lifestyle until the eighteenth century. There were, however, bison hunting Indian peoples long before the arrival of the horse. On the grasslands of southern Saskatchewan and Alberta, Canada, archaeologists have found evidence of early bison hunters who specialized in bow hunting which has developed by 200 CE. Aboriginal people began to use the bow and arrow somewhat earlier than this, but they used it as a supplement to the atlatl. Archaeologists Ian Dyck and Richard Morlan, writing in the Handbook of North American Indians, report: “Avonlea people were the first to rely almost exclusively on bows and arrows.” With regard to the use of the bow and arrow, J. Roderick Vickers, writing in Plains Indians, A.D. 500-1500, reports: “It is hypothesized that this technology diffused from Asia, perhaps through the mountain interior of British Columbia.” Archaeologists consider Avonlea as a complex, which means that it is a group of artifact types which are found in a chronological sub-division. Avonlea does not, therefore, refer to a specific tribe. There are some who feel that Avonlea is ancestral to the Athapaskan peoples, including Chipewyan, Beaver, and Sarcee. While archaeologists have not uncovered any Avonlea bows, they have found arrowshafts and a distinctive arrow point. The Avonlea arrow points are small and thin, with tiny side notches. These points were originally found at a site in Saskatchewan and were thus named Avonlea after the site. The fine craftsmanship shown in the Avonlea projectile points suggests that strong social control was exercised in their production. J. Roderick Vickers writes: “Assuming that Avonlea competitive success was partly grounded in their innovative weapon system, there may have been magico-religious sanctions associated with production standards and use of the bow and arrow. That is, they may have been an attempt to prevent the spread of their weapon technology, at least in detail, to others.” Avonlea first appears along the Upper and Middle parts of the Saskatchewan River basin and during the next 200 years spreads into the Upper Missouri and Yellowstone River basins in present-day Montana. From its beginnings about 200 CE, Avonlea seems to reach its peak about 800 CE and by 1300 CE it has disappeared. With regard to subsistence, bison seem to have been a major factor in the Avonlea diet. Ian Dyck and Richard Morlan write: “Avonlea hunters were adept at communal hunting methods that allowed them to bring together and dispatch dozens of animals at one time.” Communal hunting included the use of pounds and jumps, such as the buffalo jump at Head-Smashed-In. One of the ways Indian people hunted buffalo was to drive them over a cliff. Scattered across the Northern Plains are thousands of these buffalo jump sites. Many of them were used only once, while others were used repeatedly. For the buffalo jump, several hundred people (sometimes more than a thousand) would come together. 
Archaeologist Jack Brink, in Imagining Head-Smashed-In: Aboriginal Buffalo Hunting on the Northern Plains, writes: “Not only were buffalo jumps an extraordinary amount of work; they were the culmination of thousands of years of shared and passed-on tribal knowledge of the environment, the lay of the land, and the behavior and biology of the buffalo.” The buffalo pound was a way of harvesting large numbers of bison in a similar fashion to the buffalo jump. However, the final kill location was not a cliff, but rather a pound or corral made of wood. Pounds were located in the lightly wooded areas that surround portions of the Great Plains. Here the hunters could find enough wood to build the pound. Using techniques similar to those used in the buffalo jump, the herd would be lured over many miles and then driven into the pound where they would be killed with bows and arrows and spears as they milled around. In addition to hunting bison, the data from the Avonlea archaeological sites show that they also hunted pronghorn antelope, deer, beaver, river otter, hares, and waterfowl. The Avonlea people also used weir fish traps during the spring spawning runs. Like the later horse-mounted Plains Indians, the Avonlea people used tipis. Archaeologists have found tipi rings associated with Avonlea material culture at several sites. The tipi coverings were held down with rocks and thus the archaeological remains are simply a ring of stones, sometimes with a hearth inside. For Indian people using dogs rather than horses to carry loads, the tipis were smaller than those used in the eighteenth and nineteenth centuries. The Avonlea people also made and used pottery. Several pottery types—net-impressed, parallel-grooved, and plain—have been found at Avonlea sites. Pottery vessels usually have a conoidal or “coconut” form. There are a number of hypotheses about what happened to the Avonlea people. One hypothesis sees them having migrated south where they would later emerge as the Navajo and Apache people. Another sees them as staying on the Northern Plains where they contributed to the Old Woman’s phase, which is associated with the North Piegan, Blood, and Gros Ventre. Some feel that Avonlea may have contributed to the Tobacco Plains phase of the Kootenay Valley of the Rocky Mountains. There is also speculation that they were involved in the formation of the village cultures of the Middle Missouri tradition. Ian Dyck and Richard Morlan write: “It is, of course, possible that the fate of the Avonlea culture took more than one twist.” J. Roderick Vickers summarizes Avonlea this way: “In the end, archaeologists must plead ignorance in understanding the appearance of Avonlea on the Northern Plains. It seems we can state that Avonlea is a culture of the western Saskatchewan River basin and that it expanded southward into central Montana, westward over the Rocky Mountains, and northeastward into the forest margins.”
Salt intake is affecting your immunity Table salt consists of sodium chloride. It supplies us with sodium, an important mineral that is essential for proper functioning of the human body. However, the American diet contains dangerously high amounts of sodium, almost 80 percent of which comes from processed and restaurant foods. The human diet, for millions of years, did not contain any added salt—only the sodium present in natural foods, which adds up to about 600–800 milligrams per day. The dietary intake of sodium in the United States today is about 3,500 milligrams per day. Excess dietary salt is most notorious for increasing blood pressure. Populations in pockets of the world that do not salt their food do not have elderly citizens with high blood pressure (also known as hypertension). Americans have a 90 percent lifetime probability of developing high blood pressure. So even if your blood pressure is normal now, if you continue to eat the typical American diet, you will be at risk. Elevated blood pressure accounts for 62 percent of strokes and 49 percent of coronary heart disease. Notably, the risk for heart attack and stroke begins climbing with systolic pressures (the first number in the blood pressure reading) above 115—considered “normal” by most standards. Even if you eat an otherwise healthy diet, and your arteries are free of plaque, hypertension late in life damages the delicate blood vessels of the brain, increasing the risk of hemorrhagic stroke. The American Heart Association, recognizing the significant risks of high blood pressure, has recently dropped their recommended maximum daily sodium intake from 2,300 milligrams to 1,500 milligrams. Salt has additional dangerous effects that are not related to blood pressure. In the 1990s, it was found that the relationship between salt intake and stroke mortality was stronger than the relationship between blood pressure and stroke mortality; this result suggests that salt may have deleterious effects on the cardiovascular system that are not related to blood pressure. Likewise, high blood pressure causes kidney disease, but dietary sodium has damaging effects on the kidneys beyond the indirect effects of high blood pressure. Further research has determined that long-term excess dietary sodium promotes excessive cell growth, leading to thickening of the vessel walls and altered production of structural proteins, leading to stiff blood vessels. In another study, higher sodium intake was associated with greater carotid artery wall thickness, an accurate predictor of future heart attacks and strokes—even in people without high blood pressure. High salt intake is also a risk factor for osteoporosis, because excess dietary sodium promotes urinary calcium loss, leading to calcium loss from bones (and therefore decreased bone density). Daily sodium intakes characteristic of Americans have been associated with increased bone loss at the hip, and sodium restriction reduces markers of bone breakdown. Even in the presence of a high-calcium diet, high salt intake results in net calcium loss from bone. Although postmenopausal women are most vulnerable to these calcium losses, high salt intake in young girls may prevent the attainment of peak bone mass during puberty, putting these girls at risk for osteoporosis later in life. Salt is also the strongest factor relating to stomach cancer. Sodium intake statistics from twenty-four countries have been significantly correlated to stomach cancer mortality rates. 
Additional studies have found positive correlations between salt consumption and gastric cancer incidence. A high-salt diet also increases growth of the ulcer-promoting bacteria (H. pylori) in the stomach, which is a risk factor for gastric cancer. Alarmingly, high sodium intake also correlates with death from all causes. Reducing dietary salt is not only important for those who already have elevated blood pressure; limiting added salt is essential for all of us to remain in good health. Since natural foods supply us with 600–800 milligrams of sodium a day, it is wise to limit any additional sodium, over and above what is in natural food, to just a few hundred milligrams. I recommend no more than 1,000 milligrams total of sodium per day. That means not more than 200–400 milligrams over and above what is found in your natural foods. It is also important to note that expensive and exotic sea salts are still salt. All salt originates from the sea—and so-called sea salts are still over 98 percent sodium chloride, contributing the same amount of sodium per teaspoon as regular salt. Sea salts may contain small amounts of trace minerals, but the amounts are insignificant compared to those in natural plant foods, and the excess sodium doesn’t magically become less harmful due to those minerals. A high-nutrient, vegetable-based diet with little or no added salt is ideal. Salt also deadens the taste buds. This means that if you avoid highly salted and processed foods, you will regain your ability to detect and enjoy the subtle flavors in natural foods and actually experience heightened pleasure from natural, unsalted foods. Since most salt comes from processed foods, avoiding added sodium isn’t difficult. Resist adding salt to foods, and purchase salt-free canned goods and soups. If you must salt your food, do so only after it is on the table and you are ready to eat it—it will taste saltier if the salt is right on the surface. Condiments such as ketchup, mustard, soy sauce, teriyaki sauce, and relish are all high in sodium. Use garlic, onion, fresh or dried herbs, spices, lemon or lime juice, or vinegar to flavor food. Experiment to find salt-free seasonings that you enjoy.
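The sodium arithmetic in the passage can be captured in a couple of lines. The sketch below uses only the figures given in the text (roughly 600-800 milligrams from natural foods and a recommended ceiling of about 1,000 milligrams total); it is an illustration of the author's recommendation, not independent dietary guidance.

```python
# A small illustrative calculator based only on the figures in the text:
# natural foods supply roughly 600-800 mg of sodium per day, and the author's
# recommended ceiling is about 1,000 mg total.

RECOMMENDED_TOTAL_MG = 1000
NATURAL_RANGE_MG = (600, 800)

def added_sodium_allowance(total_limit=RECOMMENDED_TOTAL_MG):
    low, high = NATURAL_RANGE_MG
    return max(0, total_limit - high), max(0, total_limit - low)

lo, hi = added_sodium_allowance()
print(f"Room for added sodium: about {lo}-{hi} mg per day")   # ~200-400 mg
```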
Vegetable processing

Putting foods into metal cans or glass jars is the major food-processing method of the world. It is particularly useful in developing countries where refrigeration is limited or nonexistent. In the canning process, vegetables are often cut into pieces, packed in cans, and put through severe heat treatment to ensure the destruction of bacteria spores. The containers are sealed while hot so as to create a vacuum inside when they are cooled to room temperature. Properly processed canned vegetables can be stored at room temperature for years.

Minor defects of the process, however, will result in bulged cans after long periods of storage. For safety reasons, the contents of these cans should not be consumed. Although in most cases bulged cans are caused by the formation of gas from chemical reactions between the metal cans and their acidic contents, there is a remote possibility that inadequate heat processing did not destroy all bacteria spores. And, even though most heat-resistant spores are nonpathogenic, spores of Clostridium botulinum can survive underprocessing and produce deadly toxins that cause botulism.

Unfortunately, because of the severe heat treatment, some canned vegetables can have inferior quality and less nutritive value than fresh and frozen products. The nutrient most susceptible to destruction in canning is vitamin C. For high-quality products, aseptic canning is practiced. Also known as high-temperature–short-time (HTST) processing, aseptic canning is a process whereby presterilized containers are filled with a sterilized and cooled product and sealed in a sterile atmosphere with a sterile cover. The process avoids the slow heat penetration inherent in the traditional in-container heating process, thus creating products of superior quality.

The canning process can be illustrated by the example of green beans (Phaseolus vulgaris L.). After arrival at the processing plant, the beans are conveyed to size graders. Graders consist of revolving cylinders with slots of various diameters through which the beans fall onto conveyers. The conveyers carry them to snipping machines, where their tips and stems are cut off. The snipped beans then pass over inspection belts, where defective beans are removed. Smaller beans are canned as whole beans, while larger beans are cut crosswise by machine into various lengths. Some smaller beans are cut lengthwise and marketed as French-cut beans. Both the small whole and cut beans are blanched for 1 1/2 to 2 minutes in 82° C (180° F) water and mechanically packed in cans. The cans are then filled with hot water and dry salt or with brine, steam-exhausted for approximately five minutes, and sealed while hot or with steam flow. Depending on the size of the can, they are heat-processed for various periods of time—from 12 minutes at 120° C (250° F) to 36 minutes at 115° C (240° F). The cans are cooled to room temperature, labeled, and packaged for storage or immediate distribution.

Frozen foods have outstanding quality and nutritive value. Indeed, some frozen vegetables, such as green peas and sweet corn, may be superior in flavour to fresh produce. The high quality of frozen foods is mainly due to the development of a technology known as the individually quick-frozen (IQF) method. IQF is a method that does not allow large ice crystals to form in vegetable cells. Also, since each piece is individually frozen, particles do not cohere, and the final product is not frozen into a solid block.
Various freezing techniques are commonly used in the preservation of vegetables. These include blast freezing, plate freezing, belt-tunnel freezing, fluidized-bed freezing, cryogenic freezing, and dehydrofreezing. The choice of method depends on the quality of end product desired, the kind of vegetable to be frozen, capital limitations, and whether or not the products are to be stored as bulk or as individual retail packages. Most vegetables frozen commercially are intended for direct consumer use or for further processing into soups, prepared meals, or specialty items. Advances in packaging materials and techniques have led to bulk frozen products being stored in large retortable pouches. Many restaurants and institutions prefer bulk frozen soups packaged in these pouches because of their quality and convenience.

One of the most important vegetable crops preserved by freezing is sweet corn (Zea mays L.). Both corn on the cob and cut corn are frozen. Sweet corn must be harvested while still young and tender and while the kernels are full of “milk.” After the ears are mechanically harvested, they are promptly hauled to the processing plant, where they are automatically dehusked and desilked. Probably more than any other vegetable, sweet corn loses its quality rapidly after harvest. Frozen corn maintains high quality by being processed within a few hours of picking.

Corn on the cob is a particularly difficult vegetable to freeze. The dehusked and desilked ears are thoroughly washed and blanched in steam for 6 to 11 minutes and then promptly cooled. However, even an 11-minute blanch in steam does not completely inactivate all the enzymes in the cob portion. It is believed that the off-flavour frequently found in home-frozen corn on the cob comes from off-flavours produced in the cob that migrate out to the kernels. Blanched and cooled corn is quickly frozen by the fluidized-bed freezing process before packing.

Blanched whole-kernel corn is produced either by blanching the corn on the cob before cutting; by partially blanching on the cob to set the milk, then cutting and blanching again; or by cutting before blanching. The “split” method of blanching twice produces the highest-quality product. After the corn is cut, impurities such as husk, silk, and imperfect kernels must be removed by either brine flotation or froth washing. In both methods the sound corn stays at the bottom while the impurities float off the tank. Whole-kernel corn can be frozen quickly using the individually quick-frozen method. Frozen corn can be packaged into polyethylene bags or cardboard cartons and labeled for retail, or it can be bulk-stored for further processing into components of value-added products such as frozen dinners.
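The canning section above notes that cans are sealed while hot so that a partial vacuum forms on cooling. A rough way to see the size of that effect is to treat the trapped headspace air as an ideal gas; this is only an illustrative sketch with assumed temperatures, and it understates the real vacuum because condensing steam in the headspace adds to it.

```python
# A rough illustration (not from the text) of why sealing hot creates a partial
# vacuum: air trapped in the headspace at sealing temperature contracts as the
# can cools (ideal-gas behaviour). Condensation of steam in the headspace makes
# the real vacuum stronger still, so this is a conservative estimate.

ATM_KPA = 101.325

def headspace_pressure_after_cooling(seal_temp_c, cool_temp_c, seal_pressure_kpa=ATM_KPA):
    return seal_pressure_kpa * (cool_temp_c + 273.15) / (seal_temp_c + 273.15)

p = headspace_pressure_after_cooling(seal_temp_c=95, cool_temp_c=25)   # assumed temperatures
print(f"headspace pressure after cooling: {p:.1f} kPa "
      f"(~{ATM_KPA - p:.0f} kPa below atmospheric)")
```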
Learning language

When you are surrounded by words, it is easy to learn the language that people in a community speak. Children learn language as they listen to people talk to each other and watch what happens, and as they talk to other people. Language becomes a way for them to understand their experiences and how the world around them works.

Learning a spoken language is difficult for children who cannot hear

When children cannot hear well, they will have difficulty understanding simple spoken words. And children need to understand many simple words to learn a language. When they know many words, they can learn more advanced communication skills, such as speaking in sentences or engaging in conversation. Children who are deaf or cannot hear well need help to learn skills like saying simple words or doing things that depend on simple communication, like taking turns.

How language helps the mind develop

Language enables children to think, to plan, to understand the world around them, and to be a part of a community. Without language, children cannot develop their minds. When children cannot hear, and do not get help learning a language to communicate, they will face problems in their mental development. Many parents with young deaf children or children with hearing loss are glad if the children learn a few simple words or gestures. But children need more than this. They need to learn a language.

A deaf child needs to learn language early, so that she can use it to talk to herself, that is, to think. Expressing ideas in words makes it possible to think about those ideas. The bigger shirt must be Papa's. First I add the egg. Then I mix in flour until the dough is sticky.

|Because she knows the words for bigger and smaller, Amina can learn how to compare sizes. Without language she cannot learn this.||Because she has words for doing things in order, Rosa can plan.|

She also needs language to express her ideas to others, to tell people what she wants or needs. She needs language to understand explanations. Through communicating with others, she learns about the world around her. This helps her mind to develop and lets her relate to people.

|Without language a child may not know why he must stay away from dangers. Dan cannot understand why they must keep the well covered.||Without a way for her mother to explain, Evi does not understand how her mother knows that someone is at the door.|
The latest news from academia, regulators research labs and other things of interest Posted: Apr 13, 2016 Electrons slide through the hourglass on surface of bizarre material (Nanowerk News) A team of researchers at Princeton University has predicted the existence of a new state of matter in which current flows only through a set of surface channels that resemble an hourglass. These channels are created through the action of a newly theorized particle, dubbed the "hourglass fermion," which arises due to a special property of the material. The tuning of this property can sequentially create and destroy the hourglass fermions, suggesting a range of potential applications such as efficient transistor switching. In an article published in the journal Nature this week, the researchers theorize the existence of these hourglass fermions in crystals made of potassium and mercury combined with either antimony, arsenic or bismuth. The crystals are insulators in their interiors and on their top and bottom surfaces, but perfect conductors on two of their sides where the fermions create hourglass-shaped channels that enable electrons to flow. The research was performed by Princeton University postdoctoral researcher Zhijun Wang and former graduate student Aris Alexandradinata, now a postdoctoral researcher at Yale University, working with Robert Cava, Princeton's Russell Wellman Moore Professor of Chemistry, and Associate Professor of Physics B. Andrei Bernevig. This is an illustration of the hourglass fermion predicted to lie on the surface of crystals of potassium mercury antimony. (Image: Bernevig, et al., Princeton University) The new hourglass fermion exists - theoretically for now, until detected experimentally - in a family of materials broadly called topological insulators, which were first observed experimentally in the mid-2000s and have since become one of the most active and interesting branches of quantum physics research. The bulk, or interior, acts as an insulator, which means it prohibits the travel of electrons, but the surface of the material is conducting, allowing electrons to travel through a set of channels created by particles known as Dirac fermions. Fermions are a family of subatomic particles that include electrons, protons and neutrons, but they also appear in nature in many lesser known forms such as the massless Dirac, Majorana and Weyl fermions. After years of searching for these particles in high-energy accelerators and other large-scale experiments, researchers found that they can detect these elusive fermions in table-top laboratory experiments on crystals. Over the past few years, researchers have used these "condensed matter" systems to first predict and then confirm the existence of Majorana and Weyl fermions in a wide array of materials. The next frontier in condensed matter physics is the discovery of particles that can exist in the so-called "material universe" inside crystals but not in the universe at large. Such particles come about due to the properties of the materials but cannot exist outside the crystal the way other subatomic particles do. Classifying and discovering all the possible particles that can exist in the material universe is just beginning. The work reported by the Princeton team lays the foundations of one of the most interesting of these systems, according to the researchers. 
In the current study, the researchers theorize that the laws of physics prohibit current from flowing in the crystal's bulk and top and bottom surfaces, but permit electron flow in completely different ways on the side surfaces through the hourglass-shaped channels. This type of channel, known more precisely as a dispersion, was completely unknown before. The researchers then asked whether this dispersion is a generic feature found in certain materials or just a fluke arising from a specific crystal model. It turned out to be no fluke. A long-standing collaboration with Cava, a material science expert, enabled Bernevig, Wang, and Alexandradinata to uncover more materials exhibiting this remarkable behavior. "Our hourglass fermion is curiously movable but unremovable," said Bernevig. "It is impossible to remove the hourglass channel from the surface of the crystal." This is an illustration of the complicated dispersion of the surface fermion arising from a background of mercury and bismuth atoms (blue and red). (Image: Bernevig, et al., Princeton University) This robust property arises from the intertwining of spatial symmetries, which are characteristics of the crystal structure, with the modern band theory of crystals, Bernevig explained. Spatial symmetries in crystals are distinguished by whether a crystal can be rotated or otherwise moved without altering its basic character. In a paper published in Physical Review X this week to coincide with the Nature paper, the team detailed the theory behind how the crystal structure leads to the existence of the hourglass fermion. "Our work demonstrates how this basic geometric property gives rise to a new topology in band insulators," Alexandradinata said. The hourglass is a robust consequence of spatial symmetries that translate the origin by a fraction of the lattice period, he explained. "Surface bands connect one hourglass to the next in an unbreakable zigzag pattern," he said. The team found esoteric connections between their system and high-level mathematics. Origin-translating symmetries, also called non-symmorphic symmetries, are described by a field of mathematics called cohomology, which classifies all the possible crystal symmetries in nature. For example, cohomology gives the answer to how many crystal types exist in three spatial dimensions: 230. "The hourglass theory is the first of its kind that describes time-reversal-symmetric crystals, and moreover, the crystals in our study are the first topological material class which relies on origin-translating symmetries," added Wang. In the cohomological perspective, there are 230 ways to combine origin-preserving symmetries with real-space translations, known as the "space groups." The theoretical framework to understand the crystals in the current study requires a cohomological description with momentum-space translations. Out of the 230 space groups in which materials can exist in nature, 157 are non-symmorphic, meaning they can potentially host interesting electronic behavior such as the hourglass fermion. "The exploration of the behavior of these interesting fermions, their mathematical description, and the materials where they can be observed, is poised to create an onslaught of activity in quantum, solid state and material physics," Cava said. "We are just at the beginning."
Signs and SymptomsHow Are You Feeling? Swine flu is a respiratory disease of pigs that doesn’t normally impact humans. However, it is contagious and is currently spreading from human to human. This typically occurs the same way as seasonal flu: by coming in contact with infected people who are coughing or sneezing. Signs & Symptoms The symptoms of swine flu in people are similar to the symptoms of regular human flu and include: - Sore throat - Body aches Some people have reported diarrhea and vomiting associated with swine flu. In the past, severe illness (pneumonia and respiratory failure) and deaths have been reported with swine flu infection in people. Like seasonal flu, swine flu may cause a worsening of underlying chronic medical conditions. Take this condition seriously, as swine flu varies from mild to severe. If you feel sick, see a doctor. You may need to limit your contact with others so you don’t infect them. And avoid spreading germs by: - Not touching your eyes, nose or mouth - Covering your nose and mouth with a tissue when you cough or sneeze (and then throwing that tissue out!) - Washing your hands often with soap and water, especially after coughing or sneezing, or using alcohol-based hand cleaners Emergency Warning Signs Seek emergency medical care if you become ill and experience any of the following warning signs: In children emergency warning signs that need urgent medical attention include: - Fast breathing or trouble breathing - Bluish skin color - Not drinking enough fluids - Not waking up or not interacting - Being so irritable that the child does not want to be held - Flu-like symptoms improve but then return with fever and worse cough - Fever with a rash In adults, emergency warning signs that need urgent medical attention include: - Difficulty breathing or shortness of breath - Pain or pressure in the chest or abdomen - Sudden dizziness - Severe or persistent vomiting
When NASA's newest rover arrives on Mars Sunday night (Aug. 5), it will be carrying a host of state-of-the-art instruments, including the head-mounted, rock-zapping laser called ChemCam. The 1-ton Curiosity rover aims to determine if its landing site, the 96-mile-wide (154 kilometers) Gale Crater, can or ever could support microbial life. ChemCam will play a vital role in this quest by allowing the rolling robot to study the composition of rocks from afar. Mounted to Curiosity's "head" just above its camera "eyes," ChemCam combines a powerful laser with a telescope and spectrometer that can analyze the light emitted by zapped materials, thereby determining the chemistry of Mars rocks with unprecedented precision. Using technology created at the U.S. Department of Energy's Los Alamos National Laboratory in New Mexico, ChemCam will focus a beam of infrared light onto a target from its French-built laser, vaporizing it with over a million watts of energy from up to 23 feet (7 meters) away. ChemCam's telescope will observe the process, while its spectrometer — which is sensitive to light from every element on the periodic table — will analyze emissions from the resulting plasma, telling scientists what lies within. "ChemCam is designed to look for lighter elements such as carbon, nitrogen and oxygen, all of which are crucial for life," said Los Alamos' Roger Wiens, ChemCam principal investigator. "The system can provide immediate, unambiguous detection of water from frost or other sources on the surface, as well as carbon — a basic building block of life as well as a possible byproduct of life. This makes the ChemCam a vital component of Curiosity's mission." Because ChemCam uses a laser, Curiosity can examine many targets quickly, without having to drive right up to them. Even sitting still, the rover will be able to take up to a dozen measurements a day, scientists have said. And the dustiest rocks shouldn't pose much of a problem for ChemCam, which can clear away loose surface material with a zap or two. In addition to searching for the building blocks of life hidden inside rocks, ChemCam will serve as a scout for future explorers by helping identify the potential toxicity of Martian soil and dust. If astronauts ever land on Mars, they're going to get dusty in the process — possibly even more so than they did on the moon. It's important to know if Mars' dust contains anything dangerous like lead or arsenic, researchers say. ChemCam's laser-induced breakdown spectroscopy (LIBS) technology has been used on Earth for environmental monitoring, seafloor studies and cancer detection. But Curiosity will be taking it to new heights by deploying the technology on the surface of another planet.
Rotavirus Infections in Cats The rotavirus is a double-stranded, wheel-shaped RNA virus which causes inflammation of the intestines and in severe cases, dysfunction in the intestinal walls. This virus is the leading cause of diarrhea and gastrointestinal upset in cats. And although it can be seen in cats at any age, kittens are more prone to rotavirus infections. Dogs are also susceptible to rotavirus infections. If you would like to learn more about how this disease affects dogs, please visit this page in the PetMD health library. Symptoms and Types The primary symptom of a rotavirus infection is mild to moderate watery diarrhea. In severe cases, cats may die from dehydration, extreme weight loss, and/or an unwillingness to eat. The rotavirus is typically transmitted through contact with contaminated fecal matter. Cats with underdeveloped or weak immune systems and those living in overly stressed environments are most at risk for the infection. Your veterinarian will try to rule out the following causes for intestinal inflammation before diagnosing rotavirus: feline parvovirus, feline leukemia virus (FeLV), feline coronavirus, feline astrovirus, and feline calicivirus. Other causes for inflammation of the intestine may include fungal infections, parasites, allergies, or exposure to toxins. Lab tests to detect the virus may include laboratory examination of tissue samples, or microscopic exploration of feces. One such test is ELISA (or enzyme-linked immunosorbent assay), a biochemical technique. Your veterinarian may also be able to identify the virus using a technique called virus isolation. To formally diagnose rotavirus, a veterinarian will examine the intestinal villi (the small hairs lining the intestine) and other cells within the intestinal wall, using special instruments to detect the rotavirus and antibodies the virus may have produced. Once the rotavirus is formally diagnosed, your veterinarian will begin treatment to ensure a prompt recovery. Treatment involves symptomatic relief to relieve the cat's diarrhea and to help replace lost fluids and electrolytes. Your doctor will also advise temporary dietary restrictions to help alleviate some of your cat's intestinal discomfort. Antibiotics are generally not prescribed because they are only useful for bacterial, not viral infections. Living and Management Because rotaviruses are zoonotic, it is important that pet owners keep infected cats away from young children, infants in particular. When handling the fecal matter of an infected animal, it is especially important to use precautions, such as wearing latex gloves and disinfecting the animal's living area. Humans living in developing countries are most at risk, often experiencing life-threatening diarrhea. Estimates suggest that in developing countries up to 500,000 children under age five die every year from rotavirus infections. The best protection for a kitten is to consume the milk of an immune cat queen, as they produce antibodies that may protect against the rotavirus. Image: Tyler Olson via Shutterstock
Hail is a byproduct of severe thunderstorms and tornadoes. It is produced when updrafts within a storm carry water droplets to a height in the sky where freezing occurs. These ice particles continue to grow as they may be dropped and picked up again, adding another layer of ice to the droplet. Eventually the hail becomes too heavy for the updraft to support and falls to the ground. With a diameter of 0.75 inch or greater, hail is considered to be severe. Hail occurs during severe thunderstorms and tornadoes so look and listen for warning information about these storms. Large hail is often observed immediately north of a tornado, but the presence of hail doesn’t always mean a tornado, and the absence of hail doesn’t mean there is no risk of tornadoes.
You can use the uname function to find out some information about the type of computer your program is running on. This function and the associated data type are declared in the header file sys/utsname.h. As a bonus, uname also gives some information identifying the particular system your program is running on. This is the same information which you can get with the functions targeted to this purpose described in Host Identification.
The utsname structure is used to hold the information returned by the uname function. It has the following members:
sysname: This is the name of the operating system in use.
release: This is the current release level of the operating system implementation.
version: This is the current version level within the release of the operating system.
machine: This is a description of the type of hardware that is in use. Some systems provide a mechanism to interrogate the kernel directly for this information. On systems without such a mechanism, the GNU C Library fills in this field based on the configuration name that was specified when building and installing the library. GNU uses a three-part name to describe a system configuration; the three parts are cpu, manufacturer and system-type, and they are separated with dashes. Any possible combination of three names is potentially meaningful, but most such combinations are meaningless in practice and even the meaningful ones are not necessarily supported by any particular GNU program. Since the value in machine is supposed to describe just the hardware, it consists of the first two parts of the configuration name: 'cpu-manufacturer'.
nodename: This is the host name of this particular computer. In the GNU C Library, the value is the same as that returned by gethostname; see Host Identification. gethostname() is implemented with a call to uname().
domainname: This is the NIS or YP domain name. It is the same value returned by getdomainname; see Host Identification. This element is a relatively recent invention and use of it is not as portable as use of the rest of the structure.
Function: int uname (struct utsname *info)
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
The uname function fills in the structure pointed to by info with information about the operating system and host machine. A non-negative value indicates that the data was successfully stored. -1 as the value indicates an error. The only error possible is EFAULT, which we normally don't mention as it is always a possibility.
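As a minimal sketch of how this interface is used (assuming a POSIX system where sys/utsname.h is available; the domainname member is a GNU extension and is left out for portability), the following short C program calls uname and prints the portable members of struct utsname:

#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname info;

    /* uname returns a non-negative value on success and -1 on error. */
    if (uname(&info) == -1) {
        perror("uname");
        return 1;
    }

    /* Print the portable members of struct utsname. */
    printf("sysname:  %s\n", info.sysname);
    printf("nodename: %s\n", info.nodename);
    printf("release:  %s\n", info.release);
    printf("version:  %s\n", info.version);
    printf("machine:  %s\n", info.machine);
    return 0;
}

Compiled and run on a typical GNU/Linux machine, the output should resemble what the uname -a shell command reports, since both draw on the same kernel-supplied data.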
Located at Layer 3 of the Open Systems Interconnection (OSI) communications model, the network layer's primary function is to move data into and through other networks. Network layer protocols accomplish this goal by packaging data with correct network address information, selecting the appropriate network routes and forwarding the packaged data up the stack to the transport layer (Layer 4). Existing protocols that generally map to the OSI network layer include the IP portion of the Transmission Control Protocol/Internet Protocol (TCP/IP) model -- both IPv4 and IPv6 -- as well as NetWare Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX). The routing information contained within a packet includes the source of the sending host and the eventual destination of the remote host. This information is contained within the network layer header that encapsulates network frames at the data link layer (Layer 2). The key difference -- and importance -- between transport information contained at Layer 2 when compared to transport information contained at the network layer is that the information can move beyond the local network to reach hosts in remote network locations. Functions of the network layer The primary function of the network layer is to permit different networks to be interconnected. It does this by forwarding packets to network routers, which rely on algorithms to determine the best paths for the data to travel. These paths are known as virtual circuits. The network layer relies on the Internet Control Message Protocol (ICMP) for error handling and diagnostics to ensure packets are sent correctly. Quality of service (QoS) is also available to permit certain traffic to be prioritized over other traffic. The network layer can support either connection-oriented or connectionless networks, but such a network can only be of one type and not both.
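To make the network-layer header concrete, here is a small C sketch that pulls out the fields a router acts on (version, header length, TTL, protocol, source and destination addresses) from the fixed 20-byte portion of an IPv4 header. The packet bytes, addresses, and function names below are invented for illustration and are not taken from any particular capture or vendor API; the checksum bytes are left as zero because the example does not validate them.

#include <stdio.h>
#include <stdint.h>

/* Minimal view of the fixed part of an IPv4 header (RFC 791). */
struct ipv4_header {
    uint8_t  version;   /* 4 for IPv4                               */
    uint8_t  ihl;       /* header length in 32-bit words            */
    uint8_t  ttl;       /* time to live, decremented by each router */
    uint8_t  protocol;  /* 1 = ICMP, 6 = TCP, 17 = UDP              */
    uint32_t src;       /* source address                           */
    uint32_t dst;       /* destination address                      */
};

static struct ipv4_header parse_ipv4(const uint8_t *p)
{
    struct ipv4_header h;
    h.version  = p[0] >> 4;
    h.ihl      = p[0] & 0x0F;
    h.ttl      = p[8];
    h.protocol = p[9];
    h.src = ((uint32_t)p[12] << 24) | ((uint32_t)p[13] << 16) | ((uint32_t)p[14] << 8) | p[15];
    h.dst = ((uint32_t)p[16] << 24) | ((uint32_t)p[17] << 16) | ((uint32_t)p[18] << 8) | p[19];
    return h;
}

static void print_addr(uint32_t a)
{
    printf("%u.%u.%u.%u", (a >> 24) & 0xFF, (a >> 16) & 0xFF, (a >> 8) & 0xFF, a & 0xFF);
}

int main(void)
{
    /* Hypothetical packet: 192.168.1.10 -> 10.0.0.5, TTL 64, protocol TCP. */
    const uint8_t packet[20] = {
        0x45, 0x00, 0x00, 0x3C,   /* version/IHL, DSCP/ECN, total length    */
        0x1C, 0x46, 0x40, 0x00,   /* identification, flags/fragment offset  */
        0x40, 0x06, 0x00, 0x00,   /* TTL = 64, protocol = 6 (TCP), checksum */
        0xC0, 0xA8, 0x01, 0x0A,   /* source address 192.168.1.10            */
        0x0A, 0x00, 0x00, 0x05    /* destination address 10.0.0.5           */
    };

    struct ipv4_header h = parse_ipv4(packet);
    printf("IPv%u, header %u words, TTL %u, protocol %u\n", h.version, h.ihl, h.ttl, h.protocol);
    printf("from ");
    print_addr(h.src);
    printf(" to ");
    print_addr(h.dst);
    printf("\n");
    return 0;
}

A router performing the forwarding described above reads exactly these fields: it decrements the TTL, discards the packet if the TTL reaches zero, and looks up the destination address in its routing table to choose the outgoing interface.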
Fly vs. fly July 29, 1999 Researchers at the University of Chicago have discovered two offensive mechanisms male fruit flies use to ensure that more of their genes get passed on to the next generation--displacement and incapacitation of a previous male's sperm. In most insect species, the second male to copulate with a female sires most of her offspring. Scientists have long puzzled over this strange phenomenon, also seen in birds and some arachnids. Female fruit flies mate with multiple males, storing the sperm in three specialized storage organs (the long, tubular seminal receptacle and two mushroom-shaped spermathecae) and using it as needed to fertilize eggs. However, the odds of becoming a father aren't equal for all the males. The last fruit fly to mate with the female tends to sire the most offspring. "Not only are the flies in competition with each other to mate with a female, but their sperm are in competition to fertilize the eggs once inside the female," says Catherine Price, PhD, first author of the July 29, 1999 Nature paper. "This leads us to believe that the males have evolved mechanisms for encouraging females to use their sperm, and females may have evolved means of mediating competition between sperm from different males." Price and Jerry Coyne, PhD, professor of ecology and evolution at the University of Chicago and an author of the paper, concentrated on mechanisms the male uses for ensuring paternity, focusing on the apparent displacement and incapacitation of stored sperm by the ejaculate of later-mating males. The researchers obtained male fruit flies whose sperm had been labeled with green fluorescent protein (GFP), enabling them for the first time to distinguish first from second male sperm in the female's reproductive tract. When they mated females to males with the GFP-labeled sperm, and then to males without the label, they counted far fewer fluorescent sperm in the seminal receptacle than if the female hadn't been re-mated. "The first male's sperm seems to have been physically displaced here, but where it goes remains somewhat of a mystery," says Coyne. The displacement occurred shortly after mating, but only if the second male had viable sperm. Seminal fluid alone could not cause displacement of stored sperm. Coyne and Price also noticed that the number of stored first male sperm used to fertilize eggs dropped considerably after the female was re-mated. This "sperm incapacitation" effect became much stronger as more time passed between the first and second matings. When the second mating was allowed to take place two days after the first, the female produced fewer offspring using first male sperm because of its loss from the seminal receptacle. After seven days between matings, the loss of first male offspring was more severe, due to both incapacitation of first male sperm and displacement. The researchers were able to rule out the possibility that all the first male's sperm had been used up by using males with GFP-labeled sperm for the first mating. "There were just as many first male sperm stored as we would expect after seven days," says Coyne. The same incapacitation effect was observed even when the second male delivered seminal fluid alone. "As the sperm sit in the female's storage organs, they must undergo some change that makes them more susceptible to damage by something in the second male's seminal fluid." "Previous research has shown that a fly can incapacitate and displace his own sperm if he mates with the same female twice.
So we know that second male sperm precedence does not rest on a genetic difference between the sperm of the first and second male," says Coyne. The evolutionary underpinnings of second male sperm precedence are still unclear, especially since the reproductive interests of the male and female are different. "It may be in the female's best interest to get rid of old sperm because it could get damaged if it's stored for too long," says Price. But for the male, the truth is in the numbers. It is to his evolutionary advantage to fertilize as many eggs as possible and to prevent his competitors from fertilizing as many eggs as possible. "The mystery is why the second male nearly always defeats the first male," Price says. Some species have evolved infinitely more elaborate means of ousting sperm from the female than the fruit fly. In rove beetles, the male deposits a sperm packet that expands inside the female like a balloon, literally pushing the first male's sperm out. The female then uses a specialized tooth to pop the packet and release the new sperm. In crickets, any remaining first male sperm gets eaten before the female is inseminated again. "Second male precedence is so common that we're really interested to learn that there are multiple complex mechanisms behind it, even within a single species," says Price. "By studying the balance between cooperation and antagonism between males and females, we may gain a greater understanding of what happens in a fertilized female."
How the Earth became a snowball The break-up of ancient land masses plunged the Earth into a freezing white hell that lasted millions of years, U.S. and French researchers suggest. This created 'snowball Earth', where ice sheets covered continents and seas froze almost down to the equator, an event that occurred at least twice between 800 million and 550 million years ago. How these brutally protracted Ice Ages unfolded had always been a puzzle. Some scientists speculate that the Sun abruptly cooled for a while or that the Earth tilted on its axis or experienced an orbital blip that dramatically reduced solar warmth. But this latest research throws light on a little-explored theory: how tectonic wrenches that ripped apart the Earth's land surface provoked a runaway icehouse effect. At the time, the Earth's future continents formed a super-continent dubbed Rodinia, an entity so vast that rainfall, brought by winds from the oceans, failed to travel far inland. When Rodinia pulled apart, breaking up into smaller pieces that eventually formed today's continents, rainfall patterns changed dramatically. Rain tumbled over basalt rocks, freshly spewed from vast volcanic eruptions. That initiated a well-known reaction between water and calcium silicate, in which carbon dioxide (CO2) molecules were taken from the air and sequestered in calcium carbonate, which was then washed down to the seas. But the computer model published today suggested that the sucking of the greenhouse gas CO2 from the air led to a catastrophic cooling. This is the opposite of the greenhouse effect, where rising CO2 levels have been blamed for global warming. According to the simulation, before Rodinia broke up about 800 million years ago, CO2 concentrations were about 1830 parts per million; and the mean global temperature was 10.8°C. Fast-forward to Rodinia's break-up, 50 million years later, and the picture was greatly different. CO2 levels are at 510 parts per million and the planet's mean temperature was a frigid 2°C. "Tectonic changes could have triggered a progressive transition from a 'greenhouse' to an 'icehouse' climate during the neo-Proterozoic Era," the authors said. Combine this with the rock and rainfall reaction, and the simulation "results in a snowball glaciation".
Spring is known for its strong storm systems that can create violent twisters. However, it’s not the only season known for tornadoes. Autumn is considered the “second” tornado season. “The second half of October and especially November can often be a second season for tornadoes and severe thunderstorms,” said tornado expert Dr. Greg Forbes. “In many ways, this is the counterpart to spring, when strong fronts and upper-air systems march across the United States. When enough warm, moist air accompanies these weather systems, the unstable conditions yield severe thunderstorms and sometimes tornadoes.” May is still the peak month for tornadoes. Up to 52 percent of September’s tornado outbreaks are due to landfalling tropical storms and hurricanes. October and November’s tornadoes are caused by strong cold fronts and low pressure systems affecting the South and sometimes the Midwest.
This activity engages learners in exploring the impact of climate change on arctic sea ice in the Bering Sea. They graph and analyze sea ice extent data, conduct a lab on thermal expansion of water, and then observe how a scientist collects long-term data on a bird population.
Here is a range of short activity ideas for the topic Changing Perceptions. Work with students to generate class working definitions of the words 'stereotype', 'prejudice' and 'discrimination'. These words are often used interchangeably although they have very distinct definitions. They have a clear progressive link, and so by understanding the meanings of the words, students can come to better understand the concepts. Our suggested definitions: Stereotypes – these are beliefs held about a group of people or type of person. A stereotype states that all people who belong to a certain group are the same; they think, dress, act, and talk the same way. Prejudice – this is the action of 'pre-judging' someone; forming an opinion about someone or a group of people that is not based on reason or actual experience. Discrimination – treating someone differently, usually worse, because of who they are. Discrimination or a Fair Decision?: In 2005, a shopping centre in England made a bold move which caused outrage from many people who said it was a form of discrimination. Ask students to read the newspaper article below and discuss whether they think the decision was discrimination or a fair decision. (n.b. a 'hoodie' is a large jumper with a hood) Shopping Mall bans 'Hoodies': Hoodies and baseball caps have been banned at a shopping centre in Kent, in an attempt to tackle anti-social behaviour. The move was supported by local police, who said that it would reduce intimidating conduct in the shopping area. Action was taken by the shopping mall after a series of anti-social incidents involving youths, occurring primarily in the evenings and at weekends. The centre management claims that youths who wear baseball caps and hoodies create an intimidating environment in the shopping centre, which is driving customers away. As these items of clothing obscure the perpetrators' faces, other guests at the retail centre feel uncomfortable. The shopping centre also stated that by wearing hoodies and baseball caps, CCTV networks were rendered ineffective as faces could not be registered. People who have Changed Perceptions: Over time, as human rights and equality have become increasingly important, people have challenged the stereotypes which they have encountered. Give students fact files on a few prominent people who have done this. When they have read through the information below, choose one of the people to research in more detail, or someone else who has challenged perceptions. It might be someone who is very famous in your country. Make an in-depth profile about them to present to the class. Why not do it in the style of a newspaper article or comic strip? Which other people do you know of who have challenged perceptions? Who: Martin Luther King Jr When: 1950s and 1960s What: Campaigned for racial equality in the USA Who: Germaine Greer When: 1970s–present What: Key role in modern feminism to change the perception of women in modern society Who: Daniel Witthaus What: Challenges homophobia and changes perceptions of homosexuality Challenging Perceptions in Films: Ask students to think about which films support stereotypes and which portray characters that do not follow stereotypes and therefore try to work against them. Are we influenced by what we see in films? Perceptions of Young People: Show this video to your class.
It has been said that in some countries, in recent years, the relationship between the younger and older generations has been deteriorating and that young people are seen principally as a nuisance. "Young people are like planes, you only hear about them once they crash." Members of the UK Youth Parliament made a video to challenge the negative stereotypes of young people which feature in the media. Watch the video and share your opinions with the class. (Made by UNICEF uniceftagd/YouTube)
For this lesson we are going to fill in a couple of concepts that we will need before we go further with directories. Wildcards are characters that can be used to stand in for unknown characters in file names. In card games, a wildcard is a card that can match up with any other card. In DOS, wildcard characters can match up with any character that is allowable in a file name. There are two wildcards in DOS:
* = matches up with any combination of allowable characters
? = matches up with any single allowable character
Of course, since these two characters are used for wildcards, they are not allowable in filenames themselves. A filename like myfile?.txt would not be allowed. If you tried to create a file with this name you would get an error message "Bad file name." But wildcards are very useful in any DOS command which uses a filename as an argument (which is most DOS commands, come to think of it).
The asterisk character, *, can stand in for any number of characters. Some examples of commands using this wildcard:
C:\>del *.doc
This command would delete every file with the doc extension from the root directory of C:. So files like myfile.doc, testfile.doc, and 123.doc would all be deleted.
C:\>copy ab*.txt a:
This command would copy every file that began with ab and had an extension of txt to the floppy drive A:. So files like abstract.txt, abalone.txt, and abba.txt would all be copied.
C:\>del c:\temp\*.*
This is the fastest way to clean out a directory. This command will delete every file in the directory C:\temp\. The first asterisk covers every filename, and the second one covers every extension.
The question mark wildcard, ?, stands in for any single character. Some examples:
C:\>del ?.doc
This command would only delete files that had a single-character filename and a doc extension from the root directory. So a file like a.doc or 1.doc is history, but a file like io.doc is perfectly safe, since it has two characters.
C:\>copy ab?.txt a:
This command would copy any file with a three-letter name, of which the first two letters were ab, and with a txt extension, to the floppy drive A:. So files like abz.txt and ab2.txt would be copied.
You can combine these wildcards in any command as well. A combined pattern can be very selective. It might, for example, look in the temp directory for files that had anywhere from 1 to 5 beginning characters, followed by ab followed by one character, and which had an extension of do followed by any one character, and then delete any such files as matched. Examples of matching files would be itab3.dox, mearabt.doq, and 123abc.doc. But the file allabon.doc would not be deleted because it does not match. It has two characters following the letters ab in the filename, and the pattern specified one character in that position. (A short code sketch of these matching rules appears at the end of this lesson.)
Every file in DOS has four attributes. These are Read-only, Archive, System, and Hidden. As we saw in lesson 9, each file has an entry in the directory, and in that entry there are four bits, one each for the four attributes. These attributes are turned on if the bit is set to 1, and turned off if it is set to 0.
The Read-only attribute, if it is set to on, will let you read the contents of a file, but you cannot modify it in any way. If you turn the read-only attribute off, you can modify, delete, or move the file.
The Archive bit is set on when a file is first created, and then set off when the file has been backed up by a good backup software program. If the file is ever modified, the archive bit is turned back on.
That way, the software that does the backup can look at the archive bit on each file to determine if it needs to be backed up.
The System attribute is used to mark a file as a system file. In earlier versions of DOS, files marked "system" were completely off-limits without specialized utilities, but now the attribute serves mostly as a warning.
The Hidden attribute is used to prevent a file from being seen by other commands. If you try to clean out a subdirectory (such as by using the DEL *.* command), then try to remove the subdirectory, and then get an error that the subdirectory is not empty, you have a hidden file in there that was not deleted, even with DEL *.*.
You can view the attributes for any file by using the DOS command ATTRIB. If you run the command without any arguments, you will get a listing of all the attributes that are turned on for every file in the subdirectory:
C:\temp\>attrib
This will give you a list of the files in the C:\temp\ subdirectory; for every attribute that is turned on you will see a letter (A for archive, S for system, H for hidden, and R for read-only) at the beginning of the line. You can also look at the attributes for any one file by including that filename (with an optional path) as an argument in the command, for example attrib myfile.txt. And you can change the attributes for any file by using the following arguments in the command:
- +r = makes a file read-only
- -r = removes the read-only status, makes a file editable again
- +a = turns on the archive bit (i.e. flags this file as not having been backed up)
- -a = turns off the archive bit (i.e. shows this file as having been backed up)
- +s = marks the file as a system file
- -s = removes the system file designation
- +h = makes the file "hidden" to other commands
- -h = reveals the file to other commands
C:\temp\> attrib -h hidfile.txt
The file hidfile.txt will now be visible to other DOS commands. You can chain these together if you wish:
C:\temp\>attrib -h -r myfile.txt
This will both reveal the file myfile.txt and make it editable and deletable. With the two concepts of wildcards and attributes, we are ready for the next lesson, which will make us experts in using the DIR command.
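The wildcard rules from the first half of this lesson are simple enough to express in a few lines of code. The following C sketch implements a simplified model of DOS-style matching in which * matches any run of characters (including none) and ? matches exactly one character; the function name dos_match and the sample file names are purely illustrative, and the 8.3 and case-insensitivity quirks of real DOS matching are deliberately ignored.

#include <stdio.h>

/* Simplified DOS-style wildcard match: '*' matches any run of characters
   (including none), '?' matches exactly one character. */
static int dos_match(const char *pattern, const char *name)
{
    if (*pattern == '\0')
        return *name == '\0';
    if (*pattern == '*')
        /* Either the '*' is done, or it absorbs one more character of the name. */
        return dos_match(pattern + 1, name) ||
               (*name != '\0' && dos_match(pattern, name + 1));
    if (*name != '\0' && (*pattern == '?' || *pattern == *name))
        return dos_match(pattern + 1, name + 1);
    return 0;
}

int main(void)
{
    const char *files[]    = { "abstract.txt", "abba.txt", "abz.txt", "a.doc", "io.doc" };
    const char *patterns[] = { "ab*.txt", "ab?.txt", "?.doc" };
    size_t p, f;

    for (p = 0; p < sizeof(patterns) / sizeof(patterns[0]); p++)
        for (f = 0; f < sizeof(files) / sizeof(files[0]); f++)
            printf("%-8s vs %-13s -> %s\n", patterns[p], files[f],
                   dos_match(patterns[p], files[f]) ? "match" : "no match");
    return 0;
}

Running it reproduces the lesson's claims: ab*.txt accepts abstract.txt and abba.txt, ab?.txt accepts abz.txt but not abba.txt, and ?.doc accepts a.doc but not io.doc.

The four attribute bits from the second half of the lesson can also be inspected programmatically. The sketch below uses the Win32 calls GetFileAttributesA and SetFileAttributesA, so it assumes a modern Windows system with windows.h available rather than classic DOS, and the file name testfile.txt is only a placeholder.

#include <stdio.h>
#include <windows.h>

int main(void)
{
    const char *path = "testfile.txt";   /* placeholder file name */
    DWORD attrs = GetFileAttributesA(path);

    if (attrs == INVALID_FILE_ATTRIBUTES) {
        printf("Could not read attributes for %s\n", path);
        return 1;
    }

    /* Report the same four bits that ATTRIB shows as R, A, S and H. */
    printf("%s: %s%s%s%s\n", path,
           (attrs & FILE_ATTRIBUTE_READONLY) ? "R" : "-",
           (attrs & FILE_ATTRIBUTE_ARCHIVE)  ? "A" : "-",
           (attrs & FILE_ATTRIBUTE_SYSTEM)   ? "S" : "-",
           (attrs & FILE_ATTRIBUTE_HIDDEN)   ? "H" : "-");

    /* Rough equivalent of "attrib -h": clear the hidden bit and write it back. */
    if (!SetFileAttributesA(path, attrs & ~FILE_ATTRIBUTE_HIDDEN)) {
        printf("Could not update attributes for %s\n", path);
        return 1;
    }
    printf("Hidden bit cleared on %s\n", path);
    return 0;
}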
Cahuilla (kuh-Wee-uh) Bryant
Where They Lived: The Cahuilla lived in southern California. Their homelands were located in present-day Riverside and San Diego counties. There were many different landforms in Cahuilla territory. Deserts and forests were also found throughout the area.
Society: The Cahuilla lived in villages. Each village had a leader called a net. The net acted as the ceremonial leader, the economic ruler, and the problem solver for the village. He also arranged hunting and trading trips. The net had a helper called a paxaa. The paxaa helped the net a lot. Cahuilla society also had special speakers and singers. Doctors were usually older women.
Food: The Cahuilla ate plants and animals found in the environment. They planted foods such as corn, beans, squashes, and melons. The Cahuilla gathered wild plants, fruits, beans, nuts, berries, and seeds. Acorns were a seasonal food that they collected from oak trees.