Note: This tutorial was transferred from my old website, and it's now updated with more information.
Before we start, we need to define a few terms:
- PathTool (aka PenTool) - a tool used to create paths.
- Point (aka Anchor) - self explanatory :D
- Line (aka Segment) - a connection between exactly 2 points (no more and no less than 2).
- Control (aka Handle) - a special kind of point that controls the curve of a Line. Each point has 2 controls - one for the line that ends at the point and one for the line that starts at the point.
- Stroke - one or more Points (Anchors) which are connected to each other. A closed stroke is a stroke where the first and last points are connected, and an open stroke is a stroke where the first and last points are not connected.
- Path - a collection of one or more strokes.
|An illustration showing the different parts of a path. | There are 3 strokes in this illustration: (1) a classic curved stroke, (2) a polygonal stroke and (3) an open curved stroke.
PathTool Tutorial - Basics
|The tool options tab showing the options| for the path tool
Here is a basic demonstration of using the PenTool/PathTool:
|An illustration showing how a segment between two| points bends, depending on the positions of its handles
Now we need to explain 2 more things:
- An Active Point is the last point (anchor) that you clicked on. A point can also become active if you drag a line that it's connected to (see the explanations in the illustration below).
- A control (handle) becomes visible only if the point it belongs to is active.
By controlling the handles you can have up to 2 curves per line, so if you want more you will need more points. In the animation above I clicked on the line and dragged, once next to each point.
Exercise - Drawing the tip of a brush
Now let's try to be more practical: let's create something!
|An illustration showing what| you should do
Create the first two points by clicking on the canvas (while using the path tool in Design mode) and then bend the line between them as shown above (by clicking on it and dragging). To curve the path like this, I first clicked next to the bottom point and dragged to the right, and then clicked next to the top point and dragged to the left. Now create the third point by clicking the canvas in the position shown above. What happened?! Why didn't the new point connect to the old one? The answer is simple: GIMP connects the new point only when the active point, just before you created the new one, was at the end of the path (when a point is selected and becomes active it will look empty, and a frame will appear where you would normally see a filled circle). When dragging the segment between two points, both of them become active, so GIMP doesn't know which point to connect to the new point. In order to reconnect this point, make sure it's active (if not, click on it), then Ctrl+Click on one of the points at the end of the path to connect it. Most of the time you won't need to make manual connections, since the points will be connected automatically; a manual connection is only needed when you have more than one active point or when the active point is not at the end of a path. Later on in this tutorial, we'll see how to bend line segments without going through this trouble. We will now delete the newly created point (it was created for educational purposes only, to show you the behaviour above and how to connect points) by pressing the Backspace or Delete key on the keyboard. Now connect our second point to the first one.
Now bend the new line segment as shown below (drag the line first to make the handles emerge, and then fine-tune the curve by moving the handles).
|Finishing the tip of the brush|
Modes:
- Design - default
- Edit - Ctrl
- Move (Stroke) - Alt
Lines (Segments):
- Deleting - holding Shift in Edit (Ctrl) mode and clicking on it. See the comment at the end of the tutorial about straightening curved strokes.
- Moving - clicking on it and dragging in Design mode.
Points (Anchors):
- Adding at the end - Design mode; the active point should be open on one side.
- Adding in the middle of a segment - clicking on a segment in Edit (Ctrl) mode.
- Deleting - holding Shift in Edit (Ctrl) mode and clicking on it.
- Moving - clicking on it and dragging in Design mode.
- Connecting - clicking on a point in Edit (Ctrl) mode while both the active point and the point you are clicking on are unconnected on 1 or 2 sides.
Controls (Handles):
- Deleting - holding Shift in Edit (Ctrl) mode and clicking on it.
- Moving (when visible) - clicking on it and dragging in Design mode.
- Moving (when invisible) - clicking on its Point (Anchor) and dragging in Edit (Ctrl) mode.
Strokes:
- New - clicking anywhere in Design mode when the active point is connected on both sides or when there is more than one active point, or holding Shift in Design mode and then clicking anywhere you want.
- Move - clicking on a stroke and then dragging while in Move (Alt) mode.
PathTool Tutorial - Selections and Stroking
Selection - a selection from a path is basically the area inside the path. When the path is open, it's treated as if there were a straight line from the starting point to the end point. An area covered by only one path is selected, since that is how we defined the selection. An area covered by 2 paths (or any other even number) is not selected: imagine a real-life ring - its edges are 2 circles, one inside the other, and the area covered by both circles is the hole. An area covered by 3 paths (or any other odd number) is selected again, since we have 1 path inside a hole (2 of the paths cancel each other, leaving nothing in this area except the one path, and that is the selection).
|An illustration showing the selection created by different paths|
Stroking - stroking a path (to stroke a path) means to draw along its lines. To do this, simply click on the path you want to stroke in the Paths dialog (to open it, go to Windows→Dockable Dialogs→Paths in the menu) to make it the active path, and then go to Edit→Stroke Path... In the dialog that pops up you can select how to stroke the path, which tool to use, and many other options - try playing with it.
|An Illustration showing some examples of what can be achieved with different configurations for stroking the path|
Important Tips/Tricks for easier usage
Drawing Curved Paths
It's very annoying to have to draw a polygonal path and only then curve it. In fact, you can draw a curved path right away! This requires a bit of practice, but here is how it works: when you create a new point in Design mode, instead of clicking and releasing the mouse right away, click and drag. The handle will move to wherever you drag your mouse, and this way you can create curved paths directly while drawing them! A path looks smooth where it passes through a point if both of the point's handles look as if they lie on one straight line.
|An illustration showing when a path is smooth| Note that the two handles don't have to be symmetrical (with the point as the center of symmetry) for the point to look smooth. To make both handles symmetrical, hold the Shift key while dragging one handle and the other handle will become symmetrical!
Straightening line segments
If you want to draw a polygonal stroke, you can simply toggle the checkbox labeled "Polygonal" in the tool options tab (see the screenshot above). But if you want to straighten a single line in a curved stroke, it's a bit harder. A straight line segment is produced when the handle coming out of each of its points is at exactly the same location as the point itself - to achieve this, simply "delete" the handles (see the keyboard shortcut above) of the two points of that line segment.
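For readers who like to experiment from GIMP's scripting console, here is a rough Python-Fu sketch of the same ideas - anchors, handles, making a selection from a path, and stroking it. It assumes the GIMP 2.8/2.10 procedure database (gimp_vectors_new, gimp_vectors_stroke_new_from_points, gimp_image_select_item, gimp_edit_stroke_vectors); the coordinates and the two-anchor shape are invented for illustration, and gimp_edit_stroke_vectors simply strokes with the current context settings rather than the full Edit→Stroke Path... dialog.

```python
# Run from Filters -> Python-Fu -> Console (GIMP 2.8/2.10); a minimal sketch only.
from gimpfu import *

image = gimp.Image(400, 400, RGB)
layer = gimp.Layer(image, "background", 400, 400, RGB_IMAGE, 100, NORMAL_MODE)
image.add_layer(layer, 0)

# A path ("vectors" object) that will hold one stroke.
path = pdb.gimp_vectors_new(image, "brush tip")
pdb.gimp_image_insert_vectors(image, path, None, 0)

# Each anchor is described by 6 numbers: incoming handle (x, y),
# the anchor itself (x, y), and outgoing handle (x, y).
# A handle that sits exactly on its anchor gives a straight (polygonal) corner.
points = [
    150, 320, 150, 320, 230, 320,   # bottom anchor, outgoing handle pulled right
    120, 100, 200, 100, 200, 100,   # top anchor, incoming handle pulled left
]
pdb.gimp_vectors_stroke_new_from_points(path, 0, len(points), points, True)  # 0 = bezier, True = closed

# Selection from the path, then stroke the path onto the layer.
pdb.gimp_image_select_item(image, CHANNEL_OP_REPLACE, path)
pdb.gimp_edit_stroke_vectors(layer, path)

gimp.Display(image)
gimp.displays_flush()
```

Editing the six numbers per anchor is the scripted equivalent of dragging handles with the PathTool: move the outgoing handle away from its anchor and the following segment curves, put it back on the anchor and the segment straightens.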
Bird Brains and Spatial Degeneration We know that all species that move need to orient themselves within their surroundings before they can find their way. But surprisingly, we know little about how this happens. Studies have shown that features like trees or buildings—and geometry, as in distance and direction—play fundamental roles. In fact, all species studied to date have shown what researchers call “an obligatory encoding of geometry.” But does the type or amount of geometry encoded by the brain change over a lifespan? Research has shown that aging affects how the left and right sides of the brain work to complete tasks. As people age, their cognitive abilities decline, including their ability to orient themselves when moving around. Yet most research on this issue is limited to studies of stationary participants. As Canada Research Chair in Comparative Cognition, Dr. Debbie Kelly is examining the effects of aging on “hemispheric asymmetries” (relative functional differences between the left and right sides of the brain) using tasks we do every day. To do this, she is studying brain function in birds. Birds’ brains work similarly to those of humans in that they use different hemispheres to perform different tasks. In fact, it is easier to examine the two brain hemispheres independently of each other in birds than in humans. Kelly will learn how aging and ecology influence how birds encode and weigh spatial cues, and how the two brain hemispheres process that information. Ultimately, her research will combine the data she obtains from humans and animals to better understand spatial cognition, and may lead to developing a useful tool for the early detection of spatial degeneration disorders, such as Alzheimer’s disease.
Learning about fractions isn’t always easy, but who says it can’t be fun? One whole cow, what should she do? Moo while her friends paint one half blue! Prompted by a poem and a visual clue, students are asked to answer what fraction is illustrated in the cow’s antics, starting with halves and progressing into thirds, fourths, fifths, eighths, and tenths. An illustrated answer key is provided in the back of the book to help the child check their work. What fraction of the cow is blue? Answer: ½ What fraction of the cow is white? Answer: ½ With the math problem featured as part of the artwork, students get an immediate sense of how to apply and understand the concept of fractions. How moo-velous!
The Papahānaumokuākea Marine National Monument encompasses a vast area larger than all U.S. national parks combined. We came here to map the seafloor around the islands, atolls, reefs, and seamounts that compose the Northwestern Hawaiian Island chain within the monument. Ultimately, we want to gain a better understanding of the geological processes that helped shape this part of the world. As the ship cruises along at around 10 knots, we operate three different data collection systems. First, we have Falkor’s multibeam sonar that pings the seafloor with sound to give us a picture of what the ocean bottom looks like. Then, we have a gravimeter, which detects minute gravity variations that tell us what is beneath the seafloor surface. Finally, our magnetometer measures the Earth’s magnetic field along our path and provides information on the relative ages of seafloor features. On this cruise, we are using a Geometrics G-882 magnetometer provided by the University of Hawai‘i. It’s towed about 170 meters behind the ship in order to avoid magnetic interference from the metal vessel itself. The torpedo-shaped instrument glides along about 10 meters beneath the surface, logging the magnetic field intensity every tenth of a second. We can then subtract the Earth’s background magnetic field to create a map of local fluctuations, which we call a magnetic anomaly map; this gives us an idea of the relative ages of different portions of the seafloor. How It Works When oceanic crust forms in the fiery furnace of a volcanic mid-ocean ridge, its rocks contain tiny iron atoms that align themselves with the local magnetic field. As the rock cools to a solid, these atoms freeze in place, preserving a record of the direction and intensity of the magnetic field at that time. That recorded magnetization tends to fade over time, so the intensity of the signals we measure is one indicator of age. But, as it happens, the Earth’s outer core is a dynamic place, and this causes the north and south magnetic poles to reverse every few hundred thousand years. That history is now well established, so the direction of the magnetic field we detect at a given spot gives us another age indicator. When Navy ships combed the oceans with magnetometers in the 1950s looking for ways to detect submarines, they unexpectedly discovered alternating bands of magnetization in the seafloor, which was one of the strongest pieces of evidence that gave birth to the theory of plate tectonics. Reading the Hawaiian Islands Now that we know the plates move, we think we have a good idea how the Hawaiian Island chain formed. As described in our previous post about Midway Island, motion of the Pacific Plate over the Hawaiian hotspot results in a chain of islands that progressively sink to a fate as submarine mountains called seamounts. These Hawaiian seamounts coexist with other, older seamounts that pepper the bottom of the ocean. How can we tell them apart? Remember that when volcanic rocks solidify, they lock in the magnetic field of the time and place where they form. That means that we can tell at what latitude rocks formed based on their magnetic signature. Plus, the minerals within the rock alter over time, and the little iron atoms lose their alignment. So, as mentioned, the intensity of magnetization within the rock decreases as a result. Thus, we would expect to see lower magnetic anomalies with our magnetometer when we pass over older, non-Hawaiian seamounts compared with the younger Hawaiian ones.
The seafloor in this area is Cretaceous in age - about 80-100 million years old - so we’re trying to use magnetic data to discriminate between Cretaceous and Hawaiian seamounts, the latter being only about 5-45 million years old. As a geophysicist with mostly a seismology background, I am excited by this hands-on opportunity to learn about marine magnetics. I hope to fold this into my dissertation, which involves other geophysical exploration methods on the Earth and planets. Mahalo nui loa to everyone on the R/V Falkor team for a superb cruise so far.
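To make the anomaly calculation described above concrete, here is a minimal Python sketch. The readings and the single background value are made up for illustration; in a real survey the background (main) field comes from a reference model such as the IGRF and varies along the ship's track.

```python
import numpy as np

# Hypothetical total-field readings (in nanotesla) logged every tenth of a
# second by the towed magnetometer; values are invented for illustration.
measured_nT = np.array([36020.4, 36018.9, 36031.2, 36045.7, 36012.3])

# Simplified, constant background (main) field for the survey area. A real
# survey would subtract a position-dependent model value instead.
background_nT = 36000.0

# The magnetic anomaly is the measured field minus the background field.
anomaly_nT = measured_nT - background_nT
print(anomaly_nT)
```

Gridding these departures along the ship's track is what produces the magnetic anomaly map used to compare older and younger seamounts.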
- Teaching techniques for science teachers they also make it fun to teach scientific concepts and help students understand common some of the methods.
- Teaching english as a foreign language is challenging, yet rewarding career path to avoid some of these challenges, here are 10 common problems that teachers face in the classroom, and.
- Part ii overview of qualitative methods common qualitative methods including some where faculty members who participated in training were teaching and.
- 5 common techniques for helping struggling students teachers use various methods to meet the needs of all here are five common teaching methods 1.
- Teaching methods in common use, such as the lecture method in other methods of teaching such as demonstration- performance or guided discussion.
- Common core teaching and learning strategies when implementing common core the suggestions included in this document combine familiar methods and.
- 1 1 effective teaching methods at higher education level dr shahida sajjad assistant professor department of special education university of karachi.
- Teaching methods vary between instructors and will have different effects on different students on an individual basis.
- Teaching techniques suggested methods in teaching through total physical response i orientation to introduce and motivate the class you might: have a translator briefly explain the theory.
- Effective teachers are always on the prowl for new and exciting teaching strategies that will keep their students motivated and engaged here are a few that have been a staple in most.
- Take a look at the four of the most time-tested methods for teaching children music: orff, kodaly, suzuki, and dalcroze.
- Overview of english language teaching methods and theories a review of the best methods and learning techniques, including the communicative and modern methodologies.
- The common core is today’s new math confused and frustrated by these “new” ways of teaching to instruct their students in the new methods.
- Advice and information for parents of esl students on the topic of: language teaching methods.
- Teaching methods lecture-explaining resource people case study group discussion brainstorming & buzz instructional planning author: robert torres last modified by.
- What are the different teaching methods direct teaching method : this is the most common and widely accepted teaching methodthis works wonders in case of children in school and.
- The common core state standards initiative is widely denounced for imposing confusing, unhelpful experimental teaching methods following these methods, some have created problems that lack.
- The effective teacher will have developed a repertoire of evidence-based instructional strategies that can teaching methods and can be is common practice.
- Language pedagogy [definition needed] these teaching methods stress step progression based on question-and-answer sessions which begin with naming common.
- 5 highly effective teaching practices by rebecca alber february 27, 2015 i remember how, as a new teacher, i would attend a professional development and feel.
- Teaching methods - teaching methods - lecture, class discussion, small group discussion, videotapes strengths and limitations of different teaching methods.
- The most common type of collaborative method of teaching newer teaching methods may a popular teaching method that is being used by a vast majority.
- Foreign language teaching methods: some issues and new moves fernando cerezal sierra the different methods analysed in this section share a common.
- Second and foreign language teaching methods the natural approach and the communicative approach share a common theoretical and philosophical basethe natural.
- Tpr 6 most popular esl teaching methods the trend has been toward combining different methods and approaches what do kids and grammar have in common.
- Informal instruction, inquiry based learning, and cooperative learning are all common teaching methods the best teaching methods.
"I Wish I Had Blubber" Habitat Video: Florida Bay - Grade Level: - Third Grade-Seventh Grade - Aquatic Studies, Biology: Animals, Marine Biology - 45 mins - Group Size: - Up to 36 - indoors or outdoors - National/State Standards: - Next Generation Sunshine State Standards: SC.D2.2, S.C.D.1.2., SS.B.2.2, LA.A.2.2 - manatee, blubber, Survival OverviewStudents will be able to explain why manatees can not survive in cold water. Students will be able to explain the dangers of boat propellers to the Florida Bay sea grass habitat, and to the manatees. Students will describe four characteristics of the manatee. Florida Bay is home to the manatee. Manatees are an endangered species. "Deadly Waters" is a mystery novel that takes place in the Florida Bay. The book's plot is about manatees that are dying without an obvious explanation. Possible explanations: When water temperature falls below 68 degrees, the water can become deadly to the manatee because of their lack of insulating blubber. Boat propellers also pose a threat to the manatee. Boat propellers also destroy sea grasses of the Florida Bay. Sea grass, such as manatee sea grass, is a vital food source for the manatee. Watch the "Florida Bay" episode
Ex situ preservation involves the conservation of plants or animals in a situation removed from their normal habitat. It is used to refer to the collection and freezing in liquid nitrogen of animal genetic resources in the form of living semen, ova or embryos. It may also be the preservation of DNA segments in frozen blood or other tissues. Finally it may refer to captive breeding of wild plants or animals in zoos or other situations far removed from their indigenous environment. In situ conservation is the maintenance of live populations of animals in their adaptive environment or as close to it as is practically possible. For domestic species the conservation of live animals is normally taken to be synonymous with in situ conservation. Ex situ and in situ conservation are not mutually exclusive. Frozen animal genetic resources or captive live zoo populations can play an important role in the support of in situ programmes. The relative advantages and disadvantages of the major systems are therefore reviewed here with a view to identifying the relative strengths and areas of mutual support.
|    ||Ex Situ||In Situ|
|i.||COST - initial set-up cost||rel. high||low-high|
|   ||     - maintenance cost|| || |
|ii.||GENETIC DRIFT - initial||rel. high||low|
|iii.||Applied to all species||no||yes|
|vi.||International access||good||not good|
|ix.||Selection for use||none||good|
The relative cost of collecting, freezing and storing frozen material, as compared to maintaining large scale live populations, has been estimated to be very low (Smith, 1983). In particular, once the material has been collected, the cost of maintaining a cryogenic store is minimal. Such banks require little space and few trained technicians. A very large number of frozen animals from a large number of populations can be stored in a single facility. Cryogenically preserved populations suffer no genetic loss due to selection or drift. The method places a sample in suspended animation and that sample remains genetically identical from the time of collection to the time of use. (The effects of long term radiation are considered to be negligible.) Frozen animal genetic resources can be made available to livestock breeding and research programmes throughout the world. The principal disadvantages of ex situ, or cryogenic, preservation lie in the availability of the necessary technology and access to the frozen populations. Cryogenic stores are not expensive to run but they do have annual capital maintenance requirements. In particular they require a guaranteed supply of liquid nitrogen which must be imported into many countries with expensive foreign currency or aid. Cryogenic stores have no intrinsic value with respect to financial income unless material can be sold for research and development. They do not produce food or other agricultural commodities and might therefore be deemed to be expensive luxuries in periods of financial austerity. Cryogenic storage is ideal for the preservation of defined ‘genes’ or recognized characteristics. Quite small samples ensure the inclusion of all but the rarest genes (see Table 7, section 4.3.1). However, the cryogenic method is less effective in the conservation of ‘breeds’ where the relative frequency of genes is important. The methods of initially sampling and collecting genetic material from a limited number of animals to be incorporated into cryogenic storage can result in an initial genetic drift.
Thus there is a shift in gene frequencies between the original population and the cryogenically conserved sample population. The technology necessary for semen collection and freezing, and for superovulation, ova and embryo flushing and freezing, is readily transferred throughout the world; however, it is expensive for countries in which the technology is not yet established. The technologies are not yet developed for all species: viable pig and poultry embryos, for example, cannot currently be successfully thawed after freezing. There are also instances where the technologies may be developed but the livestock themselves are not accessible for semen or embryo collection, for reasons of politics, ownership or their remote location. There is a potential danger in cryogenic storage of large scale loss of material due to serious accidents. This could be due to human error, power failure, loss of liquid nitrogen, fire, flood, storm, earthquake or war. Such risks can be reduced by keeping duplicate stores in different regions, but they remain a serious concern. Linked to the danger of global loss of cryogenic material due to accident is the danger that regions or nations might lose access to the material. This could be due to their failure to develop or maintain the technological ability to access the frozen stores. There is also the fear of political change which might affect the rights of access to global or regional banks. Cryogenically preserved populations cannot be studied, characterized or monitored. They are not easily available for comparative trials or for research projects. It takes a number of years to regenerate a cryogenically preserved population to review or re-evaluate it in changed circumstances, or to utilize it as a breeding population. Cryogenically preserved populations are not able to adapt, through gradual selection, to changes in the climate or disease background of the local or global environment. Finally, animal disease control measures in the future could make frozen material laid down in a relatively disease-prevalent period too dangerous to use. This is already a problem with European semen banks, which have been collected under a number of different health regulations and testing regimes over the past fifty years. The majority of these stores will be deemed unsuitable for use in Europe under new European Community animal health directives to be implemented in 1992 (National Cattle Breeders Association, 1991). The major advantages of in situ conservation relate to the availability of technologies and the utilization of the breeds. The in situ conservation of live populations requires no advanced technology. There are optimal sampling strategies (see section 4.3.1) and breeding strategies (see section 4.4), but the basic needs of an in situ programme are already available and affordable throughout the world. The farmers of every region and nation know how to manage and maintain their local strains. They already have the capability; all they require is direction. In situ projects can ensure that financial commitment to the conservation of animal genetic resources involves helping to improve the livelihood of farming communities associated with the breeds targeted for conservation. Live conservation projects involve animal utilization and are net producers of food, fibre and draught power (see Table 6). They do not require the importation of expensive materials, skills or equipment.
Live conservation programmes may survive major political or environmental upheaval, wars, or climatic disasters that could eliminate frozen stores, especially those needing imported liquid nitrogen. Sufficient numbers of breeding units must be established and maintained, however, for each conserved population. In situ projects enable breeds to be properly characterized and evaluated in their own and related localities. They allow for comparative trials, research and crossing experiments. This method of conservation also allows populations to adapt to changing environmental conditions and endemic diseases. The maintenance of live herds allows for selection and improvement of populations within the sustainable constraints which will be discussed later (see section 4.3.2). The disadvantages of in situ conservation are brought about by a lack of complete control over the many factors which influence the survival of individuals and therefore the genetic makeup of the conserved population. In situ conservation projects require land and people, which are limited resources in some regions of the world. Continuation of all conservation projects is dependent upon unpredictable financial and political change, particularly if they are government or institutionally run. They do have the capacity to produce agricultural commodities and sell livestock to supplement their budgets (see Table 6). Genetic drift is an inevitable feature of all live animal conservation projects, even when steps are taken to minimize the problem. Selection, and the resultant shift in the gene frequencies within a population, is a real possibility, and may even be a legitimate objective of some programmes. Selection is a particular concern when it is applied to populations being maintained under modified environmental conditions and should only be made within locally sustainable conditions (see section 4.3.2). In situ conservation incurs the possible threat of disease eliminating whole, or substantial parts, of a conserved population, particularly if the conserved herd is in a single or only a few linked locations. Diseases may also act as a major selection pressure within a population, and may substantially change its characteristics. Finally, live animal conservation programmes do not assist in the easy international transfer of animal genetic resources as compared to the movement of frozen material. Moving live animals is relatively more expensive and there are international restrictions on the movement of animals to control disease. Cryogenic methods allow animal genetic resource material to be suspended, unchanged, for long periods of time. Live conservation efforts enable breeds to be properly evaluated, monitored and used in the present changing agro-economic climate as well as being available for future farmers and livestock breeders. The two strategies are not mutually exclusive and should be considered as complementary strategies which may be easily and beneficially linked.
Table 6. Economic Production and Recreational Uses Arising From Live In Situ Conservation Projects (after Maijala, 1986). The uses listed include:
- Direct production of food
- Dams for crossing for meat
- Production of furs
- Production of wool/fibres
- Use in prison, school and hospital farms
- Pasture and lawn management
- Utilization of harsh environments
- Utilization of marginal areas
- Sera for research
- Non-allergic milk production
- Veterinary or medical research
- Education, sport and leisure: aid to education; sport and leisure
Collecting and freezing of semen is far simpler in most species than collecting and freezing of embryos. Recent developments in the technology to mature ova from the ovaries of slaughtered females have produced a relatively cheap and easy method for the collection of haploid cells from females to parallel the collection of sperm. It is likely that this technique will become increasingly useful as the methods become more widely available. Maintenance of semen alone does not allow for the recreation of a pure breed, but only for ‘breeding back’ through a crossing programme. However, by using a large sample of semen from different males alongside a relatively small population of live females, it is possible to maintain an entire population of animals. The use of artificial insemination (AI) in the in situ conservation of populations enables a much larger number of males to be used in the breeding programme than would be practical if they all had to be maintained as live adult males. This automatically increases the effective population size (Ne) and reduces the minimum total number of live animals that must be maintained to produce an acceptable level of inbreeding and genetic drift (Smith, 1977). This strategy could be particularly useful in species or varieties where the technology for collecting and freezing of ova or embryos is not well developed or available, for example with pigs and poultry; or for endangered species where no alternative host for preserved embryos could be used, as is the case with the Indian elephant, for example. Use and replenishment of frozen semen collections alongside a live population will enable breeds to adapt to gradual and permanent changes to the environment. It will allow for the changes necessary to respond to a background of disease and parasites which will gradually mutate over time. It will also allow for current and accurate data to be collected and maintained on breeds. Over time disease control, nutritional knowledge and veterinary care will improve. It is therefore important that breeds are monitored in these changed circumstances and are not continually judged by their production characteristics as measured in less developed situations in the increasingly distant past. The conservation of endangered species, breeds or populations is an attempt to maintain genetic resources in an identifiable and potentially usable form. Endangered species must be conserved in separate species units because it is not possible to out-cross or pool different species. Endangered breeds of the common species, however, may be maintained as separate ‘breeds’ or may be combined or pooled into groups of breeds or composites for the purposes of conservation. The advantage of conserving a distinct separate ‘breed’ is that it has a defined set of characteristics and parameters.
Its appearance, behaviour, production, and native environment should all be known or can be determined. A breed represents a group of animals with a known range of genetic variation with predictable and characteristic effects. Such a population can be screened for undocumented characteristics in the future and desirable genes can be accessed through conventional breeding techniques or genetic engineering. The disadvantage of conserving ‘breeds’ separately is that there are a very large number of them, and that many have very similar characteristics. The conservation of genetic variation in a gene pool or breed composite requires fewer resources than individual breed conservation. Many breeds may be combined into a gene pool of a size considerably smaller than that required for separate programmes. There are, however, a number of serious disadvantages with gene pool conservation. A well-described breed is by definition predictable in its appearance and production, while a gene pool or composite population is not predictable in the expression of those characteristics. It may be impossible to identify animals carrying a specific gene within a composite, because expression of the gene may become masked by alternative alleles found in the other breeds in the composite. This may be the case even if the presence of the gene in the pool is known, because it was a feature of one of the breeds included in the original breed mixture. Valued genetic characteristics may be caused by the interaction of a number of genes always found within one breed. In a gene pool such complementary genes may be separated, resulting in the disappearance of the valued characteristic within the composite. This could be the case with some forms of parasite resistance, for example, where a physiological adaptation might be linked to dietary preferences or behavioural characteristics. Thus there may be unpredictable genetic interactions between breeds resulting in the disappearance of expected characteristics or the appearance of new unexpected ones. It is generally believed that a composite conservation population may be considerably smaller than the total size of separate breed programmes for each of the breeds maintained independently. Such a strategy may result in a larger number of the pre-existing genes being lost over time through drift. Gene pools or composites can be used effectively to conserve genes that affect obvious morphological features which can be easily identified, for example colour, or extreme quantitative traits such as the prolific Booroola gene associated with litter size in sheep. However, such individual traits can be equally well preserved in many species by cryogenic techniques. Pooling of breeds should never be considered until the separate breeds or identified populations have been properly characterized. The gene pool is not a useful strategy for the conservation of very varied populations. It may be used for the conservation and selection of a number of closely related breeds with economically important traits, whose physiology and adaptive characteristics are similar. For example, there are four recognized breeds of goat in the desert region of North Eastern Brazil. Each comes from a slightly different area and is distinguishable by its colour pattern. The Moxoto are light cream with black points, the Rapartida are cream with dark forequarters, the Caninde are black with a yellow belly and the Morota or Curaca are solid white in colour.
However, besides the name and colour differences, initial research suggests that their environment, size, growth rate, production and survivability are very similar (Mariante, 1991). If this is confirmed, a more effective conservation, selection and improvement programme could be developed by pooling the breeds rather than maintaining separate strains. When gene pools are deemed to be beneficial, no more than two or three populations should be combined in order to keep the frequency of most of the alleles at a useful level. Before sampling can begin, clear objectives must be defined for the programme. In particular, consideration must be given to whether the programme is to conserve unique genes within the population or the breed itself (see section 3.2). a. Sample Size As a general rule the larger the sample or founder group, the greater the range of genetic variation that will be incorporated into the conservation programme. Where the conservation herd is to act as a nucleus which will interact with other farm or village herds, the sample may not be finite. In this case exchange of genetic material will be possible between herds in the future. In breeds where the conservation herd is likely to be all that will survive of the breed, it is essential that as many founders as possible are included. In this case no more diversity can be maintained than is included in the initial sample. Relatively few unrelated individuals can represent a considerable genetic diversity. The chance that a population sample of size N will not contain a gene whose frequency in the population is p may be expressed as (1 - p)^(2N). Thus in a reasonably sized sample there is a good chance that all the available genes will be included unless they were at a very low frequency in the original breeding population. A sample of 25 males and 50 females is recommended as a minimum for a live conservation programme. This has been calculated to result in a loss of less than 1% of the possible genetic variation present in the original population (Smith, 1983). It has also been shown that even quite small founder groups of less than 10 females can survive and produce viable living populations. It is possible to ensure the survival of almost all the genetic variation present in such small founder groups with a carefully planned breeding strategy (see 4.4). In such cases it is advisable to avoid future or frequent bottlenecks in the population size, as this will inevitably result in a dramatic reduction in genetic variation and possible extinction. The optimal strategy for conservation is then to increase the population size as rapidly as possible. The number of sires that must be sampled to reduce the probability of an allele with frequency p being excluded from a sample semen store to below 0.01 or 0.001 depends on that frequency (after Notter and Foose, 1987). Once a population has reached its holding size, if one is to be imposed, it is important to design a programme to minimize selection, inbreeding and drift so as to maximize the survival of the genetic diversity found within that population, along the same lines as for the conservation of any small population (see section 4.4). b. Statistical Sampling Techniques Methods of sampling fall into three major categories: random, stratified and maximum avoidance techniques.
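As a quick check of the sample-size formula given earlier in this section, here is a minimal Python sketch. The allele frequencies are arbitrary examples; the 75 animals correspond to the recommended minimum of 25 males and 50 females.

```python
# Probability that an allele at frequency p in the population is entirely
# missed by a random sample of n diploid animals: (1 - p) ** (2 * n).
def prob_allele_missed(p, n_animals):
    return (1.0 - p) ** (2 * n_animals)

# Recommended minimum founder group: 25 males + 50 females = 75 animals.
for p in (0.05, 0.01, 0.005):
    print(f"p = {p}: chance of missing the allele = {prob_allele_missed(p, 75):.3f}")
```

The numbers illustrate the point made in the text: only alleles that were already very rare in the original population have an appreciable chance of being left out of a founder group of this size.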
A random sample is one in which every animal has the same chance of being selected for the sample as every other animal. By definition the sampler has no control over the specific animals selected and can make no judgements about typical or atypical animals being included or excluded from the sample. Relatively small samples collected in this way from populations in which there is a lot of genetic diversity may result in a shift in gene frequency between the initial population and the sample population, due to chance. By dividing a population into strata, or groups of similar animals, and then sampling the various groups at random, it is possible to ensure a reduced shift in genetic makeup between the sample and the original population. In situations where the pedigree of the animals within the population is known, it is possible to create a sample which represents the largest possible number of ancestral or founder animals. In this case animals will be selected for inclusion in the sample because they share no common ancestors. No common ancestors is normally taken to mean no common ancestors as parents, grandparents or great grandparents. c. Practical Sampling Techniques The principal objective in sampling a population to create a conservation unit is to attempt to include as much of the genetic variation inherent within that population as possible. Thus animals should be selected from throughout the breed's normal geographical range and should incorporate all the characteristics normally associated with the breed. Furthermore, when breeding records are available, closely related animals should be avoided in order to make room for unrelated individuals. The major problems associated with sampling a breed in order to establish any kind of conservation programme are: availability of suitable stock due to disease, political restrictions and ownership; conformity to breed description; and degree of dilution by other breeds. Not all groups of animals that constitute a breed or identified population are equally available for sampling for inclusion in a conservation programme. Some geographical regions or individual herds may have endemic diseases not found on the conservation farms or co-operating regions. Some herd owners may not be willing or able to participate in supplying animals to a conservation programme. Finally, part of a population may be situated on the other side of a political boundary, making access difficult or even impossible. In each of these cases the subpopulations excluded from the programme should be very carefully considered with respect to their potential contribution to the programme. If possible it should be determined how different the subpopulation is from the available one by careful comparison of: the environment, which may be more or less extreme; visible phenotypic differences; simple physiological differences, such as blood types or milk proteins; or differences in the structure of the chromosomes or DNA fragments. Any of these studies might suggest that the subpopulation represents a unique and important subsection of the population. If this is not the case the subpopulation may be omitted from the conservation sample. However, if they are deemed to be important, action may need to be taken to establish disease control or parallel conservation efforts. A pure conservation scientist will look at a conservation programme in terms of maintaining the maximum genetic variation possible within the conservation flock or herd.
This conflicts to some extent with the value of conserving a breed with its predictable and known set of characteristics in a known environment. Sampling, for a programme whose objective is ‘breed’ conservation, should reflect this breed description, and samples should be taken from within the known parameters of the breed. Individuals exhibiting extreme characteristics might be included in a frozen store with the appropriate information in the associated database. Animals on the fringe of the breed parameters will often be the result of cross breeding with exotic or introduced breeds in the relatively recent past. There can be serious practical problems associated with historically well defined breeds which have been extensively diluted. In many cases a limited number of animals may remain which exhibit some of the characteristics which were typical of the old breed. In this case it may be possible and desirable to define parameters for the conservation group, increase the population size as rapidly as possible, and then select from within the group to eliminate the ‘foreign influence’ and re-establish a new population which exhibits as much as possible of the original breed characteristics. For small populations selection should be imposed on males only, to reduce the risk of inbreeding. This strategy has been employed with the Cikta sheep in Hungary (Bodo, 1984) and similar programmes have been developed for the Navajo-Churro in the USA (McNeale, 1970). It is often said that within the conservation of small populations no selection pressure should be imposed, because it would reduce the levels of genetic diversity intrinsic within the population. In practice this is an impossible restriction to place on any conservation programme. Selection at some level will inevitably occur in all live conservation programmes and is essential in order to maintain the characteristics of the population. Wild species are maintained by natural selection, and it is recognized that in order for these species to have a good long-term chance of survival, natural selection must be allowed to continue to act upon them. One of the problems of conserving wild species in captivity is that there will be drift in the genetic makeup of the population due to the lifting of natural selection pressures. For example, albinism is caused by a rare recessive gene found in many populations of mammals. Because the gene is rare in the population, individuals exhibiting the albino characteristics are extremely rare. They also stand a much lower chance of surviving into adulthood and reproducing because they are much more likely to be predated than their camouflaged fellows. This fact ensures that the gene remains rare. In a captive situation where there are no predators, an albino individual is far more likely to survive and to reproduce, thereby passing its albino genes on to the next generation. The frequency of the albino gene will therefore increase within the population and consequently so will the chance of albino individuals appearing in future generations. This is a very obvious and visible example of a shift in genetic makeup due to the lifting of natural selection pressure, but it demonstrates the subtle shifts in genetic characteristics which may occur in populations protected from normal selection pressures. The selection pressures imposed by the environment and by man that have created domestic breeds and populations are equally important and are discussed in chapter 2 of this manual.
In the case of naturally selected characteristics, it is just as important for domestic populations as it is for wild species to be able to continue to exist and adapt within their normal environment. For example, breeds adapted to climatic extremes, unusual diets or heavy parasite or disease infestation should continue to exist under this selection pressure. Animal welfare issues should clearly be taken into account. Allowing natural selection to work does not imply non-intervention and does not require that non-adapted animals be left to slowly die. It does provide the opportunity for the selective culling of those individuals which are clearly not functioning as well as would be expected in the breed's normal environment. Selection within breeds by man is not fundamentally different from the approach to natural selection described above. In projects established to conserve unique genes or characteristics, the appearance of atypical animals may be considered beneficial. If a population has been developed for its own particular production characteristics, for example its wool, milk or draught power, and the objective is ‘breed’ conservation, the population should be conserved with these breed-specific characteristics. Thus a limited amount of selection should be an integral part of breed conservation. This selection should be targeted at maintaining the known characteristics and parameters of the breed. It should not be used to reduce the genetic diversity found within a breed being conserved in a small population, but rather to limit the effects of individual outstanding or unusual animals and to prevent traits previously alien to the breed becoming common. The objective should be to conserve an identified group of animals with known parameters. This should not be conservation of just a colour pattern or horn shape, or the conservation of a breed name attached to a herd which has long ago lost the characteristics for which the original breed was valued. The important general features of such selection are: Selection should not be carried out in very small populations where inbreeding may be a problem. The population should first be allowed to increase in size (see 3.3.4). Most livestock breeding programmes involve the use of more females than males. Selection may be imposed on males whilst maintaining the influence of as many founders as possible through the unselected females. Selection should be carried out within the adaptive environment and should be against characteristics which prevent the animal from functioning well in that environment, or from exhibiting the production characteristics typical of the breed. Selection for or against features relatively common within a breed should be considered very carefully. Advantageous characteristics may be positively selected in the context of conservation, but this should be done in larger programmes as described in section 4.6 of this manual. Selection against so-called ‘undesirable’ characteristics common to a breed should only be carried out once the real effects and interactions of these characteristics are known. Congenital splitting of the upper eyelid in multihorned breeds of sheep, for example, has been shown to be closely linked to the genes causing the development of the impressive four horns. Selection against the eyelid condition, which has no selective disadvantage in the natural environment of these sheep, resulted in a dramatic reduction in the frequency of four-horned individuals within the UK population (Henson, 1981).
Similarly, selection was carried out among the seaweed-eating sheep of the Orkney Islands to remove monorchid ram lambs, which were found to be very common. This selection was done without first identifying whether the condition was historically common, why it was so prevalent within the population and whether it was linked in any way to the breed's remarkable ability to survive on a diet of the seaweed Laminaria. This breed is considered in more detail in 5.4.2. A similar type of selection has been reported in the Ethiopian programme to supply drought-resistant bulls of the indigenous cattle to the devastated regions of Eritrea. These cattle are the only animals that will survive in the region and the project is an excellent one. However, one of the characteristics used to select bulls for redistribution was ‘a straight back’ (Relief Society of Tigray, 1986). Although a straight back is a feature used to select European cattle, it is not a characteristic known to be linked to the ability to survive in extreme drought conditions. Selection, if it is to exist in conservation herds, must therefore be justifiable with respect to the important and locally valued features of the breed. Inbreeding is the mating of closely related animals and results in an increase in homozygosity. It reduces the amount of genetic variation within the population as compared to an outbred population. The chance that closely related animals carry the same mis-copied, non-functional or deleterious pieces of DNA inherited from a common ancestor is quite high. Inbreeding will tend to result in more homozygous animals which have inherited two copies of the less efficient gene. For this reason the general effects of continuous inbreeding are seen as a reduction in fertility and viability, particularly with respect to survival after birth and growth rate to weaning (Falconer, 1981; Lasley, 1978; Warwick, 1979; Wright, 1977). Conversely, mild inbreeding combined with intensive selection can be used to improve livestock breeds so that superior animals with more effective genetic characteristics have more influence over future generations than inferior ones. In many important developed breeds more than 80 or 90% of the population can trace their pedigrees back to one or two superior individual animals. This selection combined with low-level continuous inbreeding concentrates the desired genes and enables deleterious genetic characteristics to be selected out. This method of continuous low-level inbreeding and elimination of deleterious genes has resulted in domestic populations that can withstand much higher levels of inbreeding than wild species, which are not normally exposed to inbreeding pressures (Frankel & Soule, 1981). In small populations inbreeding can be controlled by careful planning of the breeding strategy but it remains a function of small population size. In situations of very small populations where close inbreeding is the only option, it is better to mate brothers and sisters than parents to offspring. The inbreeding coefficient is the same in both cases, but sib matings help to equalize the genetic contribution to the next generation from the two parent lines. In this situation the objective is to carry as much genetic variation from the founder group into the next generation as is possible (see section 4.4). Within small populations inbreeding is a function of the population size, because the chance that any two individuals mated together will be related, or share common ancestors, is increased.
If a strategy of random breeding is assumed within a population, it is possible to estimate the rate of inbreeding, which will vary according to the number of breeding animals in the population. The rate of inbreeding (ΔF) in a small population is calculated as ΔF = 1/(2Ne), where Ne is the effective population size. The effective population size is affected by the ratio of males to females, longevity, and variance in family size (see appendix 3.2). In turn, the rate of inbreeding reflects the drift in genetic variation within the population. Genetic drift in a small population is the loss of genetic variation through random chance. There may be a number of different DNA options at one address on the chromosome which code for a number of different possible phenotypic characteristics, for example blue, brown or green eyes. If there is no selection pressure, the likelihood that any one option will be passed on to the next generation is affected by random chance. In a very small population this may result in the frequency of the options ‘drifting’, by chance, until they become fixed at a frequency of one or zero in the population. The percentage of genetic diversity conserved over time decreases rapidly with smaller population sizes (see fig. 2). Thus the amount of heterozygosity or genetic variation present in each generation begins to decrease at an accelerated rate once the effective population size (Ne) falls below 100. The level at which a small population conservation programme can be established is determined primarily by the rate of inbreeding, or the rate of loss of genetic diversity, which is considered to be acceptable over a specified period of time. An absolute minimum effective population size of 50 is considered necessary for the survival of zoo populations of wild species where breeding strategies can be very closely controlled (Frankel & Soule, 1981). The ratio of males to females is very important in the calculation of the minimum number of animals needed to conserve a population. This is based on the effective population size, or the number of animals contributing genetic material to the next generation (see section 3.3.2). Small effective population sizes result in an increase in inbreeding, which results in a loss of heterozygosity or genetic diversity (see section 4.3.3). Programmes to conserve endangered domestic animals have been proposed that would result in inbreeding rates of between 1 and 4% per generation. It is possible to maintain inbreeding rates at this level with populations of between 12–25 males and 100–250 females. It has been estimated (Smith, 1984) that the following minimum numbers of animals are required for the conservation by management of endangered breeds of the common agricultural species. These estimates take into account the number of males and females in the breeding unit and the number of young replacement males and females joining the population each year. They may be taken as the absolute minimum for the maintenance of a conservation herd and require carefully planned breeding programmes. Calculations of minimum population size and loss of genetic variation are theoretical, and Wright's inbreeding coefficient, central to all the calculations, does not always correlate with the observable effects in particular populations. This observable difference in real populations is due to the actual number of common ancestors in the group and the severity of the particular deleterious genes they happen to carry.
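A minimal Python sketch of the two quantities discussed above follows. The sex-ratio formula Ne = 4*Nm*Nf/(Nm + Nf) is the standard Wright approximation and is an assumption here (the text only states that Ne depends on the sex ratio, longevity and family-size variance); the example herd sizes are taken from the 12–25 males and 100–250 females quoted above.

```python
# Effective population size from the numbers of breeding males and females,
# using the standard sex-ratio approximation Ne = 4*Nm*Nf / (Nm + Nf).
# (This ignores the longevity and family-size effects mentioned in the text.)
def effective_population_size(n_males, n_females):
    return 4.0 * n_males * n_females / (n_males + n_females)

# Rate of inbreeding per generation, Delta F = 1 / (2 * Ne), as given above.
def inbreeding_rate(ne):
    return 1.0 / (2.0 * ne)

for males, females in [(12, 100), (25, 250)]:
    ne = effective_population_size(males, females)
    print(f"{males} males, {females} females -> Ne = {ne:.0f}, "
          f"dF = {100 * inbreeding_rate(ne):.2f}% per generation")
```

Under these simplified assumptions the example herds fall towards the lower end of the 1–4% per generation range quoted above; unequal family sizes and overlapping generations in real programmes push the rate higher.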
The American Association of Zoological Parks and Aquaria ‘Species Survival Plans’ for endangered species similarly incorporate minimum population sizes with a breeding strategy which requires the replacement of individual animals by their offspring in a controlled and planned way, in order to maximize the use of limited animal spaces in the zoo programme.

|No. males||No. females||Total||Ne||% increase in inbreeding/generation assuming random mating|
|(After Brem, FAO, 1989)|
|Size of breeding unit||10||26||22||60||44||44||72||72|
|No. of breeding animals entering each year||10||5||22||12||44||18||72||72|

In these programmes an effective population size of 500 is considered to be an absolute minimum for the long-term survival of a conserved population. This may be obtained theoretically with an actual number of only 250 animals if the sex ratio is held constant at one to one and every animal contributes equally to the next generation (Franklin, 1980).

- Begin with an adequately sized sample of animals, which should ideally be unrelated, non-inbred and fertile. They should represent the range of genetic types found within the population. If possible a sample of at least 50 males and 50 females should be included.
- Expand the population as rapidly as possible, to a minimum effective population size of 500 animals (see section 4.3.5).
- Standardize longevity.
- Equalize the representation of the founders (i.e. the animals in the original sample). It is important that as many of these founder animals as possible are represented in each generation.
- Manage inbreeding; in most cases the best strategy is to keep inbreeding to a minimum. There are situations in sublined populations where alternative strategies might be chosen (see appendix 5.4).
- Subdividing the population may be a useful option (see section 4.5.5). In particular this strategy may help to control the possible spread of disease between conservation herds.

The three principal methods for establishing a conservation breeding programme for small populations are based upon natural breeding, random mating and pedigree breeding. The natural breeding strategy adopted by wildlife conservation programmes involves ensuring that sufficient animals exist in the conservation area to allow normal mating structures to exist. Thus territorial or harem behaviour is allowed to proceed as it would in a normal wild population, such that the strongest and best adapted males mate with the majority of the females. Intervention may take the form of removing older males after one or two breeding seasons to ensure that younger individuals have the opportunity to reproduce. Action may also be taken to transport males from one ‘island’ reserve to another in situations where wildlife sanctuaries are divided by inhospitable or impassable man-made obstacles. This strategy is also used in the conservation of feral or very extensively managed domestic populations. The conservation of Ossabaw Island hogs on Ossabaw Island in the USA (Brisbin, 1985) and of the primitive ancestral flock of Soay sheep on the island of Hirta (Jewell, 1974) are good examples of this method. These strategies require larger minimum population sizes than those discussed in 4.3 above, and should only be considered for populations of many hundreds of individuals in situations where most of the males will be allowed to remain with the female herds.
The ancient wild cattle of Chillingham have survived for seven hundred years with a very limited population size and a natural breeding structure, although there is evidence, from blood-type examination, of considerable homozygosity within the breed (Henson, 1983).

The random mating system is designed to ensure that each adult animal has an equal chance of leaving an equal number of progeny. Statistically, the progeny numbers fall into a Poisson distribution. Models can be developed to select mates at random. This method has been used successfully for small populations of poultry, where 40 pairs of parents per generation have been calculated as acceptable. In practice, Professor Crawford in Canada has maintained 17 middle-level poultry stocks using a random mating strategy for up to 24 generations. The populations are 11 hen lines, 1 turkey, 1 guinea fowl, 1 duck, 1 Muscovy duck and 2 goose lines. No significant reduction in fertility or hatchability, which might be associated with inbreeding, has been observed. All birds are bred at one year of age to minimize the effects of selective mortality. The principal problems encountered with this system of random mating have been the variation between adults or between families in a number of survival characteristics. All of these variables can quickly skew the distribution and accelerate genetic drift. This method of breed conservation is ideally suited to groups or herds of animals which can be easily identified as a group and where matings can be controlled between groups, but where individual identification of animals is not practical.

By monitoring the pedigree of animals in a conservation breeding programme it is possible to ensure that each animal or family contributes equally to the next generation. Within a population of fixed size, each male can then be sure to contribute one son to the next generation and each female one daughter. In wildlife conservation programmes for zoo animals an effective population size of 500 is the target for long-term conservation of zoo populations. This can be achieved with an actual population of only 250 animals if the sex ratio is held constant at one to one and each animal contributes equally to the next generation (Franklin, 1980). Each animal produces at least two litters by two different mates. No more than one offspring will be selected for the breeding programme from each litter. Each male will be replaced in the breeding population by one son, each female by one daughter (Foose, 1983). If carried out correctly this strategy confers negligible genetic loss over an extended period of time. The problems are practical ones. Not all animals breed equally well in captivity and some will not mate with their selected partners. There is also a problem with the public's perception of conservation, which is not compatible with the elimination of individuals born in a litter but not needed for the breeding programme. This is particularly difficult in zoos, where the public constitutes a major source of funding. It is a less serious problem in the conservation of agricultural animals, where the surplus can be used for human consumption in the normal way. The basic structure of random mating within pedigree lines has been used effectively for domestic breeds and in particular for poultry (see table 11).
This system involves considerably more time, more expensive equipment in terms of individual cages, and more skilled technicians for artificial insemination than the basic random flock system described in 4.5.2 above. However, it should result in lower inbreeding coefficients and far less genetic drift than the basic random system. In practice the system has shown lower fertility than the flock system due to technical and practical problems with the use of artificial insemination. It has also brought to light a serious infertility problem in one female line of one of the breeds which was masked in the random flock system (Crawford, 1989). Pedigree systems are more effective in monitoring inbreeding, and therefore loss of genetic variation, over long generation intervals. They also supply important relationship information to help monitor genetic diseases or defects that may occur.

The random breeding strategy described above helps to ensure that the chance of variation at any one genetic locus being lost is kept to a minimum. However, once pedigree records are available the optimal strategy to minimize inbreeding from one generation to the next is the maximum avoidance method. This involves selecting mates which are the least related to one another. It is particularly effective in maintaining low inbreeding coefficients, and thus high levels of genetic variation, in very small populations. In such cases it should be used in conjunction with an overall strategy to increase population size as rapidly as possible. Once groups are larger, a more effective method of maintaining maximum genetic variation over a longer period of time may be to sub-divide the population and use a cyclic breeding system.

When a population is divided into separate sublines, each of the sublines will, by definition, be smaller than the original group. Each group will then have a higher rate of inbreeding, and a higher rate of loss of genetic variation due to drift, than the total population as a whole. However, the random chance that the same genetic variation will be lost in all the sublines is very low. It is, therefore, postulated that a population can successfully be divided into sublines. A maximum avoidance breeding strategy can then be adopted within each line. Periodically there can be an exchange of males, probably in a cyclic rotation between the sublines. In this way any degree of dividing into sublines will ultimately reduce the final rate of decline of heterozygosity. It has been postulated that a practical method for achieving this might be to inbreed within subdivided lines for 8 to 10 generations and then outcross between populations. The important feature of all these proposals remains the actual size of the population and therefore the rate of genetic loss, and the level of inbreeding the population can withstand. There are also practical constraints associated with unequal sex ratios and the overlap of generations (see appendix 5.4). The strategy results in the conservation of a number of inbred lines with very little genetic variation within each one, but assumes that the total variation will survive in the total population of sublines. However, inbred lines are often inferior to non-inbred ones in terms of resistance to disease, reproductive success and lifespan.
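The subline scheme sketched above (a maximum avoidance strategy within each line, with males exchanged periodically in a cyclic rotation) can be written down as a simple schedule. The sketch below is illustrative only; the number of sublines, the subline names and the number of exchanges are hypothetical and not values given in this manual.

```python
# Minimal sketch of the cyclic male-rotation scheme described above.
# At each exchange, subline i sends its breeding males to subline (i + 1) mod k,
# so genes introduced into any one subline circulate around the ring and reach
# every other subline within k - 1 exchanges.

def rotation_schedule(sublines, n_exchanges):
    """Yield (exchange_no, donor, recipient) tuples for a simple cyclic rotation."""
    k = len(sublines)
    for exchange in range(1, n_exchanges + 1):
        for i, donor in enumerate(sublines):
            yield exchange, donor, sublines[(i + 1) % k]

if __name__ == "__main__":
    lines = ["A", "B", "C", "D"]  # hypothetical subline labels
    for exchange, donor, recipient in rotation_schedule(lines, n_exchanges=2):
        print(f"exchange {exchange}: males from subline {donor} -> subline {recipient}")
```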
The strategy has been postulated for zoo populations, where the method of maintaining the entire captive population under a minimum inbreeding programme has been compared to alternating mild inbreeding within zoos with outcrossing between zoo populations. The latter strategy has the advantage of less movement of animals, which helps to lower costs and benefits animal welfare and the control of disease between the various breeding groups (see appendix 5.4). Dividing into sublines is not recommended for very small populations, where the size of the sublines would be so small that the inbreeding coefficients would rise too quickly and jeopardize the survival of the sublines through reductions in fertility and viability of offspring. In practice, sub-dividing methods have been used in the conservation of the Hungarian Grey Cattle in Hungary and the Lipizzan Horse in Austria (Bodo & Pataki, 1984).

The most effective method of conservation is that which involves the full utilization of a breed. There are many instances where breeds have become endangered, but through proper characterization and evaluation a role has been identified. In some cases the new role has been in another country; for example, the prolific Finnish Landrace sheep are now more numerous outside Finland than within the source country. In other cases it has been a regenerated home market resulting from proper research and improvement of a locally adapted breed. For example, the Criollo cattle of Bolivia were being systematically replaced by Zebu crosses. Research into the real production characteristics and survivability of the breed has resulted in a renewed interest in the Criollo, which has a long-term commercial role in the cattle industry of that country (Wilkins, 1984). Conservation and utilization programmes require careful and accurate evaluation of breeds in their own local situation as compared to exotic breeds and their crosses under the same sustainable conditions. Where breeds emerge as having a real local potential they should be incorporated into proper evaluation and improvement programmes, either on station or in the field situation.

The most important factor in all improvement of conservation groups is that selection is carried out in the environment to which the breed is adapted and that it is for traditionally valued characteristics. These characteristics might be foraging ability, mothering ability, ease of parturition, or draught ability (which might include stamina, strength, food conversion for draught, or tractability); they might be rich milk for cheese or butter making, or fat meat to supplement a low-fat vegetable diet or for soap making. Western European selection criteria should not be imposed on populations unless they are clearly appropriate. Most important of all, selection must be under the local sustainable conditions. The herds must be managed within the natural environment for that breed and need to be exposed to the conditions prevalent in the field situation. Thus treatment for a disease or parasite to which the breed has some natural resistance should not be given unless the same treatment is freely and practically available in the field. Disease and parasite resistance is biologically expensive. If these heavy selection pressures are lifted and production pressures imposed, the population will very quickly shift such that the animals produce more milk or meat but lose the genetically controlled resistance for which they were valued.
The same is true of heat tolerance, drought resistance or the ability to survive on diets of very low nutritional value. There are a number of African programmes designed to conserve and improve disease-resistant strains, including those with trypanotolerant N'Dama cattle in the Republic of Guinea (Devilliard, 1983). There are also village-based projects beginning around the world, including the Jamnapari goat project in India (Bhattacharya, 1990) and work with the heartwater-resistant Tswana sheep, goats and cattle of Botswana (Setshwaelo, 1989), both of which are discussed in chapter 5 of this manual.

Projects for conservation, utilization and improvement of breeds should attempt to lay down periodic samples of cryogenically stored material as a long-term insurance policy. They must clearly define their objective, which should incorporate the conservation of those characteristics for which the breed has been traditionally valued. They may then take the form of a number of cooperating farms in a male progeny-testing scheme, as with the Sahiwal cattle project in India discussed in chapter 5. More frequently they are associated with open nucleus type breeding strategies. For a nucleus herd of 200 cows, the best 20 females will be selected from the villages each year and some 20 will die or be culled from the nucleus herd. This system will produce a moderate rate of improvement, although in practice the village farmers are often unwilling to give up their best females to the programme and the nucleus herd does not produce any better than the average for the village herds. This is in part because the village farmers do not give up or sell their best cows, but may also be due to a difference in management. It has been found in India that breeds developed over centuries in a very small herd situation, with very close interaction between farmer and animal, do not take well to a large herd situation with little or no association between individual animals and people. There was therefore an initial reduction in the production level of the institutional farm compared with the average for the village herds.

A larger nucleus of 1,000 trypanotolerant N'Dama cattle has been proposed for the Republic of Guinea (Devillard, 1983), which would produce a far greater expected rate of genetic improvement and supply superior males and females for the co-operating farmers as well as surplus stock for sale to farmers within Guinea or for export to neighbouring countries. The problems lie in the maintenance of such a large herd under sustainable village conditions, and in convincing the Fulani herdsmen to sell or lease their best cows to the project. There are also some problems associated with obtaining accurate details of production in the field situation. For example, when monitoring milk production and growth rate of calves, it is essential to know whether the cow is also being milked for family milk consumption, and if so, at what level. There is an argument for using research stations to investigate possible management improvements that could be practically used by the farmers: optimal dipping periods, feeding regimes, feed supplements, weaning times and so on. Selection and monitoring of breeds can then be done using national resources and large numbers of co-operating farmers. Similar farmer co-operative methods are being used to monitor and improve the production of indigenous stocks throughout the world. Some examples may be found in chapter 5.
7. A national conservation and improvement programme for Tswana goats, which are resistant to the tick-borne heartwater disease, has been established in Botswana.
8. Pantaneiro cattle are adapted to the extremely hot and humid conditions of the Pantanal region of Brazil. This breed is now being researched and characterised by the National Agricultural Research Institute in Brazil.
The term reinforce means to strengthen, and is used in psychology to refer to any stimulus which strengthens or increases the probability of a specific response. For example, if you want your dog to sit on command, you may give him a treat every time he sits for you. The dog will eventually come to understand that sitting when told to will result in a treat. The treat is reinforcing because he likes it, and it will result in him sitting when instructed to do so. Four types of consequences are commonly described: positive reinforcement, negative reinforcement, punishment, and extinction. The most common types of positive reinforcement are praise and rewards, and most of us have experienced these as both the giver and the receiver. Negative reinforcement is taking something unpleasant away in order to increase a response. Punishment refers to adding something aversive in order to decrease a behavior. When you remove something in order to decrease a behavior, this is called extinction: you are taking something away so that a response is decreased.
ESRI, 2007. Adapted with permission.

Hurricanes are among the most common and most destructive types of natural hazards on Earth. Because they occur across space and time, hurricanes can be better understood using maps, particularly digital maps within a Geographic Information Systems (GIS) environment. GIS allows you to use maps as analytical tools—not simply maps that someone else has made, but your own maps—to make decisions.

- A computer with Internet access

1. The National Oceanic and Atmospheric Administration provides a database of historical hurricane tracks. Access https://coast.noaa.gov/hurricanes/ and search hurricanes by year.
2. While the map works well for displaying hurricane data by year, you may wish to download a spreadsheet to answer question 4. You can download a large spreadsheet containing all the data you will require from https://www.ncdc.noaa.gov/ibtracs/index.php?name=wmo-data (a short loading sketch follows these steps).
3. Discuss: Why do hurricanes move in the direction that they do? Which hurricanes don’t fit the pattern?
4. Map hurricanes from several different decades. Discuss: Are hurricanes becoming more frequent with each passing decade? How does the wind speed change as each hurricane moves across the ocean and across land?
5. For the next part of this activity, download ArcExplorer Java Edition for Education (AEJEE) from http://www.esri.com/software/arcgis/explorer/arcexplorer.
6. Data from the National Atlas are bundled with a more detailed version of this North Atlantic hurricane activity, freely available in the ESRI ArcLessons library at http://www.esri.com/arclessons. Search on “Hurricanes” or the ID of 299.
7. After opening AEJEE, select the Add Data button to add the hurricanes, boundaries, and cities map layers. Using Tools, then Query Builder, select Hurricane Andrew (Name=’Andrew’). Describe the path of Hurricane Andrew. How long was its path, and how many days did this hurricane last?
8. Study the Wind_MPH and Pressure fields. Discuss: What is the relationship between wind speed and pressure? Why? What was the wind speed and pressure when Andrew became a hurricane?
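For anyone who prefers to explore the step 2 spreadsheet programmatically rather than in a spreadsheet application, the sketch below shows one possible way to load it and count storms per decade. The file name and the column names (SEASON, NAME, WMO_WIND) are assumptions based on typical IBTrACS exports and may need adjusting to match the file actually downloaded.

```python
# Minimal sketch (assumed file and column names): storms per decade and peak winds
# from the IBTrACS spreadsheet downloaded in step 2.
import pandas as pd

# Hypothetical file name; point this at the CSV you actually downloaded.
tracks = pd.read_csv("ibtracs_wmo.csv", low_memory=False)

# IBTrACS files often store numbers as text, so coerce the fields we need.
tracks["SEASON"] = pd.to_numeric(tracks["SEASON"], errors="coerce")
tracks["WMO_WIND"] = pd.to_numeric(tracks["WMO_WIND"], errors="coerce")

# One row per storm (season + name), keeping its maximum reported wind speed.
storms = (tracks.groupby(["SEASON", "NAME"], as_index=False)
                .agg(peak_wind=("WMO_WIND", "max")))

storms["decade"] = (storms["SEASON"] // 10 * 10).astype("Int64")
print(storms.groupby("decade").size())      # number of named storms per decade
print(storms.nlargest(5, "peak_wind"))      # the five most intense storms
```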
One litre of sea water contains about one billion bacteria. This represents at least one thousand species, in addition to the single-cell organisms other than bacteria—referred to as protists—which make up plankton, according to Daniel Vaulot, a researcher at the Station biologique de Roscoff, located in the Brittany region of France. Studying each of these organisms by mass-sequencing their genomes could lead to the discovery of new species. It could also help in studying species potentially interesting for fundamental research on the origins of life and climate change, or for applications in industry. Raising awareness of the possibilities of marine genomics among the wider research and industry communities is precisely what the EU-funded Marine Genomics for Users (MG4U) project is designed to do. Its coordinator, Bernard Kloareg, who is the director of the Roscoff station, is himself an advocate of marine genomics.

The types of technologies that the project is attempting to showcase include metagenomics, which has been used extensively since the 2000s. Few of the organisms found in sea water samples can be cultivated in the laboratory to extract enough DNA for genome sequencing. Instead of deciphering each genome one by one, geneticists had the idea of mass-sequencing the whole sea water sample in a single run. This involves cutting the DNA extracted from the sample into thousands of small fragments. These are then processed by high-throughput sequencing machines. As the DNA originating from each individual of a given species is randomly cut, the fragments overlap and longer sequences can therefore be reconstructed. This technique does not allow complete genomes to be deciphered. But it can help to detect unknown genes possibly belonging to new species. It can also be used to assess the presence of well-documented genes in the samples. “To do so, researchers need to compare their results with huge databases of genetic sequences”, Vaulot tells youris.com.

Originally, metagenomics helped marine biologists to study the relationship between genomes and their environment and to discover metabolic processes relevant to applications. For example, extensive sampling campaigns have been carried out by expeditions such as Tara Oceans during its round-the-globe sail from 2009 to 2011. The subsequent programme, Oceanomics, is planning to sequence the Tara samples at the Genoscope facility in Evry, France. These samples may contain bacteria or algae which host enzymes able to degrade or synthesise molecules of interest in fields such as pharmaceuticals and biofuels. By transferring the genes coding for enzymes of interest, identified as a result of metagenomics, into standard bioprocessing bacteria contained in bioreactors, these molecules could be produced on an industrial scale.

But this is not so simple. Although the potential applications brought by marine metagenomics are real, they have not yet delivered significant innovations. “Marine metagenomics is not as developed as 'terrestrial' metagenomics used, for example, in the field of human health, because investments in marine research have been lower. So far, it has not yielded a molecule that became a blockbuster,” comments Patrick Durand, head of the Biotechnology and Marine Resources Unit at Ifremer, the French research institute for exploitation of the sea, based in Nantes, France.
He tells youris.com: “It may take a while before it happens.” Indeed, even though industries such as the biotech sector are eager for novel applications, metagenomics may be the kind of tool required to bring them one step closer to finding the solutions they seek. “The biotech industry is looking for enzymes that will be combined to synthesise artificial compounds on an industrial scale,” says Jürgen Eck, CEO of a bioactive compound discovery biotech company called Brain, which is based in Zwingenberg, near Frankfurt, Germany. For this purpose, metagenomics is a useful tool for screening biodiversity in search of these enzymes, as the latter are coded by a small number of genes. However, finding actual therapeutic solutions may be much more complex. “Bioactive compounds, like potential anti-cancer agents, are often the product of complex metabolic pathways involving several genes,” Eck tells youris.com, explaining that this would make it difficult to clone them into bioprocessing bacteria capable of producing the desired compounds. He adds: “Improving the traditional approach of cultivating organisms is preferable in this case.”
Research by post-doctoral fellow Alexander Dececchi challenges long-held hypotheses about how flight first developed in birds. Furthermore, his findings raise the question of why certain species developed wings long before they could fly. Dr. Dececchi, a William E. White Post-Doctoral Fellow in the Department of Geological Sciences and Geological Engineering, used measurements from fossil records and data from modern birds to test the evolutionary explanation for the origin of birds. Dr. Dececchi and his colleagues determined that none of the previously predicted methods would have allowed pre-avian dinosaurs to take flight. “By disproving the idea that the predicted models led to the development of flight, our research is a step towards determining how flight developed and whether it evolved once or multiple times in different evolutionary lines,” he says.

Dr. Dececchi and his colleagues examined 45 specimens, representing 24 different non-avian theropod species, as well as five bird species. After determining some critical variables from the fossils — such as body mass and wing size — they used measurements from living birds to estimate wing beat, flap angle and muscular output. These values were used to build a model for different behaviours linked to the origins of flight, such as vertical leaping and wing-assisted incline running (WAIR) — a method of evasion for many ground-based modern birds that has become a favoured pathway towards the origin of flapping flight in the paleontological literature. They also tested whether any species met the requirements to take off from the ground and fly under their own power. “We know the dimensions and we know how modern birds’ muscles and anatomy work,” Dr. Dececchi says. “Using our model, if a particular species doesn’t reach the minimum thresholds for function seen in the much more derived birds — such as the ability to take off or to generate a certain amount of power — it’s safe to say they would not have been able to perform these behaviours or fly.”

The researchers found that none of the behaviours met the criteria expected in the pathway models. In fact, they found that almost all the behaviours had little or no benefit, outside of those species which evolved right before the origin of birds. When looking at WAIR specifically — the method that has been touted as an explanation for some early wing adaptations — the researchers found that it was only possible in a handful of large-winged, small-bodied species such as Microraptor, and they found no evidence to suggest its use was widespread. Dr. Dececchi says that the group’s findings suggest that wings, even those with large or ornately coloured feathers, could initially have served purposes other than flying, such as signaling or sexual selection, before the development of flight.

Dr. Dececchi explains that the question of whether flight evolved once or multiple times along multiple evolutionary tracks is an ongoing topic of debate. Many of the species studied lived tens of millions of years and thousands of miles apart, with a last common ancestor that existed 50 or 100 million years earlier — leading researchers to wonder whether flight evolved once but was lost, or whether different species stumbled upon the same solution. “There is some evidence that they evolved in parallel — there may be some differences in the details between how each taxon flew, but they tend to converge on these same answers,” says Dr. Dececchi.
“That, to me, is one of the most exciting questions that has come out of the past few decades of work in theropods.” T. Alexander Dececchi, Hans C.E. Larsson, Michael B. Habib. The wings before the bird: an evaluation of flapping-based locomotory hypotheses in bird antecedents. PeerJ, 2016; 4: e2159 DOI: 10.7717/peerj.2159
Forward DNS is a type of DNS request in which a domain name is used to obtain its corresponding IP address. A DNS server is said to resolve a domain name when it returns the name's IP address. A forward DNS request is the opposite of a reverse DNS lookup. Forward DNS is also known as a forward DNS lookup.

Forward DNS primarily allows a computer, server, smartphone or other end client to translate a domain name or email address into the address of the device that will handle the resulting communication. Although the process is completely transparent to human end users, forward DNS is a functional part of all IP-based networks, including the Internet. Forward DNS comes into play when a user types in the text form of an email address or web page URL. This text is first sent to a DNS server. The DNS server then checks its records and returns the domain's IP address. If it is unable to locate the domain's IP address, the DNS server forwards the request to another server. Eventually the DNS request is resolved and, with the numerical IP address now known, communication can continue.
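As a quick illustration of a forward lookup in practice, the sketch below resolves a host name to its IP addresses using Python's standard library; the host name is only an example and can be replaced with any domain.

```python
# Minimal sketch: a forward DNS lookup with Python's standard library.
import socket

hostname = "example.com"  # example host; substitute any domain you want to resolve

# Simplest form: one IPv4 address for the name.
print(socket.gethostbyname(hostname))

# More general form: all A/AAAA records the resolver returns.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(hostname, None):
    print(family.name, sockaddr[0])
```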
Newswise — A single-letter change in the genetic code is enough to generate blond hair in humans, in dramatic contrast to our dark-haired ancestors. A new analysis by Howard Hughes Medical Institute (HHMI) scientists has pinpointed that change, which is common in the genomes of Northern Europeans, and shown how it fine-tunes the regulation of an essential gene. “This particular genetic variation in humans is associated with blond hair, but it isn't associated with eye color or other pigmentation traits,” says David Kingsley, an HHMI investigator at Stanford University who led the study. “The specificity of the switch shows exactly how independent color changes can be encoded to produce specific traits in humans.” Kingsley and his colleagues published their findings in the June 1, 2014, issue of the journal Nature Genetics. Kingsley says a handful of genes likely determine hair color in humans, however, the precise molecular basis of the trait remains poorly understood. But Kingsley's discovery of the genetic hair-color switch didn't begin with a deep curiosity about golden locks. It began with fish. For more than a decade, Kingsley has studied the three-spined stickleback, a small fish whose marine ancestors began to colonize lakes and streams at the end of the last Ice Age. By studying how sticklebacks have adapted to habitats around the world, Kingsley is uncovering evidence of the molecular changes that drive evolution. In 2007, when his team investigated how different populations of the fish had acquired their skin colors, they discovered that changes in the same gene had driven changes in pigmentation in fish found in various lakes and streams throughout the world. They wondered if the same held true not just in the numerous bodies of water in which sticklebacks have evolved, but among other species. Genomic surveys by other groups had revealed that the gene – Kit ligand – is indeed evolutionarily significant among humans. “The very same gene that we found controlling skin color in fish showed one of the strongest signatures of selection in different human populations around the world,” Kingsley says. His team went on to show that in humans, different versions of Kit ligand were associated with differences in skin color. Furthermore, in both fish and humans, the genetic changes associated with pigmentation differences were distant from the DNA that encodes the Kit ligand protein, in regions of the genome where regulatory elements lie. “It looked like regulatory mutations in both fish and humans were changing pigment,” Kingsley says. Kingsley's subsequent stickleback studies have shown that when new traits evolve in different fish populations, changes in regulatory DNA are responsible about 85 percent of the time. Genome-wide association studies have linked many human traits to changes in regulatory DNA, as well. Tracking down specific regulatory elements in the vast expanse of the genome can be challenging, however. “We have to be kind of choosy about which regulatory elements we decide to zoom in on,” Kingsley says. “We thought human hair color was at least as interesting as stickleback skin color.” So his team focused its efforts on a human pigmentation trait that has long attracted attention in history, art, and popular culture. Kit ligand encodes a protein that aids the development of pigment-producing cells, so it made sense that changing its activity could affect hair or skin color. 
But the Kit ligand protein also plays a host of other roles throughout the body, influencing the behavior of blood stem cells, sperm or egg precursors, and neurons in the intestine. Kingsley wanted to know how alterations to the DNA surrounding this essential gene could drive changes in coloration without compromising Kit ligand's other functions. Catherine Guenther, an HHMI research specialist in Kingsley's lab, began experiments to search for regulatory switches that might specifically control hair color. She snipped out segments of human DNA from the region implicated in previous blond genetic association studies, and linked each piece to a reporter gene that produces a telltale blue color when it is switched on. When she introduced these into mice, she found that one piece of DNA switched on gene activity only in developing hair follicles. “When we found the hair follicle switch, we could then ask what's different between blonds and brunettes in northern Europe,” Kingsley said. Examining the DNA in that regulatory segment, they found a single letter of genetic code that differed between individuals with different hair colors. Their next step was to test each version's effect on the activity of the Kit ligand gene. Their preliminary experiments, conducted in cultured cells, indicated that placing the gene under the control of the “blond” switch reduced its activity by about 20 percent, as compared to the “brunette” version of the switch. The change seemed slight, but Kingsley and Guenther suspected they had identified the critical point in the DNA sequence.

The scientists next engineered mice with a Kit ligand gene placed under the control of the brunette or the blond hair enhancer. Using technology developed by Liqun Luo, who is also an HHMI investigator at Stanford, they were able to ensure that each gene was inserted in precisely the same way, so that a pair of mice differed only by the single letter in the hair follicle switch—one carrying the ancestral version, the other carrying the blond version. “Sure enough, when you look at them, that one base pair is enough to lighten the hair color of the animals, even though it is only a 20 percent difference in gene expression,” Kingsley says. “This is a good example of how fine-tuned regulatory differences may be to produce different traits. The genetic mechanism that controls blond hair doesn't alter the biology of any other part of the body. It's a good example of a trait that's skin deep—and only skin deep.”

Given Kit ligand's range of activities throughout the body, Kingsley says many such regulatory elements are likely scattered throughout the DNA that surrounds the gene. “We think the genome is littered with switches,” he says. And like the hair color switch, many of the regulatory elements that control Kit ligand and other genes may subtly adjust activity. “A little up or a little down next to key genes–rather than on or off–is enough to produce significant differences. The trick is, which switches have changed to produce which traits?

“Despite the challenges, we now clearly have the methods to link traits to particular DNA alterations. I think you will see a lot more of this type of study in the future, leading to a much better understanding of both the molecular basis of human diversity and of the susceptibility or resistance to many common diseases,” Kingsley said.
In this article, we will provide a brief overview of line charts. This type of chart displays information as a series of data points, called “markers”, connected by straight line segments. Line charts are mostly used to show data that changes over time. A line chart helps to show the relationship between two sets of values, with one data set always being dependent on the other. Line charts and scatter plots present data values in a similar way. The difference between the two formats is that in a line chart the line is created by connecting each individual data point, so that point-to-point changes are visible, whereas in a scatter chart only the individual points are plotted.
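A minimal sketch of the difference, using Python with matplotlib and a small made-up data series: the same values are drawn once as a line chart (points joined by straight segments) and once as a scatter plot (individual points only).

```python
# Minimal sketch: the same (time, value) series as a line chart and as a scatter plot.
import matplotlib.pyplot as plt

months = [1, 2, 3, 4, 5, 6]          # independent values (e.g. time)
sales  = [12, 15, 11, 18, 21, 19]    # dependent values (made-up example data)

fig, (ax_line, ax_scatter) = plt.subplots(1, 2, figsize=(8, 3))

ax_line.plot(months, sales, marker="o")   # markers joined by straight segments
ax_line.set_title("Line chart")

ax_scatter.scatter(months, sales)         # individual points only
ax_scatter.set_title("Scatter plot")

plt.show()
```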
A new study has shown that the evolutionary tree of life and the fossil record are in remarkable agreement. Via Sciencedaily: The researchers studied gaps in the fossil record, so-called ‘ghost ranges’, where the evolutionary tree indicates there should be fossils but where none have yet been found. They mapped these gaps onto the evolutionary tree and calculated statistical probabilities to find the closeness of the match. Dr Wills said: “Gaps in the fossil record can occur for a number of reasons. Only a tiny minority of animals are preserved as fossils because exceptional geological conditions are needed. Other fossils may be difficult to classify because they are incomplete; others just haven’t been found yet. “Pinning down an accurate date for some fossils can also prove difficult. For example, the oldest fossil may be so incomplete that it becomes uncertain as to which group it belongs. This is particularly true with fragments of bones. Our study made allowances for this uncertainty. “We are excited that our data show an almost perfect agreement between the evolutionary tree and the ages of fossils in the rocks. This is because it confirms that the fossil record offers an extremely accurate account of how these amazing animals evolved over time and gives clues as to how mammals and birds evolved from them.”
Diabetes is a disease that occurs when a person’s blood glucose (also called blood sugar) level becomes elevated. Blood glucose is important because it is the main source of energy for cells. Blood glucose levels are regulated by insulin, which is secreted by beta cells in the pancreas. Elevated blood glucose can damage the capillaries and can cause nerve damage, stroke, chronic kidney disease, ulcers, and death.
Types of diabetes
Type 1 – This occurs in childhood and results from the pancreas’s inability to produce enough insulin to regulate blood glucose levels. Regular insulin injections are required to regulate blood glucose.
Type 2 – This occurs in adults when cells fail to respond to insulin properly. The most common causes are excessive body weight and lack of physical exercise.
Exercise Guidelines For Diabetics
- Always consult your doctor before starting any exercise program.
- Eat a complex carbohydrate snack before any physical exercise. Always carry a rapid-acting simple carbohydrate snack if you are prone to hypoglycemia.
- Do not inject insulin into primary muscle groups, as this can reduce blood sugar and cause hypoglycemia.
- Avoid exercising if your blood sugar level is outside the range of 100 mg/dl – 300 mg/dl.
Dark matter linked to mass extinctions on Earth and comet impacts, says researcher

Planet Earth occasionally passes around and through the Milky Way’s disc, which is full of dark matter and could influence the occurrence of geological and biological phenomena, including the extinction of species such as the dinosaurs. Professor Michael Rampino, a biology professor at New York University, wrote in a paper in the journal Monthly Notices of the Royal Astronomical Society (citation below) that Earth’s movement through dark matter may interfere with the orbits of comets and lead to extra heating in the Earth’s core, both of which could be associated with mass extinction events.

The Galactic disc is the plane in which the bars, spirals and discs of disc galaxies exist. Galaxy discs typically have more dust and gas, as well as younger stars. Our solar system resides in the Galactic disc. The disc also has a concentration of dark matter – small subatomic particles which cannot be detected by their emitted radiation but whose presence can be inferred from gravitational effects on visible matter.

Michael Rampino concludes that Earth’s path around and through the Galactic disc region of the Milky Way, where our solar system resides, may be directly linked to biological and geological phenomena occurring on Earth. (Image: nyu.edu)

According to previous studies, the Earth travels around the disc-shaped Galaxy once every 250 million years. Along its wavy path, our Solar System weaves through the crowded disc about once every 30 million years. After analyzing the pattern of Earth’s route through the Galactic disc, Prof. Rampino noticed that these disc passages seem to coincide with periods of mass extinctions of life and comet impacts in our history. When the famous comet hit Earth 66 million years ago and led to the extinction of the dinosaurs, our Solar System was passing through the crowded disc.

Why the link? Why is there a correlation between Earth’s passing through the Galactic disc and the impacts and extinctions that follow? Prof. Rampino observed that while traveling through the disc, the dark matter concentrated there disturbs the pathways of comets in the area that typically orbit far from Earth in the outer Solar System. Rather than traveling far away from Earth, these comets take unusual paths, with more of them coming in our direction.

Prof. Rampino was surprised to find that with each dip through the disc, dark matter appears to accumulate within the Earth’s core. Prof. Rampino believes there is probably a link between concentrations of dark matter and the extinction of the dinosaurs. The dark matter particles eventually annihilate each other, producing considerable heat in the process. This additional heat in the Earth’s core might trigger volcanic eruptions, magnetic field reversals, changes in sea levels, and mountain building – events which show peaks every 30 million years. Therefore, Prof. Rampino suggests that “astrophysical phenomena derived from the Earth’s winding path through the Galactic disc, and the consequent accumulation of dark matter in the planet’s interior, can result in dramatic changes in Earth’s geological and biological activity.”

His model, which points to dark matter interactions with Earth as it weaves through the Milky Way, could have a broad impact on how scientists understand the biological and geological development of Earth, as well as other planets in our Galaxy. “We are fortunate enough to live on a planet that is ideal for the development of complex life.
But the history of the Earth is punctuated by large scale extinction events, some of which we struggle to explain.” “It may be that dark matter – the nature of which is still unclear but which makes up around a quarter of the universe – holds the answer. As well as being important on the largest scales, dark matter may have a direct influence on life on Earth.” He suggests that in future geologists should consider incorporating his astrophysical findings in order to better understand events that are hitherto thought to result purely from causes inherent to the Earth. Rampino adds that this model similarly provides deeper insight into the possible distribution and behavior of dark matter within the Milky Way. Citation: “Disc dark matter in the Galaxy and potential cycles of extraterrestrial impacts, mass extinctions and geological events,” Michael R. Rampino. Monthly Notices of the Royal Astronomical Society. Published online Feb 18, 2015. DOI: 10.1093/mnras/stu2708.
P.E. Central Lesson Plan: Want Ad: Healthy Eater
Purpose of Activity: The purpose of this lesson is for students to understand the differences between healthy and unhealthy eating habits and what steps they need to take in order to make healthy eating choices.
Suggested Grade Level:
Materials Needed: A newspaper with want ads; construction paper
Description of Idea
Begin the lesson with a discussion about eating habits. This may include a discussion of the Food Guide Pyramid, Dietary Guidelines, etc. Have the students give examples of healthy and unhealthy eating habits.
Example: Healthy Eating Habit = Eating fruit; Unhealthy Eating Habit = Drinking soft drinks
NOTE: Using a transparency with some examples that you have generated may be helpful.
Next, introduce the concept of a want ad. What is a want ad? What does "ad" stand for? Let the students read want ads from actual newspapers. Have the students discuss the qualities the companies were looking for in the people they wanted to hire. The students will then write their own want ads for a healthy eater. The want ads should list specific characteristics of a healthy eater.
TIP: Encourage students to think of five specific ways they could help themselves become healthy eaters, and then plug those five things into the want ad. EX: Drink a glass of milk with every meal.
After the students have made their want ads, ask them to share their want ads for healthy eaters with the rest of the class. Have the class discuss which characteristics might be the most important in becoming a healthy eater. Give the students a chart in which they will fill in the characteristics they included on their want ads. The students will take the chart home and fill in a smiley face for the days on which they followed through on the healthy eating habits listed.
in Lexington, KY. Posted on PEC: 2/18/2001. This lesson plan was provided courtesy of P.E. Central (www.pecentral.org).
Alexander Graham Bell

Alexander Graham Bell (March 3, 1847 - August 2, 1922) was a teacher, scientist, and inventor. He was the founder of the Bell Telephone Company.

Alexander Graham Bell was born in Edinburgh, Scotland. His family was known for teaching people how to speak English clearly (elocution). Both his grandfather, Alexander Bell, and his father, Alexander Melville Bell, taught elocution. His father wrote often about this and is best known for his invention and writings on Visible Speech. In his writings he explained ways of teaching people who were deaf and unable to speak. He also showed how these people could learn to speak words by watching other people's lips and reading what they were saying. Alexander Graham Bell went to the Royal High School of Edinburgh. He graduated at the age of fifteen. At the age of sixteen, he got a job as a student and teacher of elocution and music at Weston House Academy, at Elgin in Morayshire. He spent the next year at the University of Edinburgh. While still in Scotland, he became more interested in the science of sound (acoustics). He hoped to help his deaf mother. From 1866 to 1867, he was a teacher at Somersetshire College in Bath, Somerset. In 1870, when he was 23 years old, he moved with his family to Canada, where they settled at Brantford, Ontario. Bell began to study communication machines. He made a piano that could be heard far away by using electricity. In 1871 he went with his father to Montreal, Quebec in Canada, where he took a job teaching about "visible speech". His father was asked to teach about it at a large school for deaf mutes in Boston, Massachusetts, but instead he gave the job to his son. The younger Bell began teaching there in 1872. Alexander Graham Bell soon became famous in the United States for this important work. He published many writings about it in Washington, D.C. Because of this work, thousands of deaf mutes in America are now able to speak, even though they cannot hear.

In 1876, Bell was the first inventor to patent the telephone, and he helped start the Bell Telephone Company with others in July 1877. In 1879, this company joined with the New England Telephone Company to form the National Bell Telephone Company. In 1880, they formed the American Bell Telephone Company, and in 1885, American Telephone and Telegraph Company (AT&T), still a large company today. Along with Thomas Edison, Bell formed the Oriental Telephone Company on January 25, 1881.

Inventions

Bell's genius is seen in part in the eighteen patents granted in his name alone and the twelve that he shared with others. These included fifteen for the telephone and telegraph, four for the photophone, one for the phonograph, five for aeronautics, four for hydrofoils, and two for a selenium cell. In 1888, he was one of the original members of the National Geographic Society and became its second president. He was given many honors.
- The French government gave him the decoration of the Legion of Honor.
- The Royal Society of Arts in London awarded him the Albert Medal in 1902.
- The University of Würzburg, Bavaria, granted him the degree of Ph.D.

Telephone

His past experience made him ready to work more with sound and electricity. He began his studies in 1874 with a musical telegraph, in which he used an electric circuit and a magnet to make an iron reed or tongue vibrate. One day, it was found that a reed failed to respond to the current. Mr.
Bell desired his assistant, who was at the other end of the line, to pluck the reed, thinking it had stuck to the magnet. His assistant, Thomas Watson, complied, and to his surprise Bell heard the corresponding reed at his end of the line vibrate and sound the same - without any electric current to power it. A few experiments soon showed that his reed had been set in vibration by the changes in the magnetic field that the moving reed produced in the line. This discovery led him to stop using the electric battery current. His idea was that, since the circuit was never broken, all the complex vibrations of speech might be converted into currents, which in turn would reproduce the speech at a distance. Bell, with his assistant, devised a receiver consisting of a stretched film or drum with a bit of magnetised iron attached to its middle, free to vibrate in front of the pole of an electromagnet in circuit with the line. This apparatus was completed on June 2, 1875. On July 7, he instructed his assistant to make a second receiver which could be used with the first, and a few days later they were tried together, one at each end of the line, which ran from a room in the inventor's house at Boston to the cellar underneath. Bell, in the room, held one instrument in his hands, while Watson in the cellar listened at the other. The inventor spoke into his instrument, "Do you understand what I say?" and Mr. Watson rushed back upstairs and answered "Yes." The first successful two-way telephone call was not made until March 10, 1876, when Bell spoke into his device, "Mr. Watson, come here, I want to see you," and Watson answered back and came into the room to see Bell. The first long-distance telephone call was made on August 10, 1876 by Bell from the family home in Brantford, Ontario to his assistant in Paris, Ontario, some 16 km (10 mi.) away.

Metal detector

Bell is also credited with the invention of an improved metal detector in 1881, a device that made sounds when it was near metal. The device was quickly put together in an attempt to find the bullet in the body of U.S. President James Garfield. The metal detector worked, but did not find the bullet because of the metal bedframe the President was lying on. Bell gave a full description of his experiments in a paper read before the American Association for the Advancement of Science in August 1882.

Opinions

Bell was an active supporter of the eugenics movement in the United States. He was the honorary president of the Second International Congress of Eugenics, held at the American Museum of Natural History in New York in 1921. As a teacher of the deaf, Bell did not want deaf people to teach in schools for the deaf. He was also against the use of sign language. For these reasons he is not appreciated by some deaf people in the present day.
Ethernet cables are the standard cables used for almost all network connections and are often called patch cables or jumpers. In the article "How to Choose Ethernet Cable", we saw that Ethernet cables can be categorized into many types, such as straight-through and crossover Ethernet cables, UTP or STP, Cat5 or Cat6, etc. But we know little about the pins and wiring in Ethernet cables and RJ45 plugs. An RJ-45 cable contains 4 pairs of wires, each consisting of a solid-colored wire and a striped wire of the same color. There are two common wiring standards for RJ-45 wiring: T-568A and T-568B. What do they mean, and why are they important? This post will discuss the color diagrams of straight-through and crossover Ethernet cables to help you figure it out.

Straight-Through and Crossover Cables

Straight-through refers to Ethernet cables that have the same pin assignments on each end of the cable—Pin 1 on connector A goes to Pin 1 on connector B, Pin 2 to Pin 2, etc. Straight-through wired cables, seen in Figure 1, are most commonly used to connect a host to a client. For example, a straight-through wired Cat5e patch cable is used to connect computers, printers and other network client devices to the router, switch or hub (the host device in this instance). Crossover cables are similar to straight-through cables, except that the TX and RX lines are at opposite positions on either end of the cable; in other words, Pin 1 on connector A goes to Pin 3 on connector B, Pin 2 on connector A goes to Pin 6 on connector B, etc. Crossover cables are most commonly used to connect two hosts directly. What's more, the color code diagrams of these two cables are different. To create a straight-through cable, you will use either T-568B or T-568A on both ends, while to create a crossover cable, you will wire T-568A on one end and T-568B on the other end.

T-568A and T-568B Standards

T-568A and T-568B are the two wiring standards for RJ-45 connector data cable specified by the TIA/EIA-568-A wiring standards document. The T-568A standard, ratified in 1995, was replaced by the T-568B standard in 2002. The difference between the two is the position of the orange and green wire pairs. It is preferable to wire to T-568B standards if there is no pre-existing pattern used within a building. Straight-through cables, wired to either the T-568A or the T-568B standard, are used most often as patch cords for your Ethernet connections. If you require a cable to connect two Ethernet devices directly together without a hub, or to connect two hubs together, you will need to use a crossover cable instead.

Looking at a T-568A UTP Ethernet straight-through cable and an Ethernet crossover cable with a T-568B end in the image above, we see that the TX pins are connected to the corresponding RX pins, plus to plus and minus to minus. You can also see that both the blue and brown wire pairs on pins 4, 5, 7, and 8 are not used in either standard. What you may not realize is that these same pins 4, 5, 7, and 8 are not used or required in 100BASE-TX either. So why bother using these wires? For one thing, it is simply easier to make a connection with all the wires grouped together. Otherwise you will spend time trying to fit those tiny little wires into each of the corresponding holes in the RJ-45 connector.
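To make the wiring concrete, the short sketch below records the widely published T-568A and T-568B pin colors and checks that terminating one end with each standard yields exactly the Pin 1 to Pin 3 and Pin 2 to Pin 6 crossover described above.

```python
# Minimal sketch: T-568A and T-568B pin assignments (pin -> wire color)
# and the crossover mapping you get by terminating one end with each standard.

T568A = {1: "white/green", 2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",  6: "orange", 7: "white/brown",  8: "brown"}

T568B = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown", 8: "brown"}

# For each pin on the T-568A end, find the pin carrying the same wire on the T-568B end.
crossover = {pin_a: next(pin_b for pin_b, color_b in T568B.items() if color_b == color_a)
             for pin_a, color_a in T568A.items()}

print(crossover)  # {1: 3, 2: 6, 3: 1, 4: 4, 5: 5, 6: 2, 7: 7, 8: 8}
```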
The Ethernet cable color-coded wiring standards allow technicians to reliably predict how an Ethernet cable is terminated on both ends, so they can follow other technicians' work without having to guess or spend time deciphering the function and connections of each wire pair. There is no technical difference between the T568A and T568B wiring standards, so neither is superior to the other. FS.COM offers a full range of optical devices, including fiber optic cables such as LC-SC fiber cables, copper cables (Cat5/5e, Cat6), etc. If you have any requirement for our products, please send your request to us.
1 - General Sclerotinia Information
2 - Sclerotinia in Soybeans
3 - Sclerotinia in Dry Edible Beans
4 - Sclerotinia in Sunflower
5 - Sclerotinia Stem Rot in Canola
6 - Sclerotinia in Lentils
7 - Sclerotinia in Dry Peas
8 - Sclerotinia in Chick Peas

There are two other species of Sclerotinia, S. trifoliorum and S. minor. S. trifoliorum is known only on alfalfa and forage legumes in the southeastern and eastern United States.

The primary survival (overwintering) structure of S. sclerotiorum is the sclerotium. A sclerotium is a hard resting structure consisting of a light-colored interior portion called the medulla and an exterior black protective covering called the rind. The rind contains melanin pigments, which are highly resistant to degradation, while the medulla consists of fungal cells rich in β-glucans and proteins. The shape and size of sclerotia depend on the host and on where they are produced in or on infected plants.

What is the origin of sclerotia in a field? There are four primary ways that fields become infested with sclerotia. The most common is by susceptible crops or weeds being infected by ascospores coming from adjacent infested fields. The fungus then produces sclerotia on those plants and some are returned to the soil when the field is harvested. Wind-transported soil or crop debris infested with sclerotia is also known to contaminate adjacent fields. Contaminated machinery can introduce sclerotia into a field. Surface irrigation water or rain water moving naturally between fields can also move sclerotia to previously clean fields. Seed contaminated with sclerotia is another method of introducing the fungus into clean fields.

The basic disease cycle of Sclerotinia diseases begins with the overwintering of sclerotia in the soil. Sclerotia are conditioned to germinate by the overwintering process. In the growing season, overwintered sclerotia can germinate in one of two ways. Probably the most common is carpogenic germination, which results in the production of a small mushroom called an apothecium. Carpogenic germination usually requires the sclerotia to be in wet soil for one to two weeks prior to germination. The apothecium forms spores called ascospores, which are ejected into the environment. Most will fall on susceptible plants in the immediate area of the apothecia, but some can travel long distances by wind. Ascospores require free moisture plus a food base, such as senescent flower petals or damaged tissue, to produce a small colony that can then infect the plant. The pathogen produces oxalic acid and numerous enzymes that break down and degrade plant tissue. The requirement of moisture for carpogenic germination and for growth of the pathogen is the reason why rainy periods or irrigation are associated with outbreaks of disease on certain crops. Disease development is favored by moderate temperatures of 15–25 °C.

The other method of germination is myceliogenic, where the sclerotium produces mycelium. A primary crop where myceliogenic germination plays a major role in the disease cycle is Sclerotinia wilt of sunflower. Sclerotinia wilt is caused by sclerotia germinating and infecting the sunflower roots. Most other Sclerotinia or white mold diseases, such as those on dry beans, soybean, canola and sunflower head rot, are initiated by carpogenic germination and infection of above-ground plant parts by ascospores.

How long do sclerotia survive in the soil? Few studies have quantified sclerotia survival in the field.
There are many factors affecting survival, such as soil type, previous crops, the initial population of sclerotia and environmental conditions, but how and to what degree they affect survival is not well understood. High temperature and high soil moisture combined are probably the two most deleterious environmental factors. Microbial degradation, however, is the principal reason for a decline in the populations of sclerotia. There are many fungi, bacteria and other soil organisms that parasitize or utilize sclerotia as carbon sources. One reason that crop rotation is recommended for Sclerotinia is to allow the natural microbial population to degrade sclerotia. Two important fungal parasites involved in the natural degradation of sclerotia are Coniothyrium minitans and Sporidesmium sclerotivorum. Both of these fungi have been touted as possible biocontrol agents for sclerotia, and some commercial products are now available. The effect of tillage on survival of sclerotia is poorly studied, and no generalizations can be made to aid in management of the pathogen. There is evidence that leaving the sclerotia on the soil surface enhances degradation, whereas burying the sclerotia enhances survival. It is thought that the more dramatic changes in temperature and moisture on the soil surface are deleterious to sclerotia.

Because of the numerous crops infected by this pathogen, there are many strategies for control. Fungicides have been used with some success, such as with dry bean and canola. Crop rotation continues to be used for certain crops, such as sunflower, where inoculum densities in the soil play a major role in disease development. Host resistance has been an elusive goal of many control programs. Most Sclerotinia diseases are not controlled principally by host resistance. However, some moderate levels of host resistance, such as in dry beans and soybean, have been found and can aid in integrated control programs. Disease escape mechanisms via plant architecture also have a role in reducing disease. Cultural controls, such as wider row spacing or lower plant populations that reduce the microclimate favorable for disease development, are used with some crops. Sanitation practices, such as with vegetable production, and clean seed programs to keep sclerotia out of seed lots are also useful practices in some crop production systems. Biological control has only recently been tried on a commercial scale, but the extent of farmers' acceptance of this method remains to be determined. Sclerotinia continues to be a very difficult pathogen to control.
This unit builds on work done in third grade with area and in fourth grade with liquid volume to build an understanding of volume as a measurement of three-dimensional space. Students use unit cubes to understand that solid volume means “packing” a three-dimensional space with no gaps. This is different from liquid volume, where students “fill” a container. Much of the early work in the unit relies on students creating three-dimensional rectangular prisms and packing them with centimeter cubes, then counting the cubes. Next, students start to think of the cubes in layers, building the foundation for the formula V = B × h. Finally, students look at the dimensions of the rectangular prisms in relation to the total volume to understand the formula V = l × w × h. Complexity is built when students consider what happens to the total volume when one dimension is doubled or halved. Students also build visual-spatial reasoning and continue to develop their understanding of volume as an additive quantity when working to find the volume of composite rectangular prisms (a short worked example of these formulas follows the video list below). Videos to help support these concepts are linked below. This unit will address the Common Core State Standards (CCSS) 5.MD.3, 5.MD.4, 5.MD.5 and 5.NBT.2.

Students will be able to:
- find the volume of rectangular prisms using cubes and nets
- find the volume of composite figures
- understand why three-dimensional figures are measured in cubic units

Videos:
- Volume of Rectangular Prisms: Base × Height
- Volume of Rectangular Prisms: Length × Width × Height
- Volume of Composite Rectangular Prisms: Decomposing Using Unit Cubes
- Volume of Composite Rectangular Prisms: Decomposing Without Unit Cubes
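Here is the promised worked example: a small sketch (in Python, with dimensions invented purely for illustration) showing that the layer-counting formula and the edge-length formula agree, what doubling one dimension does to the total, and how a composite figure's volume is found by adding the volumes of its parts.

```python
# Volume of a rectangular prism two ways: V = B x h (area of the base times
# the height) and V = l x w x h. The dimensions are arbitrary example values.
length, width, height = 5, 3, 4                 # centimeters

base_area = length * width                      # B = l x w = 15 cm^2
volume_from_layers = base_area * height         # V = B x h = 60 cm^3
volume_from_edges = length * width * height     # V = l x w x h = 60 cm^3
assert volume_from_layers == volume_from_edges

# Doubling any single dimension doubles the volume.
doubled_height_volume = length * width * (2 * height)   # 120 cm^3
print(volume_from_layers, doubled_height_volume)

# A composite figure can be decomposed into prisms and the volumes added.
# Example: an L-shaped solid split into a 5x3x4 prism and a 2x3x4 prism.
composite_volume = (5 * 3 * 4) + (2 * 3 * 4)    # 60 + 24 = 84 cm^3
print(composite_volume)
```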
War broke out in Europe in August of 1914 following the assassination of Archduke Franz Ferdinand and his wife in Sarajevo, the capital of the ex-Turkish province of Bosnia, which had been annexed to the Austro-Hungarian Empire in 1908. The Dual Monarchy had in fact taken administrative responsibility for Bosnia-Herzegovina in 1878, therefore participating in the liquidation of the Ottoman Empire from the start. Following the 1908 Bosnian crisis Germany, taken by surprise by the actions of its ally Austria, issued an ultimatum to Russia demanding it accept the fait accompli. Russia styled herself the protector of Slavic interests in the Balkans and was extremely peeved at being unable to put her money where her mouth was - however, still shaking from the Revolution of 1905, she could do nothing. The Russian government brooded. Austria-Hungary, meanwhile, had problems of its own. The Dual Monarchy was composed of people of many different nationalities, many of whom resented being inside the borders of the Austro-Hungarian state. The existence of strong independent states on its borders - Serbia, for instance - acted as focal points for the national aspirations of the minorities in the Empire. The Bosnian crisis only made this worse. The Austro-Hungarian government increasingly believed that the only way they could secure the existence of their own state was through the destruction of the Serbian state. This was in a way a mad last gamble - they knew war would destabilise the state further, but feared it would simply implode anyway given enough time. The assassination of Archduke Ferdinand was the perfect opportunity to strike at Serbia. The issue of who is to blame for the war was and is contentious. Lenin blamed international capitalism, Hitler blamed international Jewry, and the belligerent countries blamed everyone but themselves. It is probably Germany that holds the most blame for the escalation of the war from what might have been a local one to a general, European war. This did not happen in 1908, and by looking at why, we can understand the forces at play. Why did a European war develop from the Serbian crisis of 1914, but not from the Bosnian crisis of 1908? War broke out in 1914 because, in contrast to 1908, the main antagonists concluded that this war had to be fought. There were numerous factors that changed or came to a head between 1908 and 1914 that made the various Great Powers conclude this. Each had its own pressing domestic issues and goals in foreign policy that it was believed a war would solve. In the case of Russia there was a large degree of fatalism in the decision – with a decade of failure in foreign policy behind them, the Russians were not willing to abdicate their position as a Great Power once and for all by being impotent in the face of aggression. The role of Austria-Hungary and Germany was more active – they sought to change the international status quo to one more favourable for themselves. Intransigence characterised the actions of all these countries. It was clear that Austria-Hungary intended to "deal with" Serbia once and for all, an action that Russia could not allow. After their turning away from Asia following the 1905 defeat, and the subsequent weakness they had shown in the Balkans, the Russians were well on their way to becoming insignificant in international affairs. For the self-styled protectors of the Slavs, it was not conceivable to let the "Germanic" races inflict such an insult on them.
Germany, meanwhile, chose the occasion of this local conflict to fight the general one it believed to be necessary for a reapportionment of European power. The annexation of Bosnia-Herzegovina brought the issue of the South Slavs living in the Dual Monarchy even more clearly into focus, and did not quell the domestic problems they caused the Austro-Hungarian state. They had brought another million disillusioned Slavs under their rule, and did very little to help the economy of the impoverished provinces. It was believed Serbia could exercise a dangerous pull on these people and inspire them to acts of terrorism against the Austro-Hungarian state. In 1908 the policy of the Austrian foreign ministry was not to actively seek war – rather, they handed Russia a fait accompli and let them make the decision. But by 1914 the personnel in the department had changed, and they had inherited the former's belief in an expansionist policy but none of his caution. This lack of caution stemmed from the belief that a pre-emptive strike was needed against the Serbian state to secure the existence of their own. After the increase in Serb and Croat nationalism after 1908 and the failure of economic pressure to bring Serbia to its knees, it was decided more decisive action was needed. The other consequence of the 1908 crisis had been to make the implications of the Austro-German treaty of 1879 clearer, especially to the Germans. Although it had initially been planned by Bismarck as a way of restraining Austria-Hungary and stopping it dragging Germany into a Balkans conflict, Austria-Hungary had in this instance acted of its own volition and independently of Germany. But Germany still rallied to its side, partly because Austria was the only friend they had left inside a ring of countries that seemed increasingly hostile. 1908 was an inopportune moment for the Reich government, but 1914 was more favourable. The view current in German diplomatic circles was that a war would have to be fought against France and Russia eventually if Germany was going to achieve its goal of becoming a Weltmacht (World power). Similarly, a duel with England was inevitable as the German goal of becoming a colonial power could not be achieved but at the expense of the British Empire. Although the two powers managed to reach agreement on specific colonial issues, it was the general principle that was irresolvable – conflict was seen as, sooner or later, inevitable. The advantage of 1914 over 1908 to the Germans was that Russia could be painted more convincingly as the aggressor, which was necessary both for domestic and international reasons. Strategically, this also seemed like the right time to start the war. Germany was more prepared than she had been in 1908, and although this was also true of Russia it was thought that the latter would get stronger over time. The Russians had made a remarkable recovery from the shock of 1905 and with the help of French capital they were continuing to build on it – the strategic railroads in Poland, essential for modern warfare, were also nearly complete. That the Germans were concerned primarily with their own conflict against Russia, to the detriment of the Austrian goals in the Balkans, is shown by their instructions to the Austrians when war broke out. 
They were told to strengthen Galicia against Russian offensive so that the Schlieffen Plan could be put into action, and furthermore that "in this gigantic power struggle on which we are embarking shoulder to shoulder, Serbia plays a quite subordinate role". Those in the German government that wished for a reapportionment of European power in their favour had various reasons. The naval arms build-up that Tirpitz began brought together a number of interests in German society that served both domestic and foreign policy concerns. International concerns addressed both the balance of power on the European Continent and in the World at large. On the Continental front, it was recognised that in a war Britain could blockade the German seaboard and starve them into defeat. In Weltpolitik (World politics), a strong Navy was believed necessary to force Britain to give "fair play" to Germany in Imperial questions. This challenge to British hegemony could not be taken lightly by the Brits, and since it was begun a confrontation was looming. But the Germans were unwilling to abandon the Tirpitz program because it was believed to be acting as such a unifying force in German society, as well as fulfilling foreign policy goals. The Reich government wanted to give the German nation a unifying force to rally behind – some national goal to lure the people away from the Social Revolutionaries. In 1907 the program was accelerated still faster as the fortress mentality of the Reich grew – a fact which had not a little to do with the containment policy of Britain, which had just finalised the Triple Entente agreement. Fortified by the increased tempo of its armaments production and believing itself surrounded by hostile powers, Germany's intransigence was assured by 1914. The Austrians had wanted Germany to head off the Russian threat while they sorted out their problems in the Balkans. This had worked in 1909, when the Russians had capitulated after Germany’s ultimatum. Russian society and its military were still recovering from the defeat of 1904 - 05 in the Russo-Japanese war and the subsequent Revolution, and the state was militarily unprepared for war in 1909. It was also feared that a war would lead to Revolution and the end of Tsarist rule. However, between then and the Serbian crisis, Russian capabilities and attitudes changed. Russia had rearmed and could now contemplate fighting a war against the Central Powers, however undesirable the consequences might be for society. More important was the diplomatic failures that had dogged Russia ever since 1905, especially in the Balkans. The Bosnian crisis was in itself a turning point, as the Russians had not stood by their ally when the Germans had. They had subsequently failed to support Serbia in its goal of acquiring an Adriatic port, refused to support Montenegro’s demand for Scutari and failed to threaten war over the Von Sanders mission to command an army in Constantinople as late as January 1914. The national press, not necessarily a deciding factor in decisions, but an important one for testing the social mood, was angry over Russian capitulation. As well as the desire to make up for these past weaknesses, the range of options available was narrower in July 1914. Austria-Hungary’s goals were much more comprehensive than ever before, and Russia would be abdicating its role as a Great Power if it failed to intervene. This concept of prestige is important for understanding the reasons statesmen went to war in 1914, apparently against economic rationality. 
A more practical reason is that France was willing to support Russia in 1914, which it had not been in 1908. French support of Russia was important to the Russians, and British support of France was important to the French. Both Britain and France entered the war for primarily negative reasons – if they had not done, then their national security would have been threatened in both the long and short term by a dominant Germany. But both nations could only enter the war if they managed to convince their respective populations that the nature of the actions was defensive, so that they could maintain national unity. Unlike Germany, neither of these countries sought war to advance their own interests. Nor did France go to war over Serbia or Britain go to war over Belgian neutrality. They went to war because German hubris had reached a new level and because, in the long run, it threatened them both. Britain, never a huge land power on the Continent, feared that the global balance of power would tip in Germany's favour. France had a long history of problems with Germany and didn't want Germany to become more powerful and to cause it bigger problems in the future. The difference between 1914 and 1908 was that the Central Powers were now pursuing their goals to restructure the international situation actively and aggressively. The confrontation had come and would have to be faced down. In the final analysis, it was the different capabilities and goals of the Great Powers in 1914 as opposed to 1908 that made war happen at that particular point. 1908 and the years since had been full of instances that proved that the goals of the Central Powers and of Russia in the Balkans were mutually exclusive. Austria-Hungary had become convinced that the gains Serbia made in the liquidation of the Ottoman Empire's European domains made it a threat whose existence could not be tolerated. Meanwhile, Russia could not afford to allow itself to be relegated to the status of a third-rate power which failed to act as the protector of Slavs that it portrayed itself as. German militarism and belief in Weltpolitik had reached new heights as their capabilities had increased. Many of these factors had a root in the domestic politics of the countries concerned, especially the need by the conservative regimes involved – Russia, Austria-Hungary and Germany – to stabilise their fragile political systems. A short, successful war was sought to change the international status quo in their favour and shore up support at home by achieving national unity through war. Retrospectively, this does not appear to have been a very rational decision, and we must acknowledge the role that emotion and sentiment played when statesmen considered their actions. But it is equally important to recognise that, helped along by propaganda portraying the enemy as the aggressor, such policies had a chance of achieving national unity. In 1914 the Austrians, Germans and Russians believed that the moment was opportune to do this and, further, to change the international situation in their favour. J. Joll, The Origins of the First World War (1984) M. Trachtenberg, "The Meaning of Mobilisation in 1914", International Security 15/3 (1991) V.R Berghahn, Germany and the Approach of War in 1914 (2nd ed., 1993) K. Wilson, Decisions for War, 1914 (1995) Samuel R. 
Williamson, Austria-Hungary and the Origins of the First World War (1991) D.C.B. Lieven, Russia and the Origins of the First World War (1983) L.C.F. Turner, "The Russian Mobilisation in 1914" in P.M. Kennedy (ed.), The War Plans of the Great Powers 1880–1914 (1985) R.J.W. Evans and H. Pogge von Strandmann (eds), The Coming of the First World War (1988) I survey the historiography of the origins of the war in my node in World War I. European tribal nationalism may also be of interest for an understanding of Panslav and Neoslav ideology, as well as their Teutonic equivalents. Many, many books have been written with the title of this node. James Joll's of 1984 is particularly salient and a good overview.
Absolutism In England World History: (17th-19th century) Name/School: Carol M. Conti, Blackstone-Millville Regional High School Grade Level: 9 Topic: Absolutism in England Lesson: England: From Absolutism to Limited/Constitutional Monarchy Overview: Using objects from the Historic Deerfield collection to highlight England’s move from Absolutism to Limited/Constitutional Monarchy. Time: one class period (45 minutes) - Student’s notes on English monarchs, 1603-1714 - Overhead projector - Transparency of 18th century Delftware bowls, (Images courtesy of Historic Deerfield) - Punch Bowl containing Queen Anne’s Image – HD76.006 - Punch Bowl containing inscription of George & Sarah Jenings – HD57.109 - Concepts (Big idea/central theme): Using objects from the Historic Deerfield collection to highlight England’s move from Absolutism to limited/constitutional monarchy. - Content (What students should know): Identify how Delftware bowls created in England reflect both a move to limited/constitutional monarchy and a growing sense of individualism. - Skills (What students should be able to do): Develop skills necessary for object study and recognize historical significance of objects. - In the previous class, students will have analyzed the factors that led England to move from Absolutism in the 17th-century to a limited monarchy by the18th-century by examining how factors either strengthened or weakened the power of the monarch. In addition, students will have experienced a prior lesson in which they have analyzed objects as a part of historical understanding. - Opening activity (homework review): list three facts about Delftware; list two places in which Delftware was made, list one reason why Delftware was popular in the colonies. Use this activity as a means to review the homework assignment, which provided students with background about the Delftware industry. - Divide the class in half. Group A will be given a picture of the Delftware bowl containing Queen Anne’s image. Group B will be given a picture of the Delftware bowl containing the names of a British couple who were to be married. Both groups should answer the guiding questions. - A. What is this object? - B. What would this object have been used for? - C. What image(s) is contained within the object? - D. Both of these objects were created in the 18th century in Britain. Approximately when do you think this object was created? - Pair up students; one from group A and one from group B. Students should share their images and their answers to the guiding questions. Working with their partner, students should use that information as well as their prior knowledge to determine the following: How can these objects be used to support the idea that England moved from Absolutism to limited/constitutional monarchy? How do these objects reflect a growing sense of individualism in England? - This lesson is part of a larger unit on Absolutism. For the unit evaluation project, students, working with a partner or individually, create a guide to absolute rule (based on the popular “Idiots/Dummies Guides” in which they demonstrate the steps one needs to obtain and hold absolute rule, as well as the steps one needs to avoid. As the final project must include text and images, this concept should be incorporated in the guide. - There would also be a quiz following the completion of this segment of the unit. This quiz would include an evaluate question relating to this lesson. 
Extension Possibilities/Interdisciplinary Connections:
- Students could create their own “Delftware” that reflects their sense of individualism or an idea important to them.
Tips and Reflections from Author
- I hope to give each student a copy of the image; however, given budget constraints I may have to put the image on an overhead.
History/Social Science Curriculum Frameworks Learning Standards:
WHII.2 – Explain why England was the main exception to the growth of absolutism in royal power in Europe.
In this interactive, students use logic and mathematical skill to place animals at the correct points on a Cartesian graph representing the cardinal directions. Then, they use the Pythagorean theorem to determine the distances between points. The riddles in the interactive, including one requiring an understanding of rate, have randomized values so that students can place points at different locations and calculate different distances. The activity provides a review of concepts related to determining the distances between points on a Cartesian graph using the Pythagorean theorem and a response sheet to help students work with the interactive. This resource is part of the Math at the Core: Middle School Collection.
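As a quick reference for the distance step, here is a minimal sketch (in Python; the coordinates are made-up example values, not ones from the interactive) of the Pythagorean-theorem calculation the activity practices:

```python
import math

def distance(p, q):
    """Distance between points p and q on a Cartesian graph.

    The horizontal and vertical separations are the legs of a right
    triangle, so the distance is the hypotenuse: sqrt(dx^2 + dy^2).
    """
    dx = q[0] - p[0]
    dy = q[1] - p[1]
    return math.sqrt(dx**2 + dy**2)

# Example: an animal placed 3 units east and 4 units north of another
# is 5 units away, since sqrt(3^2 + 4^2) = sqrt(25) = 5.
print(distance((0, 0), (3, 4)))   # 5.0
```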
August 9, 2012 Our Early Human Ancestors – Hominins – Had Varying Diet Preferences April Flowers for redOrbit.com - Your Universe Online An international team of scientists has reconstructed the dietary preferences of three groups of hominins found in South Africa. The paper, “Evidence for diet but not landscape use in South African early hominins," is a joint effort between the Ecole Normale Supérieure, the Université de Toulouse Paul Sabatier, and the University of the Witwatersrand, and has been selected for Advanced Online Publication in the journal Nature. The research sheds more light on the diet and home ranges of the early hominins belonging to three different genera, notably Australopithecus, Paranthropus and Homo, that were discovered at sites such as Sterkfontein, Swartkrans, and Kromdraai in the Cradle of Humankind. The Cradle of Humankind, about 50 kilometers northwest of Johannesburg, South Africa, has produced a large number of hominid fossils, including some of the oldest ever found. Hominin is a fairly new designation, a bit more specific than the older term Hominid. Hominids now include all modern and extinct Great Apes (that is, modern humans, chimpanzees, gorillas, and orangutans and all their immediate ancestors), while the newer term hominin covers modern humans, extinct human species, and all our immediate ancestors (including members of the genera Homo, Australopithecus, Paranthropus and Ardipithecus). The scientists analyzed the fossil teeth. Signatures of chemical elements found in trace amounts in the tooth enamel of the three fossil genera are indicators of what South African hominins ate and what their habitat preferences were. Strontium and barium levels in organic tissues, including teeth, decrease in animals higher in the food chain. The scientists used a laser ablation device, which allowed them to sample very small quantities of fossil material for analysis. Since the laser beam was pointed along the growth prisms of dental enamel, it was possible to reconstruct the dietary changes for each hominin individual. Results of the study indicated that Australopithecus (a predecessor of early Homo, which existed before the other two genera evolved about 2 million years ago) had a more varied diet than early Homo. Its diet was also more variable than that of another distant human relative known as Paranthropus. According to the team, Paranthropus had a primarily herbivorous diet, while the diet of Homo included a greater consumption of meat. Australopithecus probably ate both meat and the leaves and fruits of woody plants. The composition of this diet may have varied seasonally. Francis Thackeray, Director of the Institute for Human Evolution at Wits University, states that the greater consumption of meat in the diet of early forms of Homo could have contributed to the increase in brain size in this genus. Though their dietary habits differed, the results of the study show that all three groups had similar-sized home ranges. The scientists also measured the strontium isotope composition of dental enamel. Strontium isotope compositions are free of dietary effects but are characteristic of the geological substrate on which the animals lived. According to the results, all the hominins lived in the same general area, not far from the caves where their bones and teeth are found today.
Professor Vincent Balter of the Geological Laboratory of Lyon in France suggests that, up until two million years ago in South Africa, the Australopithecines were generalists, but they then gave up their broad niche to Paranthropus and Homo, both of which were more specialized than their common ancestor.
Using the Unit Circle to Prove the Law of Sines for Obtuse Triangles
Lesson 3 of 5
Objective: SWBAT connect the trigonometric ratios with the unit circle

Up until this point, my students still have the idea of an angle as two rays with a common vertex. So this is a major conceptual shift. I start by providing some context with physics. I explain how in physics there are two types of motion: linear motion and angular motion. Linear motion, I explain, is motion in straight lines. Angular motion, on the other hand, is motion along a circular arc. So, for example, when a track athlete is running the 100 meter dash, we say they are running at a velocity of 10 meters per second. But if an ice skater is spinning during a routine, we might say that she is rotating with an angular velocity of 700 degrees per second. After this brief anecdote, I will give a definition of an angle measure as a distance along a circular arc. To bring this definition to life, I show students a Geometer's Sketchpad sketch that you can see in the following video.

Earlier in the course, I've taught a lesson on the Law of Sines. In that lesson, we only proved the law for acute triangles because the proof for obtuse triangles relied on students having some knowledge of the unit circle. So at the time, I promised my students that later in the year, I would come back and teach them about the unit circle so that we could prove the law of sines for obtuse triangles. Hence, this lesson. I start by providing a basic introduction to the unit circle and I develop the identity sin(180° − x) = sin x. Finally students prove the law of sines for obtuse angles. All of that takes place using the Law of Sines for Obtuse Triangles handout as a medium. I do some direct teaching on the unit circle and trigonometry thereof. I also give a pretty direct explanation of the sin(180° − x) = sin x relationship. I just want students to know this information and understand the concepts involved. Once students get to the last two pages of the handout, they will be working independently to apply what they learned earlier and what they have learned in this section of the lesson.
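For reference, here is the unit-circle argument in compact form (a sketch of the reasoning as it might appear on the board, not the wording of the handout itself): the points at angle x and at angle 180° − x are mirror images across the vertical axis, so they share a y-coordinate.

```latex
% The point on the unit circle at angle t has coordinates (cos t, sin t).
\begin{align*}
P_1 &= (\cos x,\; \sin x)
    && \text{point at angle } x\\
P_2 &= \bigl(\cos(180^\circ - x),\; \sin(180^\circ - x)\bigr) = (-\cos x,\; \sin x)
    && \text{its mirror image across the } y\text{-axis}\\
\Rightarrow\quad \sin(180^\circ - x) &= \sin x
    && \text{the two points share the same } y\text{-coordinate.}
\end{align*}
```

In the obtuse case of the Law of Sines proof, this identity is what lets the altitude argument replace sin(180° − C) with sin C when the foot of the altitude falls outside the triangle.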
Domain of a function

To understand what the domain of a function is, it is important to understand what an ordered pair is. An ordered pair is a pair of numbers inside parentheses, such as (5, 6). Generally speaking you can write (x, y); x is called the x-coordinate and y is called the y-coordinate. If you have more than one ordered pair, you call this a set of ordered pairs, or a relation.

Basically, the domain of a function consists of the first coordinates (x-coordinates) of a set of ordered pairs or relation. For example, take a look at the following relation or set of ordered pairs: (1, 2), (2, 4), (3, 6), (4, 8), (5, 10), (6, 12), (7, 14). The domain is 1, 2, 3, 4, 5, 6, 7. We will not focus on the range too much here; this lesson is about the domain of a function. However, the range consists of the second coordinates, or 2, 4, 6, 8, 10, 12, 14.

Let's say you have a business (selling books) and your business follows the following model:
Sell 3 books, make 12 dollars. (3, 12)
Sell 4 books, make 16 dollars. (4, 16)
Sell 5 books, make 20 dollars. (5, 20)
Sell 6 books, make 24 dollars. (6, 24)
The domain of your business is 3, 4, 5, and 6. Pretend now that you can sell unlimited books (3, 4, 5, 6, 7, ...). Your domain in this case will be all whole numbers. You may then need a more convenient way to represent your business situation. A close look at your business model shows that the y-coordinate equals the x-coordinate × 4, or y = 4x. You can write (x, 4x). In this case, the domain is x, and x represents all whole numbers — your entire domain for this situation. In reality, it makes more sense for you to sell unlimited books. Thus, when the domain is only 3, 4, 5, and 6, we call this type of domain a restricted domain, since you restrict yourself only to a portion of your entire domain.

In some cases, some value(s) must be excluded from your domain in order for things to make sense. Consider for instance all integers and their reciprocals, shown below as ordered pairs: ..., (-4, 1/-4), (-3, 1/-3), (-2, 1/-2), (-1, 1/-1), (0, 1/0), (1, 1/1), (2, 1/2), (3, 1/3), (4, 1/4), ... One of these domain values will not make sense. Do you know which one? It is 0. If the domain value is 0, then 1/0 does not make sense, since 1/0 is not defined. Instead of writing all these ordered pairs, you could just write (x, 1/x) and say that the domain of definition is x such that x is not equal to 0. In general, the domain of definition of any rational expression is any number except those that make the denominator equal to 0.

What is the domain of (6x + 7)/(x − 5)? The denominator equals 0 when x − 5 = 0, or x = 5. The domain for this rational expression is any number except 5.

What is the domain of (−x + 5)/(x² + 4)? The denominator equals 0 when x² + 4 = 0, but x² + 4 is never equal to 0. Why? Because x² is never negative, no matter what number you replace x with. For example, 4² = 16 and 16 is positive, so 16 + 4 is still positive; 5² = 25 and 25 is positive, so 25 + 4 is still positive. However, if you change the denominator to x² − 4, the denominator will be 0 for some numbers: x² − 4 = 0 when x = 2 and x = −2, since 2² − 4 = 4 − 4 = 0 and (−2)² − 4 = 4 − 4 = 0. The domain in this case will be any number except 2 and −2.

Consider now all integers and their square roots, shown below as ordered pairs: ..., (-4, √-4), (-3, √-3), (-2, √-2), (-1, √-1), (0, √0), (1, √1), (2, √2), (3, √3), (4, √4), ... Many of these domain values will not make sense. Do you know which ones? They are -4, -3, -2, and -1.
For any of these domain values, the square root does not exist — at least it does not exist for real numbers. It does exist for complex numbers, but that is a completely different story that we will not consider here. Our assumption here is that we are working with real numbers only when looking for the domain of a function, and the square root does not exist for real numbers that are negative! Instead of writing all these ordered pairs, you could just write (x, √x) and say that the domain of definition is x such that x is greater than or equal to 0.

What is the domain of √(x − 5)? When you deal with square roots, the number under the square root sign is called the radicand. √(x − 5) is defined when the radicand x − 5 is greater than or equal to 0:
x − 5 ≥ 0
x − 5 + 5 ≥ 0 + 5
x ≥ 5
The domain of definition is any number greater than or equal to 5.

As you can see, a function does not always make sense for every value of x. It is your job to find the values that must be excluded when you look for the domain of a function.
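The two examples above (a zero denominator and a negative radicand) can also be checked mechanically. Here is a minimal sketch using the SymPy library — assumed to be installed; it is not part of the original lesson — that finds the excluded value for (6x + 7)/(x − 5) and the allowed values for √(x − 5):

```python
# A small check of the two domain examples above, using SymPy.
from sympy import S, Symbol, solve_univariate_inequality, solveset

x = Symbol('x', real=True)

# Rational expression (6x + 7)/(x - 5): exclude values where the denominator is 0.
excluded = solveset(x - 5, x, domain=S.Reals)
print(excluded)   # the set {5}, so the domain is every real number except 5

# Square-root expression sqrt(x - 5): the radicand must be greater than or equal to 0.
allowed = solve_univariate_inequality(x - 5 >= 0, x)
print(allowed)    # equivalent to x >= 5, matching the hand calculation
```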
There are three main points Karl Marx describes in "The Commodity", which is Chapter One of "Das Kapital". He describes basic principles of value and exchange after defining the term commodity. Marx uses the term commodity to mean an external object which satisfies either directly or indirectly a human need. The commodity can be a literal object such as corn or iron. It can also be an abstract item such as art which satisfies the human condition. Marx introduces the idea of value through an examination of three types of value. The first is the "use-value" of an object. This is how useful the commodity is to society. A commodity by itself has no inherent use-value, unless someone wants it. Therefore, the use-value of a commodity is tied directly to its demand. The use-value of a commodity will differ among people, but society will eventually decide a generic use-value based on demand prevalence. Use-value is also connected to "exchange-value", which is the amount of one commodity needed to trade for another commodity. In theory, a certain amount of corn will equal an amount of iron in an equal trade. This is the exchange value. It will also fluctuate according to demand and the use-value of each commodity. In order to properly establish the exchange-value, each commodity must share some other aspect so a fair equation can be calculated. This aspect is the third point and is described as "value". Value, according to Marx, represents the effort to produce the commodity. Value will increase as effort to produce increases. The inverse is also true. Value is not a set aspect of production but depends on demand. A commodity without demand is useless and therefore it has no value.
Pompe disease is a rare (estimated at 1 in every 40,000 births), inherited and often fatal disorder that disables the heart and muscles. It is caused by mutations in a gene that makes an enzyme called alpha-glucosidase (GAA). Normally, the body uses GAA to break down glycogen, a stored form of sugar used for energy. But in Pompe disease, mutations in the GAA gene reduce or completely eliminate this essential enzyme. Excessive amounts of glycogen accumulate everywhere in the body, but the cells of the heart and skeletal muscles are the most seriously affected. Researchers have identified up to 70 different mutations in the GAA gene that cause the symptoms of Pompe disease, which can vary widely in terms of age of onset and severity. The severity of the disease and the age of onset are related to the degree of enzyme deficiency. Early onset (or infantile Pompe disease is the result of complete or near complete deficiency of GAA. Symptoms begin in the first months of life, with feeding problems, poor weight gain, muscle weakness, floppiness, and head lag. Respiratory difficulties are often complicated by lung infections. The heart is grossly enlarged. More than half of all infants with Pompe disease also have enlarged tongues. Most babies with Pompe disease die from cardiac or respiratory complications before their first birthday. Late onset (or juvenile/adult) Pompe disease is the result of a partial deficiency of GAA. The onset can be as early as the first decade of childhood or as late as the sixth decade of adulthood. The primary symptom is muscle weakness progressing to respiratory weakness and death from respiratory failure after a course lasting several years. The heart may be involved but it will not be grossly enlarged. A diagnosis of Pompe disease can be confirmed by screening for the common genetic mutations or measuring the level of GAA enzyme activity in a blood sample -- a test that has 100 percent accuracy. Once Pompe disease is diagnosed, testing of all family members and consultation with a professional geneticist is recommended. Carriers are most reliably identified via genetic mutation analysis.
An IP address is an identifying token used to route communications over networks supporting the Internet Protocol. Most internal networks as well as the Internet use the Internet Protocol. A port is an additional hierarchical level of identification that allows two software applications to communicate with one another. When a connection or FORWARDER is created, an IP address and port number must be supplied. The address must uniquely identify a machine running the message server and the port specified must be the port the server is configured to listen on for incoming connection requests. The IP address can be entered in any format recognized by the underlying network system. TCP/IP is the mechanism used to establish connections, so machine addresses can always be entered in dotted quad format, i.e. xxx.xxx.xxx.xxx. Typically, however, the domain name is used instead, leaving the conversion to the operating system and its minions, e.g. www.DiamondSierra.com. For connections across LANs it is usually sufficient to simply specify the computer name, again leaving the operating system to resolve the mapping from machine name to IP address. Note that the address “127.0.0.1” will always resolve to the local machine. Workspaces are created new with a connection named ‘Local’ and the address for that connection is specified as “127.0.0.1”. The pseudoname “localhost” is also usually configured to resolve to the local machine as well. The port number may be any integer between 1 and 32,767, but must be the one the target message server is configured to listen on. It is recommended that the same port value be used for all servers unless there is a conflict. Port numbers less than 1024 may be reserved for use by other applications.
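As a generic illustration of the two pieces of information a connection needs — an address (in any of the formats described above) and a port — here is a short Python sketch; it is not specific to this message server, and the host and port values are placeholders:

```python
import socket

HOST = "127.0.0.1"   # placeholder: a dotted quad, a LAN machine name, or a domain name all work
PORT = 5000          # placeholder: must match the port the target server listens on

# Name resolution: the operating system maps whatever form the address takes
# (dotted quad, machine name, or fully qualified domain name) to an IP address.
for *_, sockaddr in socket.getaddrinfo(HOST, PORT, type=socket.SOCK_STREAM):
    print("resolved to", sockaddr)

# A TCP connection is then identified by the (address, port) pair.
with socket.create_connection((HOST, PORT), timeout=5) as conn:
    conn.sendall(b"hello\n")   # example payload; a real client would follow the server's protocol
```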
T'ang (täng), dynasty of China that ruled from 618 to 907. It was founded by Li Yuan and his son Li Shih-min, with the aid of Turkish allies. The early strength of the T'ang was built directly upon the excellent system of communications and administration established by the Sui. At first the neighboring peoples, nomadic and civilized, were held in check, and by the mid-7th cent. the T'ang occupied or controlled large portions of Manchuria, Mongolia, Tibet, and Turkistan. During the T'ang China was open to foreign ideas and developed trade with neighboring countries and Central Asia. While the introduction of foreign music and dances enriched the T'ang culture, the Chinese Confucian culture and administrative system had profound influence in Korea, Japan, and Vietnam. Sculpture flourished (T'ang horses are especially noted) and the painting (of which few examples have survived) is considered superior. In literature poetry was the most highly developed form; Li Po (701–62), Tu Fu (712–70), and Po Chu-I (772–846) were the most distinguished poets. The classics of Confucianism were closely studied and provided the basis for the civil-service examinations that were to assume great importance later (see Chinese examination system). Although religious toleration was usually practiced, foreign cults were sometimes proscribed; Buddhism was suppressed in the Hui-ch'ang period, and many Buddhist monasteries were dissolved, at great profit to the state treasury. The high-water mark of territorial expansion and political unity was reached during the reign of Emperor Hsuan Tsung (712–56). Defeat by the Arabs at the Talas River in W Turkistan (751) checked T'ang ambitions in the west, and the costly struggle against the An Lu-shan rebellion (755–63) finally exhausted the empire. Warlord governors turned many provinces into autonomous personal domains. The vigor of the early T'ang administration quickly declined, and control over border regions was lost, especially to the Uigurs, who became dominant in Mongolia. In the 9th cent. local maladministration became widespread, and revolts broke out in the south and in Tibet. After the T'ang collapse there was great disorder until the establishment of the Sung dynasty in 960. See E. G. Pulleyblank, The Background of the Rebellion of An Lu-shan (1955); E. O. Reischauer, Ennin's Travels in T'ang China (1955); A. F. Wright and P. C. Twitchett, ed., Perspectives on the T'ang (1973); D. Twitchett, The Cambridge History of China (Vol. 3, 1979); H. J. Wechsler, Offerings of Jade and Silk: Ritual and Symbol in the Legitimation of the T'ang Dynasty (1985); C. Hartman, Han Yu and the T'ang Search for Unity (1986). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Anticipatory grief refers to a grief reaction that occurs in anticipation of an impending death. While this term is usually used in connection with spouses, other people — and even the dying person — can experience anticipatory grief themselves. This emotional phenomenon can include some of the usual symptoms of grief, including depression and extreme concern for or attention to the dying person, but recovery and acceptance do not occur. It does not speed the grieving period after death.

Anticipatory grief is the subject of controversy. To the extent that grief requires loss by definition, anticipatory grief primarily is mourning over the immediate loss of everyday security and normalcy, rather than the future loss of the terminally ill person. Anticipatory grief does not always occur, and may be quite rare. While many people use this term very loosely, typical feelings of sadness or of being overwhelmed by the needs of a terminally ill person are not usually accepted as an actual grief reaction by researchers. Some people believe that the anticipation of loss frequently intensifies attachment to the person.

This page uses Creative Commons Licensed content from Wikipedia.
Children with Autism Spectrum Disorder (ASD) often have difficulties with communication and reciprocal social interaction skills (for example difficulties in being able to have a friend or engaging in interactive conversations). In her previous research, Professor Cassell developed a technology called ‘Virtual Peers' - life-size 3D animated characters that look like children and are capable of interacting, sharing real toys, and responding to children's input. For typically developing children, Professor Cassell showed that ‘Virtual Peers' can help increase children's emerging literacy and social behaviors and real life social interactions. So under Professor Cassell's guidance, Dr Leventhal plans to design and evaluate a computer system that allows children with ASD to interact with a life-size ‘Authorable Virtual Peer'. This computer system will test children with ASD to determine whether engaging them on narrative task with a ‘virtual partner' (whom they themselves can program) can, for example, help practice taking turns, and whether the children through creating and controlling how the virtual peer communicates, will develop a better understanding of putting together their own communications and reciprocal social interactions. What this means to individuals with autism: The study may provide important information about the underlying mechanisms of communication and social reciprocity in ASD and provide an innovative intervention.
Heat engines require cooling in order to turn heat energy into mechanical energy (and then, via a turbine-connected generator, into electrical energy). This is an unavoidable physical principle, typically analysed via the Carnot cycle. Usually, this cooling requirement uses water.

Why do I raise this point? Because it seems to be a source of much confusion (innocent and deliberate) amongst the energy illiterate, especially when mounted as an argument against nuclear energy generation (and, implicitly, as a reason for adopting renewable energy). For instance, Friends of the Earth have decried:

Nuclear power plants consume large amounts of water – 35–65 million litres daily. Indeed nuclear power is the thirstiest of all energy sources. A December 2006 report by the Commonwealth Department of Parliamentary Services states: “Per megawatt existing nuclear power stations use and consume more water than power stations using other fuel sources. Depending on the cooling technology utilised, the water requirements for a nuclear power station can vary between 20 to 83 per cent more than for other power stations.” Global warming and water shortages are likely to exacerbate problems experienced by the nuclear power industry during heatwaves in recent years. Nuclear power plants in several countries, including France and the US, have had to operate at reduced capacity, or to shut down temporarily, because of reduced water supply or to avoid breaching regulations limiting the heat of expelled water.

So what’s the story? Are water limitations and discharge regulations destined to be a major limiting factor for nuclear power, especially for places that are experiencing increasing water shortages, such as Australia? The short answer is no — this is classic FUD. For the longer answer, read on.

All thermal power plants, by definition, make use of heat engines with heat exchangers, and so require cooling (although this need can be reduced in various ways, as explained below). This includes coal-fired, nuclear fission, oil-fired, conventional gas-fired, solar thermal and geothermal power stations. The renewable energy sources that don’t have this cooling requirement are hydropower, wind, wave, tidal and solar photovoltaic power. Water is used in two ways in thermal power plants: (a) Internal steam cycle: to create steam via the energy source (fossil fuel combustion, fission chain reaction, heat exchange with deep rocks [hot dry rock geothermal] or a heat transfer fluid [concentrating solar power]) and convey it to an electricity-generating turbine, and (b) Cooling cycle: to cool and condense the after-turbine steam (this condensation dramatically decreases the volume of the expanded steam, creating a suction vacuum which draws it through the turbine blades), and then to discharge surplus heat to the environment.
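To see why so much heat has to be rejected through the cooling cycle, consider the Carnot limit on any heat engine operating between a hot-side temperature T_H and a cold-side (cooling water) temperature T_C. The temperatures below are illustrative round numbers, not figures for any particular plant:

```latex
% Maximum (Carnot) efficiency between a hot reservoir T_H and a cold reservoir T_C:
\eta_{\max} = 1 - \frac{T_C}{T_H}
% With illustrative values T_H = 550\,\mathrm{K} (steam) and T_C = 300\,\mathrm{K} (cooling water):
\eta_{\max} = 1 - \frac{300}{550} \approx 0.45
% So at most about 45\% of the heat input can become work, and real plants achieve less.
% The remaining heat -- more than half of the input -- must be rejected to the
% environment, which is exactly what the cooling cycle described above carries away.
```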
The sod house or "soddy" was a corollary to the log cabin during frontier settlement of Canada and the United States. The prairie lacked standard building materials such as wood or stone; however, sod from thickly-rooted prairie grass was abundant. Prairie grass had a much thicker, tougher root structure than modern landscaping grass. Construction of a sod house involved cutting patches of sod in rectangles, often 2'×1'×6" (600×300×150 mm) long, and piling them into walls. Builders employed a variety of roofing methods. Sod houses accommodate normal doors and windows. The resulting structure was a well-insulated but damp dwelling that was very inexpensive. Sod houses required frequent maintenance and were vulnerable to rain damage. Stucco or wood panels often protected the outer walls. Canvas or plaster often lined the interior walls. L'Anse aux Meadows, the site of the pioneering 10th-11th century CE Norse settlement near the northern tip of Newfoundland, has reconstructions of eight sod houses in their original locations, used for various purposes when built by Norse settlers there a millennium ago
May also be called: Hemophilia B

Hemophilia is a disease that prevents blood from clotting properly. A clot helps stop bleeding after a cut or injury. In factor IX deficiency (hemophilia B), the body doesn't make enough factor IX (factor 9), one of the substances the body needs to form a clot. When most people get a cut, the body naturally protects itself. Sticky cells in the blood called platelets go to where the bleeding is and plug the hole. This is the first step in the clotting process. When the platelets plug the hole, they release chemicals that attract more sticky platelets and also activate various proteins in the blood known as clotting factors. These proteins mix with the platelets to form fibers, and these fibers make the clot stronger and stop the bleeding. Our bodies have 12 clotting factors that work together in this process (numbered using Roman numerals from I through XII). Having too little of factors VIII (8) or IX (9) is what causes hemophilia. A person with hemophilia will lack only one factor, either factor VIII or factor IX, but not both. There are two major kinds of hemophilia: hemophilia A, which is a factor VIII deficiency; and hemophilia B, which is a factor IX deficiency. People with hemophilia may bruise and bleed easily, and they may bleed a lot or for a long time after an injury. Bleeding can occur anywhere in the body, such as into the joints, muscles, or digestive tract. Some people have mild disease, some moderate, and some more severe. Hemophilia is a genetic disorder, which means it's the result of a change in genes that was either inherited (passed on from parent to child) or occurred during development in the womb. Hemophilia almost always occurs in males, but in rare cases can affect females. Doctors diagnose hemophilia by performing blood tests. Although the disease can't be cured (except by a liver transplant — which sometimes can cause health problems worse than hemophilia itself), it can be managed. Patients with more serious cases of hemophilia often get regular shots of the factor that they're missing — known as clotting factor replacement therapy — to prevent bleeding episodes. The clotting factors are transfused through an intravenous (IV) line, and can be given in the hospital, at the doctor's office, or at home. All A to Z dictionary entries are regularly reviewed by KidsHealth medical experts.
Welcome to 19th Century, a blog exploring the fascinating era that shaped our modern world. In this article, we delve into the captivating world of late 19th century China, unveiling its rich cultural heritage, political turmoil, and societal transformations. Join us on this journey through time as we unravel the mysteries of an era that propelled China towards a new dawn.

Late 19th Century China: A Glimpse into the Transformation

The late 19th century in China witnessed a significant transformation. The country experienced immense political, social, and economic changes that shaped its future trajectory. One key factor behind this transformation was the clash between traditional Chinese values and Western influence. Western powers, including Britain, France, and Germany, sought to exploit China’s resources and establish spheres of influence. This sparked tensions and conflicts, leading to a series of wars and unequal treaties that undermined China’s sovereignty.

The Opium Wars (1839-1842 and 1856-1860) were particularly impactful. The British introduced opium into China, resulting in widespread addiction and social unrest. The subsequent conflict highlighted China’s military weakness and forced the country to cede territory and grant extraterritorial rights to Western powers.

China’s ruling Qing dynasty faced mounting pressure to modernize and strengthen its military capabilities. This defensive modernization movement, known as the Self-Strengthening Movement, aimed to adopt Western technologies and institutions while preserving Chinese values. However, progress was hampered by conservative factions within the government.

The late 19th century also witnessed the rise of various reform movements. Advocates for change, such as Kang Youwei and Liang Qichao, called for a complete overhaul of China’s traditional system, including adopting constitutional monarchy, promoting education, and embracing Western science and technology.

The Boxer Rebellion (1899-1901) marked a significant event during this period. The Boxers, a secret society, rebelled against foreign influence and Christian missionaries. Although ultimately suppressed by an international coalition, the rebellion exposed the deep-seated anti-foreign sentiment and grievances among certain segments of Chinese society.

Overall, the late 19th century in China was marked by a tumultuous journey towards modernization and the clash between traditional Chinese values and Western influence. This period laid the groundwork for the eventual collapse of the Qing dynasty and the subsequent establishment of the Republic of China.

[Embedded videos: “The Birth of China – Hunters on the Yellow River (20000 BCE to 7000 BCE)” and “History of China (4700-2020) Countryballs”]

What was the state of China during the 19th century?

China during the 19th century was characterized by significant changes and challenges. The Qing dynasty, which had ruled China since the mid-17th century, began to experience internal decay and external pressures. The Opium Wars with Britain in the mid-19th century resulted in China’s defeat and forced concessions that opened up its ports to foreign trade and extraterritoriality for Western powers.

Internally, China experienced social unrest and political instability. The Taiping Rebellion (1850-1864) was a massive civil war led by Hong Xiuquan, who claimed to be the younger brother of Jesus Christ. It was one of the deadliest conflicts in history, resulting in the loss of millions of lives and widespread destruction.
Another challenge for China was Western imperialism and the carving up of spheres of influence. European powers, particularly Britain, France, Germany, and Russia, competed for economic and territorial control over China. This culminated in the scramble for concessions, where foreign powers obtained leases on Chinese territory and established extraterritorial jurisdictions. The late 19th century also saw the rise of nationalism and calls for reform in China. Intellectual movements, such as the Self-Strengthening Movement, emerged with the goal of adopting Western technology and modernizing the country. However, these efforts were often hampered by conservative elements within the imperial court. Overall, the 19th century was a tumultuous period for China. The country faced external aggression, internal conflicts, and challenges to its traditional political and social structures. These events would set the stage for the eventual collapse of the Qing dynasty and the birth of the Chinese Republic in the early 20th century. What was China like in the late 1800s? In the late 1800s, China was going through a period of significant challenges and changes. The Qing Dynasty, which had ruled China for over 200 years, was facing internal turmoil and external pressures from Western powers. Internally, China was struggling with corruption, inefficiency, and social unrest. The government was plagued by corruption within its ranks, leading to widespread dissatisfaction among the population. Additionally, the rigid social hierarchy and the oppressive tax system intensified social inequalities and fueled discontent among the lower classes. Externally, China faced encroachment and aggression from Western powers seeking to expand their influence. European countries, particularly Britain, France, and Germany, were engaged in the “Scramble for China,” carving out spheres of influence and extracting economic concessions. The Opium Wars in the mid-19th century had already weakened China’s position, and by the late 1800s, foreign powers controlled major ports and had established extraterritorial rights in Chinese territory. This era also witnessed the emergence of anti-Qing rebellions and reform movements. The Taiping Rebellion (1850-1864), led by Hong Xiuquan, aimed to overthrow the Qing Dynasty and establish a utopian society. Although the rebellion was eventually suppressed, it further weakened the Qing government and highlighted its vulnerabilities. China’s response to these challenges included attempts at modernization and reform. The Self-Strengthening Movement emphasized the adoption of Western technology and military techniques while preserving traditional Chinese values. However, these efforts were often limited in scope and effectiveness, hindered by conservative factions within the Qing court and resistance to change. The late 1800s also witnessed the rise of nationalist and revolutionary movements. Intellectuals and reformists like Kang Youwei and Liang Qichao advocated for political and social reforms to strengthen China and address the country’s weaknesses. These movements laid the foundation for the eventual collapse of the Qing Dynasty and the subsequent establishment of the Republic of China in 1912. In summary, China in the late 1800s was a country grappling with internal challenges, external aggression, and social upheaval. It was a period marked by attempts at reform, resistance to change, and the seeds of revolution that would shape China’s trajectory in the 20th century. 
What events took place in China during the 1880s? The 1880s were a turbulent decade in 19th century China, marked by political instability and foreign intervention, notably by Western powers and Japan. The pressures that built up during these years culminated in the First Sino-Japanese War (1894-1895), which broke out shortly after the decade ended. It began in 1894 when Japan launched a surprise attack on China, aiming to gain control over Korea. The war exposed the weakness of the Chinese military and resulted in a decisive victory for Japan, leading to the signing of the Treaty of Shimonoseki in 1895. Under this treaty, China recognized Korea's independence and ceded Taiwan and the Liaodong Peninsula to Japan. Relations with foreign powers during the 1880s were also shaped by the earlier Tianjin Massacre of 1870. This incident occurred when a group of French Catholic missionaries and their Chinese converts were brutally murdered by a mob in Tianjin. The massacre caused international outrage, coming only a decade after the Second Opium War (1856-1860), in which Western powers had fought to protect their interests in China and secure favorable trade agreements. Furthermore, during this time, the Qing dynasty was still recovering from earlier internal challenges and rebellions. The most significant was the Taiping Rebellion (1850-1864), whose consequences were felt well into the 1880s. Led by Hong Xiuquan, the Taiping Rebellion had aimed to overthrow the Qing dynasty and create a new Christian-inspired kingdom. Although eventually suppressed by the Qing forces with substantial loss of life, the rebellion weakened the dynasty and highlighted its vulnerability. Moreover, foreign powers continued to exert their influence and extract territorial concessions in China. Throughout the 1880s they operated under earlier unequal treaties, such as the Treaty of Tientsin (1858) and the Treaty of Beijing (1860), which had expanded foreign control over Chinese territories and opened additional ports to foreign trade. Overall, the 1880s in China were characterized by foreign aggression, internal strife, and a diminishing of China's sovereignty. These events not only shaped the course of Chinese history but also set the stage for further conflicts and transformations in the 20th century. Which dynasty ruled China during the 19th century? The Qing dynasty ruled China during the 19th century. Frequently Asked Questions What were the main factors that led to the decline of the Qing Dynasty in late 19th century China? The decline of the Qing Dynasty in late 19th century China was caused by several key factors: 1. Internal Corruption and Inefficiency: The Qing Dynasty suffered from widespread corruption and bureaucratic inefficiency, which weakened the government's ability to effectively govern the country. This led to a loss of public trust and dissatisfaction among the people. 2. Economic Decline: The Qing Dynasty faced economic challenges, including increasing population pressure, limited land resources, and a growing trade deficit due to the influx of foreign goods. These factors contributed to economic stagnation and increased poverty among the population. 3. Foreign Imperialism: During the late 19th century, China experienced increased foreign imperialist aggression, particularly from Western powers. The Opium Wars, unequal treaties, and the imposition of extraterritoriality eroded China's sovereignty and undermined the authority of the Qing Dynasty. 4. Rebellions and Unrest: Various internal rebellions and uprisings posed significant threats to the Qing Dynasty.
The most notable were the Taiping Rebellion (1850-1864) and the Boxer Rebellion (1899-1901), both of which highlighted the discontent and dissatisfaction among different segments of the Chinese population. 5. Social and Cultural Changes: Traditional Chinese society underwent significant social and cultural changes during this period, leading to tensions between traditional Confucian values and the influence of Western ideas. This cultural clash further weakened the legitimacy of the Qing Dynasty. 6. Weak Leadership: The Qing Dynasty had weak and ineffective leadership during its later years. A series of emperors who were either young or disinterested in governance failed to provide strong leadership or implement meaningful reforms, exacerbating the dynasty’s decline. Overall, a combination of internal weaknesses, external pressures, and socio-cultural changes ultimately led to the decline and eventual collapse of the Qing Dynasty in the early 20th century. How did the Opium Wars impact late 19th century China and its relations with Western powers? The Opium Wars had a significant impact on late 19th century China and its relations with Western powers. The wars, which took place between 1839-1842 and 1856-1860, were fought between China and Great Britain, as well as other Western powers including France and the United States. They were primarily sparked by conflicts over trade, particularly the British trade of opium from India to China. The impact of the Opium Wars on China was profound. In the aftermath of the wars, China was forced to sign a series of unequal treaties known as the Treaty of Nanjing (1842) and the Treaty of Tientsin (1858). These treaties opened up Chinese ports to foreign trade and granted extraterritoriality to foreign citizens, effectively weakening China’s sovereignty and control over its own affairs. Additionally, China was required to pay hefty reparations to the victorious Western powers. The Opium Wars also contributed to the decline of the Qing Dynasty in China. The wars exposed the weaknesses of the Qing government, highlighting issues such as corruption and outdated military technology. This led to widespread dissatisfaction among the Chinese population, ultimately fueling revolutionary movements and calls for modernization. In terms of China’s relations with Western powers, the Opium Wars marked a turning point. They symbolized China’s subjugation to the West and established a pattern of domination and exploitation by Western powers. These conflicts set the stage for further interventions and incursions by Western countries, culminating in the colonization and division of China during the late 19th and early 20th centuries. Overall, the Opium Wars had long-lasting effects on late 19th century China. They exposed the country to Western influence and domination, contributed to the decline of the Qing Dynasty, and set the stage for further Western interventions in China’s affairs. These events continue to shape China’s relationship with the West even today. What were the major social and political reforms undertaken during the Self-Strengthening Movement in late 19th century China? In late 19th century China, the Self-Strengthening Movement was a crucial period of social and political reforms aimed at modernizing the country. One of the major reforms introduced during this movement was the promotion of Western-style industrialization and modern technology. 
Chinese officials recognized the need to strengthen their military and economic power to resist foreign aggression. As a result, they established factories, built railways, and encouraged the development of industries such as coal mining, shipbuilding, and armaments production. Another significant reform was the establishment of modern educational institutions. Chinese officials realized the importance of education in order to catch up with the Western powers. They set up schools and universities that taught subjects like science, engineering, and foreign languages. This was a step towards promoting a more educated and skilled workforce. However, despite these efforts, the Self-Strengthening Movement ultimately faced several challenges and limitations. One major setback was the lack of central government support and funding. Many officials were resistant to change and preferred to maintain traditional practices. Additionally, the movement’s focus on maintaining Chinese culture and values sometimes conflicted with the adoption of Western ideas. Nevertheless, the Self-Strengthening Movement laid the foundation for future reforms and modernization efforts in China. It highlighted the importance of industrialization and education and set a precedent for future leaders to continue pursuing these goals. While the movement may not have achieved all its objectives, it marked an important turning point in Chinese history and paved the way for subsequent reform movements in the early 20th century. In conclusion, the late 19th century in China was a time of great upheaval and transformation. The country was grappling with the pressures and influences of Western imperialism, as well as internal conflicts and social unrest. The Opium Wars and subsequent treaties imposed by Western powers had a profound impact on Chinese society, economy, and governance. Traditional systems and structures were challenged, leading to a period of intense socio-political reforms and attempts to modernize. During this time, China witnessed the rise of various revolutionary movements and political ideologies, including nationalism, Marxism, and Anarchism. Intellectuals and reformers like Liang Qichao and Kang Youwei advocated for sweeping changes to China’s traditional institutions, advocating for the adoption of Western-style political systems and social reforms. The late 19th century also saw the emergence of influential figures such as Sun Yat-sen, who played a key role in the eventual overthrow of the Qing Dynasty and the establishment of the Republic of China. These events paved the way for further transformations in the early 20th century. The late 19th century in China was a turbulent period, marked by struggles for power, clashes of ideologies, and attempts to navigate the challenges of a rapidly changing world. It laid the groundwork for the dramatic events that would unfold in the following decades, shaping the course of Chinese history.
Roses are red, violets are blue, but how do they end up growing in that old quarry down the road which used to be covered in rocks? Complex plant communities are able to appear in places that were previously barren due to the biological process of succession, without which earth would be a pretty barren rocky place. This week we will be looking at succession and how bare rock can be transformed over time into a lush forest. Curious? Read on! What is succession? The process of a biological community changing over time is known as succession. This begins with a barren lifeless area such as desert or bare rock. This could perhaps be after a volcano has erupted, spewing lava over a landscape which hardens over time, or when a glacier has retreated leaving only rocks behind. These areas have no soil or plants, yet life always finds a way, and slowly the very hardiest of plants called pioneer species start to move in. This includes things like lichens which are able to grow without soil instead inhabiting the bare rocks, and lyme grass which can grow in sand. Instead of getting it’s nutrients from the soil, the lichen extracts nutrients out of rain droplets running off the land, and out of the rock on which it sits. In this process the lichen will start to break down the rock, and some parts of the lichen may die and start to decompose. Slowly, very slowly, this process of pioneer species growing, spreading and dying will occur, often over the space of hundreds of years. Throughout this the bare rock is reduced to many smaller rocks with very old decomposed plant litter scattered about, i.e. a very basic soil. With this thin soil layer new plant species are now able to move in, including things like hardy grasses, and the types of plants you get on road verges. All these plants are pretty small, over time growing and dying too, creating a thicker soil layer. This in turn allows bigger plants to move in who need this deeper soil to survive. These bigger plants often create so much shade for the little earlier plants that they cannot survive, and so the plant community changes from tiny annuals to more established large grasses and perennials that come back each year. And so the process continues, the bigger plants grow and die, alter the soil conditions making it deeper, richer and less stony. Shrubs start to be able to survive as do shade intolerant trees such as pines, which can survive as they are the tallest plants around. Not for long though! The shrubs and pine trees have created a new taller community, and the changes they make to the soil might have changed what can grow there, once again shifting out some of the older hardy but not-very-competitive species. However, these big plants are about to be foisted out themselves. This is by the final stage in succession, where large mature forests made up of shade-tolerant trees rule. This includes trees like oak, maple and hickory, which block out the light from the shade intolerant plants making it hard for them to survive. This final, tall stable community is known as a climax community and is the end point of succession. As a quick aside- forests are usually the climax community in more humid parts of the planet, however other communities like grasslands can be the climax community in less humid environments. Why everything isn’t forests? If nature is constantly going along this process from bare rock to mature forests, why isn’t the whole planet covered in these forests? That’s because the natural world is not stable. 
Whilst over time plant communities dutifully march along the process of succession, every now and then they get knocked back. Perhaps it is dramatic- a volcano erupts completely covering a plant community in molten lava, killing it and replacing it with bare rock again once the lava cools and starting succession from scratch. Maybe it’s not so dramatic- a forest fire burns all the trees meaning the mature forest disappears and the shade intolerant plants earlier in the stage of succession are no longer overshadowed. The littler plants can survive and grow again, until slowly over time succession changes the community back into a mature forests. These events cause succession to reset to an earlier point where it must start progressing from again. Maybe shrubs and tree saplings are eaten away by grazing animals like sheep or kudu each year. This means the plant community is constantly being reset to an earlier stage and so doesn’t have time for succession to occur before it is reset again. The Scottish Highlands are a good example of this resetting- it is a region that was formerly largely forest covered, yet human felling of trees followed by widespread introduction of sheep mean that it has been shifted from a stable climax community to what is nowadays a stable intermediate community or largely grassy moors much earlier on in the process of succession. The succession which occurs from barren land such as rock and sand is known as primary succession, whilst succession that happens in an area that has received smaller scale disturbance like grazing or even a forest fire but hasn’t had the entire soil community wiped out is known as secondary succession. How do we study it? All this happens over variable timescales, but it can be really really long. Especially the earlier stages of succession can take hundreds of years. Not only do pioneer species need to arrive, often doing this by wind or being somehow dropped, moved or pooped out by an animal, they also then need to grow, die, decompose, have offspring that grow, die and decompose and so on. The later successional stages can take shorter amounts of time once soil is more established and more animals are attracted to the area carrying and dropping more seeds in the process. But so if this all happens over quite a long period of time, how do we study it? One lucky occurrence really helped scientists understand this process. This was a volcanic eruption that happened in 1963. This eruption happened on the sea floor off the southern coast of Iceland, and lasted for four years spewing out lava which formed a volcanic island. This island, named Surtsey, became a really useful model for studying succession. As a remote, uninhabited island it was not disturbed by humans and was protected by being designated a World Heritage site. Being a recent formed island scientists could track exactly when and how the plant community changed. With periodic trips to observe the island, the scientists tracked which were the first plants to arrive (mosses then lichens) and how they arrived (carried by birds who fished in the seas surrounding the island), then how long they took to grow, when and by what they were replaced and so on. By 2004 there were 60 species of vascular plants (e.g. grasses), 75 bryophytes (moss like), 71 lichens, 24 fungi and 89 different species of birds. Whilst shrubs are now present on the island it is still marching along in the process of succession and being studied and scrutinised along the way. 
The natural world is constantly changing in one way or another, whether it is the cycling of the seasons, the cycle of life in birth, death and decomposition leading to new life, or through the shifts experienced by entire plant communities in succession. As for climax communities, one could argue that there is no real 100% stable climax community, as even old oak forests are disturbed by weather, animals and humans. It also isn't really true that, because succession moves in the direction of climax communities, these are the ideal plant communities. Sure, they move in that direction, but unless you are asking a very specific question such as which plant community is better for me to grow my petunias in, one stage of succession (e.g. a mature forest) isn't necessarily inherently better or more ideal as an ecosystem than a disturbed grassy area filled with little plants. They're just different.
Asthma is a condition that affects the respiratory system. Though the exact cause of this condition is not known, it usually occurs due to allergic reactions. It results in difficulty in breathing, wheezing, and panting. This article provides information about the various causes of this condition. The term ‘asthma’ has been derived from an old Greek word which means ‘to pant’. It is basically a chronic condition which affects the air passages when they are stimulated by environmental factors or allergens that act as triggers. There are two particular ways in which the air passage respond to asthmatic triggers: 1) hyperresponsiveness and 2) inflammation. When these responses occur, it results in the typical symptoms such as coughing, wheezing, and dyspnea, or labored respiration. Hyperresponsiveness: In this condition when allergens or any other irritants are inhaled, it results in constriction of the smooth muscles. Constriction of the air passages in response to an allergen is a normal reaction that occurs in everybody; however, in asthma patients, it results in a special kind of hyperreactive response. In people who do not have asthma, when an irritant is inhaled, the air passages relax as well as open out in order to expel the irritant from the lungs. However, in those who have asthma, there is no relaxation of the air passages, and instead they become narrow, leading to panting or breathlessness. It is thought that there may be a defect in the smooth muscles of those who are afflicted with this respiratory disorder. Inflammation: Inflammation follows the hyperresponsive stage. When the air passages are subject to allergens or any other environmental triggering factors, the immune system kicks in, delivering immune factors like white blood cells to the area. These cause the air passages to become swollen, fill up with fluid, and produce a sticky, thick mucus. These combine to cause breathlessness, wheezing, the inability to inhale or exhale adequately, and a cough that produces phlegm. This inflammatory response affects everybody afflicted with asthma, even mild cases. What Exactly Causes Asthma? While what exactly causes asthma is still not fully understood, research has shown that it can be triggered by many factors such as genetics, childhood development, improper development of the immune system and the lungs, environmental factors, and various types of infections. Asthma and Genetics Scientists and doctors accept the fact that asthma is a hereditary disease. But, they have not yet identified the gene, or genes, that are involved in this condition. It is thought that the genes that are associated with asthma are linked to the immune system and the lungs. It is widely known that ‘atopic diseases’, like asthma, allergic rhinitis, and dermatitis, occur in some form or the other in families. Asthma and the Immune System Research has revealed that the immune system of adults and children who have respiratory problems responds quite differently compared to others. People who have asthma are generally allergic, and have allergic reactions to factors that cause no problems to others. The immune system of allergic people overreacts when exposed to ordinary substances like cat dander, mold, and pollen. Sometimes, the immune system could even overreact to bacteria and virus, increasing the chances of an asthma attack. Asthma and Childhood The initial months as well as years in the life of a child is a vital period during which he/she could become predisposed to developing this condition. 
This is due to abnormalities in the development and growth of the lungs. Premature babies are particularly vulnerable to respiratory diseases and infections, since their lungs are not completely developed when they are born. Sometimes, an infection could lead to inflammation, thereby injuring the tissues of the lungs. Asthma and the Environment There are several non-immunologic or non-allergic factors in the environment that can trigger the onset of asthma. When a person susceptible to asthma is exposed to these irritants, there is a higher chance of them developing full-blown asthma. These irritants include prolonged exposure to secondhand smoke, air pollution, paints, and indoor chemicals. Research is still being conducted to better understand how the above factors affect the development of allergies like asthma.
Kelp and abalone have disappeared off Shikinejima but the excess carbon dioxide in the water there offers useful lessons for aquaculture The volcanic island of Shikinejima is more than 100 kilometers (62 miles) from Tokyo. Many people know it as a paradise for scenic getaways and relaxing vacations. But what they don’t know is that the seas surrounding the island offer a glimpse into how the ocean could behave in future, not just in Japan but across the world. Over the last 10 to 20 years, species such as kelp (Laminariales) and abalone (Haliotis) have been declining significantly off the coast of Shikinejima due to ocean warming and acidification. Researchers at the University of Tsukuba are now describing the ecosystem as “simple,” which means that it’s losing biodiversity and complexity. Meanwhile, the island has long been of interest to them due to underwater volcanoes that release an unusually large amount of carbon dioxide (CO2) into surrounding waters. Researchers believe this offers great potential for studies into the effects of ocean acidification and are investigating how marine life can survive in less alkaline waters. “Shikinejima is home to what we call carbon dioxide seeps,” Dr. Sylvain Agostini of Shimoda Marine Research Center at the University of Tsukuba told the Advocate. “This means that there is CO2 bubbling out from the sea floor in the water column and its concentration is the same high concentration that we expect to see in future. Through our research, we are beginning to see how the ecosystem might develop under such high CO2 levels.” If conditions off Shikinejima become more widespread, there will be increased changes in benthos, species’ habitats and different effects on fish communities. Some species are likely to remain or increase because their preferred food may increase in acidified conditions, while others, such as species whose habitat is coral, will disappear along with the coral. There may also be direct effects on fish behavior such as their ability to detect predators, but it is currently unclear which species will be affected and how. Increased CO2 levels also cause calcium carbonate ions – important building blocks for coral and shellfish shells – to be less abundant. In the long-term, it’s possible that species such as kelp will be replaced by simpler turf algae. Meanwhile, the people of Shikinejima have seen their ecosystem change, as species such as abalone and intertidal hijiki seaweed (Sargassum fusiforme) that were once common are no longer available for consumption or sale. With many economies dependent on fish and shellfish and people worldwide relying on seafood as their primary source of protein, what do the conditions off Shikinejima signify for aquaculture? Professor Jason Hall-Spencer of the School of Marine Science and Engineering at Plymouth University in the UK and a Research Professor at the University of Tsukuba, says that what’s happening off Shikinejima is not all doom and gloom. Not only do CO2 seeps indicate what species can and can’t survive, he says, but they may also have positive implications for aquaculture. “A lot of seaweed does really well at high CO2,” he explained. “When it’s harvested, carbon dioxide is removed from the water. This helps ocean acidification while farms provide an opportunity for jobs and livelihoods. Seaweed can also help tackle eutrophication by removing harmful nutrients such as nitrates and phosphates. 
Some species, such as calcified seaweeds like coralline algae, don’t do so well in CO2 seeps but aquaculture could stimulate the growth of some of these.” Could shellfish also be farmed off Shikinejima? In recent years, oysters in the United States have failed to grow in aquaculture facilities and natural ecosystems on the West Coast. This appears to be due to naturally occurring upwelling events that lower the ocean’s pH, while anthropogenic CO2 is also having an impact. However, in the Mediterranean, where some areas have similar conditions to Shikinejima, mussels and oysters appear to do badly for another reason – a lack of food. This is in contrast to countries such as the United Kingdom or Germany, where the sea is rich in phytoplankton that oysters and mussels feed on. “Our research into the effects of ocean acidification on shellfish shows that results in the Mediterranean differ from those in the Baltic because of food availability,” said Hall-Spencer. “Oysters or mussels growing off Norway, for example, are unlikely to be damaged by ocean acidification as there is plenty of food in the water. In fact, CO2 stimulates the growth of phytoplankton while the sea off Norway also contains naturally high levels of calcium carbonate. But in the Mediterranean, where there are relatively low nutrients in the water, farms that rely on bivalve filter feeders will struggle because ocean acidification would be an extra stressor on top of a lack of food and warming seas.” Although not as high as the Baltic, the nutrient concentration off Shikinejima is not as low as the Mediterranean, according to Agostini. With the addition of CO2, some plankton, especially phytoplankton such as diatoms, could increase and be a food source for filter feeders. While research continues, Agostini and his team also believe that there is potential to work with aquaculture in an advisory capacity by identifying species that may be threatened in future and helping to develop rearing techniques for them as a safeguarding measure. It may also be possible to breed genetic strains of organisms that are more resistant to ocean acidification, according to Hall-Spencer. “It’s currently unclear how aquaculture will fare off Shikinejima,” said Agostini. “A decrease in pH may reduce the growth rate of some species or there may be less produce available to harvest. Regular feeding could help maintain a higher growth rate but if there is a lot of invasion by species such as turf algae, some biological control may be required. New challenges could arise. Having said that, CO2 seeps could be a solution to testing future aquaculture techniques.” “CO2 seeps will never attract mass aquaculture production as they are so localized and patchy,” added Hall-Spencer. “Growing food there wouldn’t be feasible, but it is possible to find out experimentally what will and won’t survive. In fact, we would encourage such experiments and we are very open to collaborating with aquaculture. They have the equipment for caging, housing and growing organisms at sea, while we have a perfect test site.” Increased ocean acidification will undoubtedly change the growth and abundance of marine organisms, affecting not only the ecosystem but also the food chain. Proposed actions to reduce its impacts include adapting human activities by reducing CO2 emissions and pollution levels, both locally and globally, said Agostini. 
By studying other CO2 seeps in Italy, Papua New Guinea, Palau and New Caledonia as well as Shikinejima, Agostini and his team are developing an international network to share observations and gain a deeper understanding of the trends and mechanisms that drive shifts in a particular ecosystem. By applying these to other ecosystems, they hope to build a more detailed picture of what the global environment will look like. "I think the [aquaculture] industry can contribute massively to securing food sustainability, removing carbon dioxide from the atmosphere and eliminating ocean acidification, only if it takes the types of organisms that help with the problem, such as seaweed and shellfish, rather than push it in the wrong direction," said Hall-Spencer.
The U.S. pulp and paper industry uses large quantities of water to produce cellulose pulp from trees. The water leaving the pulping process contains a number of organic byproducts and inorganic chemicals. To reuse the water and the chemicals, paper mills rely on steam-fed evaporators that boil up the water and separate it from the chemicals. Water separation by evaporators is effective but uses large amounts of energy. That's significant given that the United States currently is the world's second-largest producer of paper and paperboard. The country's approximately 100 paper mills are estimated to use about 0.2 quads (a quad is a quadrillion BTUs) of energy per year for water recycling, making it one of the most energy-intensive chemical processes. All industrial energy consumption in the United States in 2019 totaled 26.4 quads, according to Lawrence Livermore National Laboratory. An alternative is to deploy energy-efficient filtration membranes to recycle pulping wastewater. But conventional polymer membranes, commercially available for the past several decades, cannot withstand operation in the harsh conditions and high chemical concentrations found in pulping wastewater and many other industrial applications. Georgia Institute of Technology researchers have found a method to engineer membranes made from graphene oxide (GO), a chemically resistant material based on carbon, so they can work effectively in industrial applications. "GO has remarkable characteristics that allow water to get through it much faster than through conventional membranes," said Sankar Nair, professor, Simmons Faculty Fellow, and associate chair for Industry Outreach in the Georgia Tech School of Chemical and Biomolecular Engineering. "But a longstanding question has been how to make GO membranes work in realistic conditions with high chemical concentrations so that they could become industrially relevant." Using new fabrication techniques, the researchers can control the microstructure of GO membranes in a way that allows them to continue filtering out water effectively even at higher chemical concentrations. The research, supported by the U.S. Department of Energy-RAPID Institute, an industrial consortium of forest product companies, and Georgia Tech's Renewable Bioproducts Institute, was reported recently in the journal Nature Sustainability. Many industries that use large amounts of water in their production processes may stand to benefit from using these GO nanofiltration membranes. Nair, his colleagues Meisha Shofner and Scott Sinquefield, and their research team began this work five years ago. They knew that GO membranes had long been recognized for their great potential in desalination, but only in a lab setting. "No one had credibly demonstrated that these membranes can perform in realistic industrial water streams and operating conditions," Nair said. "New types of GO structures were needed that displayed high filtration performance and mechanical stability while retaining the excellent chemical stability associated with GO materials." To create such new structures, the team conceived the idea of sandwiching large aromatic dye molecules in between GO sheets. Researchers Zhongzhen Wang, Chen Ma, and Chunyan Xu found that these molecules strongly bound themselves to the GO sheets in multiple ways, including stacking one molecule on another.
The result was the creation of “gallery” spaces between the GO sheets, with the dye molecules acting as “pillars.” Water molecules easily filter through the narrow spaces between the pillars, while chemicals present in the water are selectively blocked based on their size and shape. The researchers could tune the membrane microstructure vertically and laterally, allowing them to control both the height of the gallery and the amount of space between the pillars. The team then tested the GO nanofiltration membranes with multiple water streams containing dissolved chemicals and showed the capability of the membranes to reject chemicals by size and shape, even at high concentrations. Ultimately, they scaled up their new GO membranes to sheets that are up to 4 feet in length and demonstrated their operation for more than 750 hours in a real feed stream derived from a paper mill. Nair expressed excitement for the potential of GO membrane nanofiltration to generate cost savings in paper mill energy usage, which could improve the industry’s sustainability. “These membranes can save the paper industry more than 30% in energy costs of water separation,” he said. Provided by: Georgia Institute of Technology More information: Zhongzhen Wang et al. Graphene oxide nanofiltration membranes for desalination under realistic conditions. Nature Sustainability (2021). DOI: 10.1038/s41893-020-00674-3 Image: Paper mills use large amounts of water in their production processes and need new methods to improve sustainability. Credit: Georgia Tech
The National Lottery was established in the Free City of Kraków (1815–1846) by the Representative Assembly in 1821. It was a source of revenue for the treasury and was leased to a private entrepreneur, Florian Straszewski, known for having contributed to the creation of the Planty Park in Kraków. The lottery was divided into class and numerical lotteries, each of which was organised according to different regulations, but with the same lottery administrator (the Directorate of the Lottery). It was supervised by the city Senate. The latter issued several laws which regulated the organisation and functioning of the Directorate of the Lottery, the points of sale, and the State Commissioners who were involved in the lottery drawings. The article also discusses the conditions of a typical contract signed with F. Straszewski and the "Plan loterii klasowej" (Plan of the Class Lottery) of 1840. The National Lottery functioned between 1822 and 1844.
In this 9th grade math worksheet, you can find 10 problems at the 9th grade level.
1. Rationalize the denominator of the expression given below and find the value of x² + y.
2. The dimensions of a rectangular metal sheet are 4 m x 3 m. The sheet is to be cut into square sheets each of side 4 cm. Find the number of squares.
3. Find the area of the shaded portion.
4. Find the value of x in the following: 3 logₓ 5 = 1.
5. Out of 45 houses in a village, 25 houses have a television and 30 houses have a radio. Find how many of them have both.
6. In a circle with center O and radius 17 cm, PQ is a chord at a distance of 8 cm from the center of the circle. Calculate the length of the chord.
7. If a + b = 2 and a² + b² = 8, find a³ + b³.
8. Find the angles x and y marked in the figure given below.
9. The ages of Atul and Anil are in the ratio 5:7. If Atul was 9 years older and Anil 9 years younger, the age of Atul would have been twice the age of Anil. Find their present ages.
10. The sum of the digits of a two-digit number is 10. If the number formed by reversing the digits is less than the original number by 36, find the required number.
Answers (as given): 3. 168 cm²; 8. 25°, 65°; 9. 15 years, 21 years.
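Several of the given answers can be verified with a short computation. The following Python sketch is illustrative only (it is not part of the original worksheet, and the variable names are mine); it checks problems 5, 6, 7, 9, and 10:

```python
import math

# Problem 5: 25 houses have a TV, 30 have a radio, 45 houses total.
# By inclusion-exclusion, houses with both = 25 + 30 - 45 = 10.
both = 25 + 30 - 45
assert both == 10

# Problem 6: chord at distance 8 cm from the centre of a circle of radius 17 cm.
# Half-chord = sqrt(17^2 - 8^2) = 15, so the chord is 30 cm long.
chord = 2 * math.sqrt(17**2 - 8**2)
assert chord == 30.0

# Problem 7: a + b = 2 and a^2 + b^2 = 8, so ab = ((a+b)^2 - (a^2+b^2)) / 2 = -2
# and a^3 + b^3 = (a+b)^3 - 3ab(a+b) = 8 + 12 = 20.
ab = (2**2 - 8) / 2
assert 2**3 - 3 * ab * 2 == 20

# Problem 9: ages 5x and 7x with 5x + 9 = 2*(7x - 9) gives x = 3, i.e. 15 and 21.
x = next(k for k in range(1, 100) if 5 * k + 9 == 2 * (7 * k - 9))
assert (5 * x, 7 * x) == (15, 21)

# Problem 10: two-digit number with digit sum 10 whose reversal is 36 smaller.
number = next(n for n in range(10, 100)
              if sum(divmod(n, 10)) == 10
              and n - (n % 10 * 10 + n // 10) == 36)
assert number == 73
```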
Spatial query is a crucial GIS capability that distinguishes GIS from other graphic information systems. It refers to the search for spatial features based on their spatial relations with other features. This article introduces a spatial query's essential components, including target feature(s), reference feature(s), and the spatial relation between them. The spatial relation is the core component in a spatial query. The document introduces the three types of spatial relations in GIS: proximity relations, topological relations, and direction relations, along with query examples to show the translation of spatial problems to spatial queries based on each type of relation. It then discusses the characteristics of the reasoning process for each type of spatial relation. Except for topological relations, the other two types of spatial relations can be measured either quantitatively as metric values or qualitatively as verbal expressions. Finally, the general approaches to carrying out spatial queries are summarized. Depending on the availability of built-in query functions and the unique nature of a query, a user can conduct the query by using built-in functions in a GIS program, writing and executing SQL statements in a spatial database, or using customized query tools. Spatial Analysis: In GIS, spatial analysis is a collective term that refers to any process that manipulates or synthesizes spatial data to explore spatial patterns or to examine spatial relationships among geographical features. It embraces a broad spectrum of spatial data techniques such as spatial queries, vector and raster GIS data handling operations, and spatial statistics. Spatial query: A search of features based on their spatial relations with other features. It is a crucial component of spatial analysis in GIS. Spatial relation: A relationship between spatial features with regard to their spatial locations and spatial arrangements. Three general categories of spatial relations have been identified in the GIS&T literature, including proximity (or distance-based) relations, topological relations (e.g., connectivity, containment, and adjacency), and direction relations. Feature: A digital representation of a geographic object (e.g., a house, a road segment, a county) or event (e.g., a traffic accident) located in space. A feature in a spatial database is represented with data of its spatial footprint and attribute information. Feature class: A collection of geographic features of the same kind. Topological relations: The type of spatial relations unaffected by bi-continuous transformation, such as stretching, shifting, rotating, or bending, of the involved spatial features. Proximity relations: Also called distance-based relations, these are spatial relations based on distances between features. Direction relations: A spatial relation based on the angular separation of one feature relative to another feature in a coordinate system. Specifically, when the angular separations are expressed verbally as cardinal directions such as north and south, they are also called cardinal direction relations. 2.1 What is a spatial query? Spatial queries are a critically important type of spatial analysis. A spatial query selects spatial features based on their spatial relationships to other features and is used to answer spatial questions.
For instance, a researcher needs to identify crime sites in a study area, and another person tries to find locations of all traffic accidents along some pre-defined roads. These spatial questions can be translated into respective spatial queries. Here spatial queries can be used as the sole spatial analysis method to answer these spatial questions. In addition, spatial queries can also be a constituent part of multi-step spatial analysis. For explanation, we first define the critical components in a spatial query. The collection of candidate spatial features to be selected from are termed target features, while the spatial features used as reference locations are called reference features. For example, in the query “find buildings in census tract A,” all buildings in the study area are target features, and Tract A is the reference feature. The third component is the spatial relation(s) between the target and reference features. Depending on the reference feature type, a spatial query may involve one or more GIS feature classes. The following are three possible scenarios. - The reference feature(s) and target features are of the same type and stored in the same GIS files. In this case, only a single GIS feature class is needed. An example query is “which cities are within 200 miles of Atlanta.” Here the reference feature is the pre-defined or pre-selected city (Atlanta), and the target features are also cities. - The reference feature(s) and target features are of different types. In this situation, two feature classes are involved in the spatial query. The abovementioned query “find buildings in census tract A” is an example of this scenario. The two feature classes are the census tract and the buildings. - Reference location is created on the fly. Sometimes, a user may wish to conduct a spatial query interactively to select features by a reference location entered on the fly. The interactive spatial queries are particularly popular in web-based GIS services. In this situation, the spatial query only requires target features to be provided in advance. For instance, the USGS national map viewer is a web service for viewing and downloading GIS data. The service provides GUI tools for users to define a selection boundary by interactively drawing a polygon, a rectangle, or a circle. While the target features and reference features are necessary, a query’s critical component is the spatial relation between the two sets of features. Ultimately, the query results are the subset of target features that satisfy the spatial relation. It is demonstrated by the equation below where SR refers to a spatial relation. Query results = target features [SR] reference features 2.2 Spatial Relations and Spatial Queries Three types of spatial relations have been studied and have received considerable research attention in the GIS&T literature: proximity relations, topological relations, and directional relations. 2.2.1 Proximity relations Proximity relations are distance-based and are also referred to as distance relations. A proximity relation can be expressed either quantitatively as metric distances or qualitatively as verbal descriptions such as near or far. A GIS software program typically has powerful built-in capabilities to calculate various types of quantitative distance measures. In spatial queries, the most commonly used are Euclidean distances and distances in a connected network. Table 1 provides a real-world query example for the corresponding distance measure. 
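As a rough illustration of how a quantitative proximity query looks in code, here is a minimal sketch using the Shapely Python library (my choice of tool; the article does not prescribe any particular software, and the coordinates and threshold are invented). It keeps the target features whose Euclidean distance to a reference feature falls within a threshold, the same pattern that underlies QE1 below.

```python
from shapely.geometry import LineString, Point

# Reference feature: a hypothetical highway segment.
highway = LineString([(0, 0), (100, 0)])

# Target features: building locations with invented coordinates.
buildings = {"b1": Point(10, 30), "b2": Point(50, 5), "b3": Point(80, 90)}

# Proximity relation: Euclidean distance within a threshold (in map units).
threshold = 25
near_highway = {name: pt for name, pt in buildings.items()
                if pt.distance(highway) <= threshold}
print(sorted(near_highway))   # ['b2']
```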
QE1 (query example 1) searches for buildings in a proximal area exposed to noise hazards from a state highway. It adopts the Euclidean distance to search for buildings within 1 mile of the highway segment. In QE2, the concern is about the travel distances to healthcare facilities. Qualitative expressions of proximity are often needed in spatial queries in everyday life. For example, QE3 inquires about hotels near a conference venue in Chicago. Not many GIS programs currently support spatial queries with qualitative proximity relations, although theoretical discussions and modeling strategies are available in the literature. One approach is to establish fuzzy mapping mechanisms between qualitative and quantitative measures, contingent upon context variables (Yao & Thill 2005; 2006). Also, some online GIS services and open-source tools are available to provide spatial search capability with qualitative distances. 2.2.2 Topological Relations Topology is a branch of study in mathematics. It studies the characteristics of spatial relations that are invariant under bi-continuous transformations such as stretching, shifting, rotating, or bending. Adjacency, connectivity, and containment are typical examples of topological relations. A naïve view of topology sees the relations as geometry on a rubber sheet, as topological relations between two spatial features on a rubber sheet are preserved even when the sheet is stretched, shifted, rotated, or bent. A large body of research has focused on formalizing and reasoning about topological relations, ranging from the point-set theory (Egenhofer and Franzosa 1991), the intersection model (Egenhofer and Franzosa 1991) and its extensions, to the Region Connection Calculus (Randell et al. 1992) and its extensions (e.g., Cohn and Gotts 1996). Depending on the two involved features' geometric types, different sets of possible topological relations may exist between them. Table 2 illustrates some common topological relations, cross-tabulated by the geometric type of the reference feature(s) and that of the target feature(s) in a spatial query. It is far from an exhaustive enumeration of topological relations. Many other nuanced variations exist, and different vocabulary may be used to describe identical or similar relations. For instance, Egenhofer (1991) discussed more English terms that express topological relations. Table 2. A Classification of Some Common Topological Relations Between Two Spatial Features Spatial queries can be based on a variety of topological relations (Table 3). In QE4, a county has multiple internet service providers, and the query is to find which public office locations can be served by a specific provider MP. The polygons in blue are the service areas of MP, which are reference features. The target features are all the point locations of public services and offices. This spatial problem can be translated into the "Contained_by" topological relation between the target features and the reference features. The final query results are shown in red. QE5 can be translated into the "intersect" relation between the reference and target features. QE6 is a query based on the adjacency topological relation.
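To connect these examples to code, here is a hedged sketch of QE4- and QE6-style topological tests using Shapely predicates (again my choice of library, not one mandated by the article; the geometries are invented, and real service areas or parcels would come from a GIS dataset):

```python
from shapely.geometry import Point, Polygon

# Reference feature for a QE4-style query: a provider's hypothetical service area.
service_area = Polygon([(0, 0), (40, 0), (40, 40), (0, 40)])

# Target features: public office locations (invented coordinates).
offices = {"o1": Point(10, 10), "o2": Point(50, 50), "o3": Point(30, 35)}

# "Contained_by" relation: offices that fall inside the service area.
served = [name for name, pt in offices.items() if pt.within(service_area)]
print(sorted(served))   # ['o1', 'o3']

# Adjacency (QE6 style): two parcels that share a boundary but do not overlap.
parcel_a = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
parcel_b = Polygon([(10, 0), (20, 0), (20, 10), (10, 10)])
print(parcel_a.touches(parcel_b))   # True
```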
2.2.3 Direction Relations Direction relations are based on the angular separation between two spatial objects, as viewed from the reference point. Just like proximity relations, a direction relation can be expressed either qualitatively or quantitatively. A quantitative measure of the direction from a reference feature to a target feature is relatively easy to calculate in GIS. In Figure 1, the direction-based spatial query is to find buildings in the study area that lie downwind from a reference feature (QE7). Different reasoning models may be possible. In this illustrated example, a hypothetical parallelogram is created along the wind direction. The query results would include all the buildings that are entirely within or intersect with the parallelogram. Figure 1. A direction-based search from a reference feature. Source: author. Compared with quantitative directions, qualitative direction measures are used more often. They are also referred to as cardinal directions such as north, south, east, west, southeast, southwest, northeast, and northwest, which are defined by a look-up table indicating the corresponding range of angles for each direction. These cardinal directions are not directly understandable by GISs. Modeling direction relations in a computer system has attracted much research attention in the past decades. The earlier frameworks, such as the cone-shaped (or triangular) model (Peuquet and Zhang 1987) and the projection-based model (Frank 1996), have laid the foundation for more recent extensions. Figure 2(a) illustrates the framework of the cone-based model. Figure 2(b) is an application example of implementing the model for spatial queries. In QE8, the query investigates cabins (target features) to the south of the lake, the reference feature. From the reference feature's geometric center, the model partitions the surrounding geographic space into eight sectors corresponding to the eight cardinal directions. The target features in the S sector are the query results. Figure 2. Cone-based model (adapted from Frank 1996) and its application to answer a query example (QE8: "which cabins are to the south of the lake?"). Source: author. The projection-based model is another influential framework. As shown in Figure 3(a), the projection-based model singles out a central area, which can be the bounding box of the reference feature, and partitions the outside areas into eight regular direction tiles corresponding to the eight cardinal directions. Figure 3. Projection-based model (adapted from Frank 1996). Source: author.
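As a simplified sketch of the cone-based idea (not the full direction-relation matrix model discussed next), the snippet below computes the bearing from the reference feature's centre to each target point and bins it into one of eight 45° cardinal sectors. The geometry, the names, and the sector convention are my own assumptions for illustration.

```python
import math

def cardinal_direction(ref, target):
    """Bin the bearing from ref to target (both (x, y) tuples) into 8 sectors."""
    dx, dy = target[0] - ref[0], target[1] - ref[1]
    # Bearing measured clockwise from north, in degrees within [0, 360).
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    sectors = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    # Each sector spans 45 degrees, centred on its cardinal direction.
    return sectors[int(((bearing + 22.5) % 360) // 45)]

lake_centre = (0, 0)
cabins = {"c1": (2, -10), "c2": (8, 8), "c3": (-5, 0)}

# QE8 style: which cabins are to the south of the lake?
south_cabins = [name for name, xy in cabins.items()
                if cardinal_direction(lake_centre, xy) == "S"]
print(south_cabins)   # ['c1']
```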
Building on the projection-based framework, some spatial analytical models have been further developed to deal with more complex situations or to make the process more computationally tractable. Among them, the direction relation matrix (DRM) model is a widely adopted example. The DRM model (Goyal & Egenhofer 2001) formalizes the reasoning process by defining a direction relation with a matrix, shown in Figure 4. If an area is considered a set of all points within that area, the areas in the matrix refer to those point sets illustrated in the map of Figure 4. The set intersection operation of two sets, denoted as ∩, produces the subset of points that are in both sets. The model can deal with more complex situations, for instance, when a target feature crosses multiple direction tiles. Figure 4. Illustration of point sets and the definition equation for the direction-relation matrix (adapted from Goyal & Egenhofer, 2001). Source: author. 2.2.4 Spatial Queries based on Multiple Spatial Relations A spatial query does not have to be limited to one spatial relation only. It is not rare to find a query based on a combination of multiple spatial relations. This may happen for several reasons; discussed here are two common ones. First, it may be due to the nature of the query problem. For instance, QE7 and QE8 might need to be modified in the real world to find houses or cabins within certain threshold distances. The modified queries would combine a proximity relation and a direction relation. Second, multiple spatial relations are sometimes necessary for practical considerations of precision or other data quality issues. For example, a user wants to find all traffic accidents on a specific highway. This can be translated into a spatial query based on the topology relation "touch" between a point and a line feature, as listed in Table 2. However, due to precision and accuracy reasons, many qualifying accident locations would be missed if only the "touch" topological relation is considered. The problem can be resolved by modifying the query to include all accident locations within a threshold distance to the line feature. The modified query combines topology and distance relations. As discussed above, spatial reasoning frameworks and analysis models have been developed for each type of spatial relation. Although some of them are integral parts of popular GIS software programs, not all of them have been developed into software tools in the GIS programs. Depending on the availability of functions and tools in off-the-shelf GIS software, there are generally three approaches to carrying out a spatial query. The most popular way is to use inherent spatial query functions in a GIS program. The second is to run SQL statements in GIS or any general-purpose spatial database management system. The last approach is to develop customized tools for queries. While each method has its advantages and disadvantages, the good news is that their strengths are complementary. - Spatial queries with innate functions in GIS software. As the spatial query capability is crucial for GIS, almost all GIS software programs have at least some built-in spatial query functions available from the user interface. Currently, all popular GIS programs have innate functions for distance-based queries, except for qualitative distance. Many of them have built-in capabilities to answer spatial queries based on topological relations, and some of them can handle those based on a combination of two spatial relations. Results are returned on-the-fly for interpretation and further processing. This is the most straightforward and most widely used approach. The limitation of this approach lies in the constraints of existing functions provided by the software in use. - Spatial queries with structured query language (SQL).
Because a spatial query is about selecting features based on spatial relations, it can be expressed as SQL statement(s) by translating the query components into search criteria. The execution of the SQL statement returns the spatial features that satisfy the search criteria. This can be conducted in any spatial database management system that supports SQL. Some GIS software provides an interface for SQL expressions. For instance, ArcGIS provides query-building dialog tools that allow users to build SQL statements. Likewise, these statements can also be constructed and executed in other database management systems, such as PostgreSQL and Oracle. Performing a query using SQL statements allows for more flexibility. Compared with the approach using built-in GIS functions, the SQL statements method leaves room for user-designed search criteria and works in a broader selection of software environments. However, writing native SQL queries lacks the interactivity and convenience provided by the built-in functions. - Spatial queries with customized tools. The first two approaches are sufficient for the needs of most spatial queries. But in some rare cases, when a unique model is needed or a particular type of query is asked, neither of the previous two approaches may be helpful. The third approach, developing customized software tools, is the solution in this situation. The tools can be loaded as add-in functions to existing GIS software or built as stand-alone packages. Some developed tools for specific types of queries have been shared in open-source repositories such as GitHub for interested people to use. This approach is the most effort-intensive and requires programming skills. Thus it is the most challenging approach if one has to start from scratch. The tradeoff is that this approach does provide the best flexibility and therefore is most suitable when a high level of customization is needed. Cohn, A.G. and Gotts, N.M. (1996). The 'egg-yolk' representation of regions with indeterminate boundaries. In Geographic Objects with Indeterminate Boundaries, ed. P.A. Burrough and A.U. Frank, pp. 171-187. Bristol, PA: Taylor & Francis. Egenhofer, M.J. and Franzosa, R. (1991). Point-set Topological Relations. International Journal of Geographical Information Systems, 5(2): 161-174. DOI: 10.1080/02693799108927841 Frank, A.U. (1996). Qualitative Spatial Reasoning: Cardinal Directions as an Example. International Journal of Geographical Information Systems, 10(3): 269-290. DOI: 10.1080/02693799608902079 Goyal, R.K. and Egenhofer, M.J. (2001). Similarity of Cardinal Directions. In: Jensen C.S., Schneider M., Seeger B., Tsotras V.J. (eds) Advances in Spatial and Temporal Databases. SSTD 2001. Lecture Notes in Computer Science, vol. 2121. Springer, Berlin, Heidelberg. Peuquet, D. and Zhang, C.X. (1987). An algorithm to determine the directional relationship between arbitrarily-shaped polygons in the plane. Pattern Recognition, 20(1): 65-74. DOI: 10.1016/0031-3203(87)90018-5 Randell, D.A., Cui, Z., and Cohn, A.G. (1992). A Spatial Logic Based on Regions and Connection. In Proc. 3rd International Conference on Knowledge Representation and Reasoning, pp. 165-176. Yao, X. and Thill, J.C. (2005). How far is too far – a statistical approach to proximity modeling. Transactions in GIS, 9(2): 157-178. DOI: 10.1111/j.1467-9671.2005.00211.x Yao, X. and Thill, J.C. (2006). Spatial queries with qualitative locations in spatial information systems. Computers, Environment and Urban Systems, 30(4): 485-502.
DOI: 10.1016/j.compenvurbsys.2004.08.001 - Describe and differentiate between the components of a spatial query. - Explain the three general types of spatial relations. - Translate spatial problems into spatial queries when appropriate. - Differentiate between the general approaches to carrying out spatial queries and identify the most suitable approach(es) in a specific situation. - Which of the following is not a spatial relation that can be used in spatial queries? - Distance-based relation - Spatial autocorrelation - Topological relation - Direction relation - There is a GIS dataset of points of interest (POIs) in a region, and you would like to select only those located within a pre-defined study area. How will you translate it into a spatial query? - As illustrated in this figure, below, all point features are government office or service locations. A county government project starts from finding the locations within 2 miles of state highways in that county. What type(s) of spatial relations will you need to use for this spatial query? - Read each of the following statements and decide whether it is true or false. - Distance-based spatial relations, or proximity relations, can be expressed either quantitatively or qualitatively. - Topological spatial relations can be expressed either quantitatively or qualitatively. - Direction relations can be expressed either qualitatively or quantitatively.
Grade School Science Classes Grade school science classes teach about the Earth and the solar system. They also teach about simple machines, chemical reactions, and the properties of air, water, and minerals. These classes are very different from research science, but they still involve hands-on activities and science projects. Grade school science is usually taught using a curriculum framework that varies from state to state. The framework may include teaching resource materials, grade-level learning benchmarks, or approaches to learning. Several states require that subject areas be taught in specific grade bands. Unlike traditional research science, grade school science classes have many hands-on activities, and some teachers are encouraged to make use of technology to promote discussion. Students may work with live marine animals, build an aquarium, and run experiments. The sixth grade science curriculum combines reading, writing, and basic biology. Students study human influence on the environment and explore the consequences of climate change. In addition, they study basic physics, including wave motion, and the biological systems in the body. Fifth grade lessons focus on Earth's natural resources, as well as plate tectonics, volcanoes, and landforms. Unit tests and lesson plans are available for anyone. Second grade science lessons focus on the history of the Earth. Pupils also learn about the solar system and the life cycle of plants. During this time, pupils become fluent readers and writers. Third grade science lessons cover the solar system, the sun, the moon, and the planets. During this time, pupils conduct scientific experiments and write detailed stories about the solar system. By fourth grade, students have expanded their vocabulary and can write more complex compositions.
When two different metals come in contact with each other in the presence of an electrolyte (e.g. water), they form a galvanic cell in which the less noble metal (e.g. Zn) corrodes in favour of the more noble metal (e.g. steel). This electrochemical reaction is the basis for the complex field that is cathodic protection. Galvanic or cathodic protection, also called active protection, arises from zinc (the anode) sacrificing itself in favour of the base metal, steel (the cathode), with the resulting flow of electrons preventing steel corrosion. In this way the protection of the metal is guaranteed, even when the zinc layer is slightly damaged. Other well-established methods of cathodic protection include hot-dip galvanising (HDG) and zinc thermal spraying, both of which exhibit a constant sacrificial rate of the zinc layer. With ZINGA the sacrificial rate reduces dramatically after the zinc layer has oxidised and the natural porosity has been filled with zinc salts. Additionally, the zinc particles within the ZINGA layer are protected by the organic binder without adversely affecting the electrical conductivity. This enables ZINGA to create nearly the same galvanic potential between the zinc and the steel as hot-dip galvanising, but with a lower rate of zinc loss because, put simply, the binder acts as a “corrosion inhibitor” for the zinc.
Have you ever stared up at the night sky and thought about becoming an astronomer? It seems kind of exciting to think about a career observing and studying the planets, stars, and other things that make up our universe. Plus, you'd get to use all sorts of cutting-edge technology, like space telescopes. You would probably soon learn a curious fact, though. Despite the WONDERs of modern technology, in some ways it was easier to be an astronomer hundreds of years ago. But how could that be? Hundreds of years ago, technology was primitive. Back then, our understanding of the universe was nothing like it is today. And yet ancient astronomers had a huge advantage: they could see the stars in the night sky. If you go outside after dark, you can probably see many stars on a clear night. But can you see the Milky Way galaxy arching across the night sky? In most places, the answer is probably not. The truth is that most of us can't see the Milky Way or even a fraction of the stars our ancestors could thanks to light pollution. It has several components, including skyglow (brighter night skies over cities), light trespass (the presence of light where it's not needed), glare (brightness that hurts the eyes), and clutter (groups of light sources that create unnecessary brightness and confusion). The sources of light pollution are easy to see. Just go outside after dark and look around at all the lights you see: street lights, exterior building lights, advertising signs, factories, to name a few. While many of these lights seem necessary, many are not. Poor lighting design often results in light directed outward and upward, when it really only needs to be directed downward in a narrow area. Outdoor lights also tend to be brighter than necessary, wasting energy in the process. The result of the proliferation of artificial lighting is that our nighttime skies glow, particularly around cities. A recent scientific study estimated that 80% of the world's population lives under skyglow. You've probably lived with skyglow your entire life. Most people don't even know what they're missing by not being able to see a dark, star-filled sky. Astronomers know the difference, though. Light pollution continues to be a problem for modern astronomers, who must search diligently for the darkest places on Earth to set up their instruments. If you're not into astronomy, light pollution might not seem like a big deal. Scientists, however, have begun to point out other negative effects of light pollution. For example, inefficient, unnecessary lighting needlessly wastes precious energy. Scientists also believe light pollution disrupts nature and affects human health by disrupting the natural cycle of day and night. The effects of light pollution on nocturnal creatures, for example, are only beginning to be understood. Scientists believe light pollution could affect feeding, migration, and reproduction in many species.
You can play Addition War or Subtraction War card games for fun practice of addition and subtraction math facts. You can have them help you measure ingredients when you are cooking. Use the terms halves and quarters. You can talk about how long things take to do (time): cooking dinner, brushing teeth, watching a TV show, driving to the store, driving to another state, reading a chapter book, etc. Use the words: seconds, minutes, hours, and days. Math Papers: Coming home there will be papers that say "Problem Set" and papers that say "Homework". Problem Set papers usually have been partially completed in class. First Grade does not officially assign math homework; however, these homework pages can be done at home for practice, but they are NOT required to be returned. There may also be pages labeled "Exit Ticket". If this page is not completed it can also be done at home, but again it is not required. If you choose to have your child do the homework or exit ticket, the problem set paper for the same lesson has similar types of problems that can be used to explain the other pages' problems. Any template pages can be thrown away. Sprint/Fluency pages are timed in class for 90 seconds. Most likely the page will not be complete. The Exit Ticket sheet is used to create quizzes. If possible, have your child do the exit ticket as practice for their quizzes. All of these pages are optional. If you have questions on the math, email me your question or write a note on the math paper. Homework: Your child should be reading, or be read to, 20 to 30 minutes every day. This can be what was sent home to read, magazines, comics, books, letters, the stories from the previous reading book "My Book", phonetic stories, etc. If you are in need of books for your child to read, please let me know and I will send some home with your child. They can exchange them each day or week. Your child will also bring home their reading books, "My Book", as we finish them in class. Read the stories in the book. Most of the worksheets that we do in class have a home connection activity at the bottom of the paper. You can do any of these with your child if you would like. If you find papers that are not complete, go over what isn't finished with them. You can ask your child how to spell words as you are driving in the car or making dinner. You can ask your child to write lists for you or write a sentence about their day. Coloring books and working with clay improve fine motor skills that help with penmanship.
Part of creating bully-free, safe and positive learning environments has a lot to do with teaching kindness. Here’s a list of lessons and activities that will promote kindness in your classroom. CSI Private Eye is a series of fascinating true stories that students interact with like a video game, but gets them reading, writing and problem-solving along the way. Find out why it’s the ultimate digital resource for your classroom. Visualising is the reading strategy that helps your students create a picture in their head of what they’re reading, almost as if your students are making videos or movies in their heads, all built from their background knowledge, their imagination, and the content of the text. Here are some tips for teaching students to visualise. Too often “teaching” comprehension often involves students reading a text and answering a set of questions about it. It should be the other way around – to be a proficient reader, students need to ask questions. Here’s how to teach your students to ask questions. Familiar with the phrase “reading between the lines”? Drawing inferences from the text is like reading between the lines – it’s looking for the meaning that isn’t written down word-by-word. Here are some ways you can help students develop this important reading strategy. Making connections is a reading strategy that helps students find meaning in a text by connecting it to their own life. It's particularly important for English language learners. Here are some ways to teach students to make connections in their reading. Lynlee Lawrence is a head teacher at Hutt International Boys School in Wellington, New Zealand. Lynlee says CSI Private Eye is “the best online resource I've come across”. We sat down with her to find out why.
Last updated on November 19, 2018 at 17:16 Immunohistochemistry uses the antigen-antibody interaction to detect a specific antigen in a tissue. We add an antibody that has an enzyme attached to it to the tissue. We also add the substrate for this enzyme and an inactive color molecule called the chromogen. When the antibody binds its antigen in the tissue, the enzyme will convert the enzyme substrate into a different product. This product will then activate the chromogen, making it colorful. This color reaction can be seen under the microscope and can be used to tell whether a certain type of cancer has a specific antigen or not, which can be important for the treatment. It results in images like these. Both slides are of a breast cancer. The upper cancer is positive for an antigen called HER2, while the lower cancer is negative for the same antigen. The treatments will therefore be different. A related technique is immunofluorescence. In this technique antibodies are also used, and they are also attached to something. In this case, however, the antibodies are attached to a type of molecule called a fluorochrome and not an enzyme. A fluorochrome is a molecule that emits a specific color when a specific wavelength of light is shined upon it, which is done by a special microscope called a fluorescent microscope. It results in images like this (after photoshop): Cell surface markers
Back to the Table of Contents A Review of Basic Geometry - Lesson 3 Angles, and More Lines Astronomical events dictate the day, lunar month, and year. When man wanted to subdivide a day, the sundial proved useful. Here you will find geometry activities with a sundial theme. There are many variations on the sundial theme. However, one vocabulary word from this is gnomon, the part of the sundial which casts the shadow. For a horizontal sundial, what is the appropriate angle this makes with the horizon? A visit here might be useful. - Sundials, Gnomons, Eclipses, and Transits - Angles: Basic, in Pairs, In Relative Positions, From Trigonometry (reference, central, inscribed) - Assumptions: Valid and Invalid - Algebraic Properties - Lines: Parallel and Perpendicular - Proof Arguments: why, paragraph, and two column - Rules for Constructing, Drawing, Sketching - Mazes and Labyrinths Another study of angles involves astronomical events such as eclipses and transits. The student is assumed to be familiar with such events, especially the pair of Venus transits in June 2004 and 2012. These are rare enough that no one alive has seen one. An angle is the union of two rays with the same endpoint. The rays forming an angle are its sides. The vertex is the common endpoint of the two rays. The symbol for angle (∠) can easily be confused with the symbol for less than (<). One important classification of angles is based on their angle measure. Angles are usually measured in either degrees or radians. (See also an introduction in Numbers Lesson 11.) Occasionally we see references to grade (1 grade = 0.9°). Although some (our text) omit the degree symbol, we don't expect you ever to. There are 360° or 2π radians in a complete revolution or circle. It is important to differentiate between an angle (a set of points) and its measure, denoted m∠. A zero angle is one where the two rays share the same points and coincide. They thus appear to be a single ray. It is so named because of its zero angle measure. An acute angle is one whose measure is strictly between zero and 90°. A right angle is one whose measure is exactly 90°. On a drawing, a small square at the vertex can be used to label two rays or segments as perpendicular (at right angles). In symbols we write that one line or segment is perpendicular to another using the symbol ⊥. An obtuse angle is one whose measure is strictly between 90° and 180°. A straight angle is one in which both rays are opposite and form a [straight] line. Its measure is 180°. Not all geometries accept zero and straight angles as angles, often specifying that the component rays must be nonidentical and nonopposite. Two more angle terms are reflex angle, for angles between 180° and 360°, and full angle, for 360° (one revolution). Angles can sometimes be named by their vertex alone, but since more than one angle often shares a vertex, this "nickname" can be ambiguous and a full specification using three points (∠BAC) is thus necessary. At other times angles may be designated via a number (see below). Angles often come in pairs which might have special names depending on their relative size or location. Adjacent angles are angles which share a side or ray. A pair of complementary angles sum to 90°. A pair of supplementary angles sum to 180°. A linear pair of angles are both supplementary and adjacent. Vertical angles share a vertex and are formed by extending each ray through the vertex. The letter X provides some good examples. The angle opening upward and the angle opening downward are vertical angles, as are the pair opening left and right. Vertical angles have an easily proved property given below.
Vertical angles are equal. You can prove this property by noting that these two angles are supplementary to the same angle. Angles separate their plane into three regions: the angle; the region inside or interior to the angle; and the region outside or exterior to the angle. For angles of measure between 0° and 180°, the interior is the convex set. A ray which extends from the vertex of an angle through the interior such that it divides the angle into two congruent (same shape and measure) angles is called an angle bisector. Bisect means to cut into two equal pieces. Angles are also formed when a transversal intersects a pair of lines, which are not necessarily parallel. A prototypical example would be the not-equal symbol: ≠. Angles which are in the same relative position but on the other line are called corresponding angles (angles 1 and 5 in the figure below right). The word alternate is often used to indicate things on opposite sides, like some leaves on a stem. In this context it means on opposite sides of the transversal. The word interior refers to the region between the lines, whereas the word exterior refers to the regions outside the lines. We refer to those angles between the lines as interior angles (angles 3, 4, 5 and 6) and those outside the lines as exterior angles (angles 1, 2, 7 and 8). (Note also below the use of interior/exterior angles as applied to polygons.) Thus when we have a transversal cutting two lines (not necessarily parallel) we have alternate interior angles (angles 4 and 6 below right) and alternate exterior angles (angles 1 and 7 below right). If the pair of lines is parallel, we have the following important properties. Two lines cut by a transversal are parallel if and only if: - Alternate interior angles are equal in measure; or - Alternate exterior angles are equal in measure; or - Corresponding angles are equal in measure. The angle between two sides of a polygon is an interior angle, whereas the angle formed by one side and extending the other side of an angle in a polygon is an exterior angle. They form a linear pair. Examples of interior angles would be those labelled x and 60° in the figure left. The angle labelled (2x+4)° is an exterior angle. We can also differentiate between the interior and exterior of a polygon by noting that your right side is toward the interior as you travel clockwise around a closed figure, or on your left side as you travel counter-clockwise (or anti-clockwise, as they say on the other side of the "pond"). There are also angles which are especially important in trigonometry. First, in trigonometry the angle vertex is placed at the origin of a coordinate system with one ray coincident with the x-axis. Angles are then measured as positive if measured counter-clockwise from the x-axis or as negative if measured clockwise from the x-axis. This is called standard position. Our geometry differentiates between rotation and angle. Thus in trigonometry angles can be more than 180° or less than 0°, whereas in geometry only rotations can be. The reference angle is then the acute (or right) angle between the ray and the x-axis. It is always between 0° and 90° inclusive. An angle in standard position is a quadrantal angle whenever its terminal side lies on an axis. Quadrantal angles are thus integer multiples of 90°. Coterminal angles are angles which share the same sides, such as 120° and -240° or 90° and 450°. Coterminal angles differ by an integral multiple of 360° or 2π radians.
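Because coterminal and reference angles are defined by simple arithmetic on degree measures, they are easy to compute. The short sketch below uses the angles mentioned above; any other degree measures would work the same way.

```python
# Small illustrative sketch of coterminal and reference angles (degree measures).
def coterminal(angle):
    """Reduce an angle in degrees to the coterminal angle in [0, 360)."""
    return angle % 360

def reference_angle(angle):
    """Acute (or right) angle between the terminal side and the x-axis."""
    a = coterminal(angle) % 180
    return min(a, 180 - a)

for a in (120, -240, 250, 450):
    print(a, "-> coterminal:", coterminal(a), " reference:", reference_angle(a))
# 120 and -240 reduce to the same angle (coterminal); the reference angle of 250 deg is 70 deg.
```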
Angles inside circles are either central angles if their vertex is the center of the circle, or inscribed angles if their vertex is on the circle. (We assume each side includes a point on the circle other than the vertex.) A central angle intercepts a portion of the circumference of the circle termed an arc. An arc is a minor arc if its central angle is less than 180°. An arc is a major arc if its central angle is more than 180°. If an arc is exactly 180° it is termed a semicircle. Minor arcs are specified using two points (letters) under a curved line, whereas semicircles and major arcs are specified using three points. There is the following relationship between a central angle and its corresponding inscribed angle. An inscribed angle is half the measure of the central angle intercepting the same arc. If you know the measure of a central angle, you can also calculate its arc length: the proportion of this measure over 360° equals the proportion of the unknown arc length over the circumference. In Geometry, we never take anything for granted, but build knowledge by proving everything step by step. This tradition dates back to the ancient Greek Pythagorean school and was codified many years later by Euclid in a 13 volume set of books called Elements. Proofs often go awry because steps are not done in a proper order or steps are omitted. Another common mistake is to make improper assumptions or not properly support assumptions. Please become familiar with the following lists! The following items can be assumed from figures: - collinearity and betweenness of points on lines; - intersection of lines at a given point; and - points interior to, on, or exterior to an angle, n-gon, or circle. The following items cannot be assumed from figures: - collinearity of points not on lines; - whether or not lines are parallel or perpendicular, angles are acute or obtuse; - angle measure, and segment length, especially congruence of such. Always remember: Geometric diagrams are not drawn to scale. Although we presented many axioms about the real numbers previously in Numbers Lesson 13, we will review and summarize them again here since they apply to our geometry. Postulates of Equalities, Inequality, and Operations: - Reflexive: x = x. - Symmetric: If x = y, then y = x. - Transitive: If x = y and y = z, then x = z. - Addition Property of Equality: If x = y, then x + z = y + z. - Multiplication Property of Equality: If x = y, then x × z = y × z. - Equality to Inequality Property: If x and y are > 0 and x + y = z, then z > x and z > y. - Substitution Property: If x = y, then x may be substituted for y anywhere. An easy way to differentiate among the reflexive, symmetric, and transitive properties is to relate the alphabetic sequence RST with the numeric sequence 123 and note the number of variables used in each definition. Subtraction was defined via additive inverses and division was likewise defined via multiplicative inverses (Numbers Lesson 2). Hence the Addition and Multiplication properties above allow those operations as well. Also, similar properties hold for inequalities (< and >), but only for the Transitive, Addition, and Multiplication (for z > 0) postulates above. The next to last property given above is very similar to the triangle inequality. Mastering algebra is an underlying theme of geometry, as are building vocabulary, developing important concepts, and sharpening critical thinking.
Graphing lines and inequalities on the coordinate plane, finding slope, and determining parallelism or perpendicularity are a major venue for developing this mathematical prowess. We presented before (Numbers Lesson 12) the definition of slope and various forms of linear equations, especially y = mx + b, where m is the slope and b is the y-intercept. If two lines are parallel, then their slopes are equal. If two lines are perpendicular, then the product of their slopes is -1. Numbers whose product is -1 are termed negative reciprocals. If the slope of one line is -x and the slope of another is 1/x, then -x × 1/x = -1. Perpendicular lines, of course, always form 90° angles. Other terms for perpendicular are: orthogonal, normal, and, of course, right. The transitivity property may be used to show two lines parallel to a third line are parallel to each other. This is often termed the Transitivity of Parallelism Theorem. If two lines are perpendicular to the same line, then the two lines are parallel to each other, if they are coplanar. If a line is perpendicular to a line that is parallel to another, then the first line is perpendicular to the other two, again if they are all coplanar. The Euclidean or Plane Geometry we are studying will make this basic assumption. Mathematical proofs are generally needed for the following three reasons. 1) People sometimes disagree because something obvious to one may not be so obvious to another. 2) Unexpected results can be verified. 3) Some statements resist proof for a long time. It is thus possible that it cannot be proved or just isn't true. Proof arguments can be written in many styles, just like there are many different styles of writing: essay, lists of concepts, etc. The two most common forms of proof are two column and paragraph. Examples of each have occurred already: compare the paragraph proof used in Numbers Lesson 3 to show that the primes form an infinite set with the two column proof used in Numbers Lesson 12. Traditionally (since 1900) Geometry has used the two column proof, with the left column containing the statements and the right column containing the corresponding justifications. It is something you must become familiar with. Proofs can also be direct or indirect as described in Numbers Lesson 5. They can also be based on inductive or deductive reasoning as well. The ancient Greeks established some simple rules for geometric construction which led to some interesting mathematical development over the next several thousand years, as discussed previously in Numbers Lesson 14. Thus the word construct means you are to follow these rules, which restrict your methods considerably. Specifically, you may use only an unmarked straight-edge and a compass. These three rules are as follows. - Point rule: A point must either be given or be the intersection of figures that have already been constructed. These might be called legal points. - Straightedge rule: A straightedge can be used to draw the line through legal points A and B. - Compass rule: A compass can be used to draw an arc centered at a legal point A and containing a second legal point B. Generally this means that a compass can be lifted keeping the same radius. At other times you may be asked to draw a geometric figure. In such cases, you may also use a ruler and a protractor. Finally, there are times you may be asked to sketch a geometric figure. In such cases you are not required to use a straight-edge to ensure your lines are straight or a protractor to ensure your angles are properly measured.
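Before turning to the compass-and-straightedge algorithms below, it may help to see the slope facts above used in coordinates. The short sketch that follows is only an illustration (the endpoints are arbitrary): it finds the perpendicular bisector of a segment from the midpoint and the negative reciprocal slope, then checks that a point on that line is equidistant from both endpoints.

```python
# A coordinate-geometry illustration (not a compass-and-straightedge construction):
# the perpendicular bisector of segment AB passes through the midpoint of AB and
# has the negative reciprocal of AB's slope. Sample endpoints only; assumes AB is
# neither vertical nor horizontal so that both slopes exist.
import math

def perpendicular_bisector(A, B):
    """Return (m, b) so that y = m*x + b is the perpendicular bisector of AB."""
    mid_x, mid_y = (A[0] + B[0]) / 2, (A[1] + B[1]) / 2   # midpoint of AB
    slope_ab = (B[1] - A[1]) / (B[0] - A[0])               # slope of AB
    slope_perp = -1 / slope_ab                             # negative reciprocal
    return slope_perp, mid_y - slope_perp * mid_x

A, B = (1.0, 2.0), (5.0, 4.0)
m, b = perpendicular_bisector(A, B)
print(f"y = {m}x + {b}")                     # y = -2.0x + 9.0

P = (2.0, m * 2.0 + b)                       # any point on the bisector
print(math.dist(P, A), math.dist(P, B))      # equal distances, as expected
```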
The following basic construction algorithms must be mastered. More will be added later. - Perpendicular bisector of a given segment. - Perpendicular bisector of a segment allowing access on one side only. - Regular (equilateral) triangle. - Regular hexagon. - Angle bisector. To construct a segment's perpendicular bisector, set the compass to a radius maybe 50% longer than the segment, place the point of your compass in turn on each end, and draw an arc on each side of the segment such that these arcs intersect. (If in doubt, you can use complete circles. However, a very busy diagram results.) Connect these two points of intersection. These two points are clearly equidistant from both endpoints, as are all points on the line connecting them. If the segment is placed against the edge of the paper, then only one point at a time can be determined by any given radius. You thus must reset the compass radius to find a second point on the same side of the line. To construct an equilateral triangle, draw a line segment and mark off the desired length. Set the compass to that length, then draw an arc from each end of the given segment so that they intersect at a point on one side. (Again, if in doubt, complete circles can be used.) Draw segments connecting each endpoint to the intersection of these arcs. Clearly all three sides are of equal length, the triangle is thus equilateral, and since the triangle is isosceles three different ways, by the base angle congruence theorem (yet to come) it is also equiangular. There are two major ways to construct a regular hexagon. The first method utilizes a circle with radius equal to the desired side length. Pick a point on the circle, then mark the distance one radius away with your compass. As you proceed around the circle you will make exactly six equally spaced divisions. Connect these in order to complete your hexagon. The second method makes use of the fact that a regular hexagon can be triangulated into six equilateral triangles. Construct an equilateral triangle as above. Draw a circle centered on one vertex and proceed as above. To construct an angle bisector, set the compass to almost any arbitrary but convenient distance. Place the point of the compass at the angle's vertex. Mark auxiliary points on each of the angle's rays with the compass at the same distance from the vertex. From each of these two auxiliary points draw another arc in the interior of the angle. (A new distance can be selected for this pair of arcs, separate from the distance used to mark the auxiliary points, but this is optional.) The point where these two arcs intersect lies on the angle bisector. Connect it to the vertex and extend the line to complete the construction. The following link has a maze and a free downloadable applet to draw perfect mazes. Perfect mazes are not only solvable, but well-connected and contain no closed loops. Mazes and labyrinths have a long history, probably evolving from defense works against an enemy. Later they evolved into protection against evil spirits. Labyrinths often contain blind alleys and are technically different from mazes. In Greek mythology, the Minotaur was confined in the famous labyrinth of Crete. Mazes of hedges or maize (corn) are still common in such gardens as the Versailles, Hampton Court in London, or out in Iowa. Perhaps next year we can take a local field trip to Wick's Apple House Maize Maze. A maze having only one entrance and one exit can be solved by placing one hand against either wall and keeping it there as one traverses the maze.
This will usually not be the shortest solution, but the exit can always be reached in this manner. For this to work in a labyrinth, there must be no closed loops. If it has no closed loops it is simply connected; otherwise it is multiply connected. Mazes are also used to test learning behavior, particularly in mice. Below (on the next page) is a maze from your homework, for homework 3.7.
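For students who enjoy programming, the hand-on-the-wall rule just described can also be sketched in code. The grid maze below is made up for illustration ('#' is a wall, '.' is open); the function simply prefers a right turn, then straight, then left, then back, which is the right-hand version of the rule.

```python
# A toy wall-follower ("keep your right hand on the wall") on a made-up grid maze.
MAZE = [
    "#.#######",
    "#.#...#.#",
    "#.#.#.#.#",
    "#...#...#",
    "#.###.###",
    "#.....#.#",
    "#####.#.#",
    "#.......#",
    "#######.#",
]

def solve_right_hand(maze, start, goal):
    rows, cols = len(maze), len(maze[0])
    def is_open(r, c):
        return 0 <= r < rows and 0 <= c < cols and maze[r][c] == "."
    headings = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # up, right, down, left (clockwise)
    (r, c), h = start, 2                            # enter heading "down" from the top edge
    path = [(r, c)]
    for _ in range(10 * rows * cols):               # safety bound on the walk length
        if (r, c) == goal:
            return path
        for turn in (1, 0, -1, 2):                  # try right, straight, left, back
            nh = (h + turn) % 4
            dr, dc = headings[nh]
            if is_open(r + dr, c + dc):
                h, r, c = nh, r + dr, c + dc
                path.append((r, c))
                break
    return None                                     # exit not reached within the bound

route = solve_right_hand(MAZE, start=(0, 1), goal=(8, 7))
print(len(route), "cells visited")                  # not the shortest route, but it reaches the exit
```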
During the Pleistocene Epoch, over 15-thousand years ago, a huge ice sheet covered the ground all the way from Canada down to the Ohio River. On the edges of this ice sheet, great herds of giant mastodons, wooly mammoths and ground sloths were attracted to the warm salt springs that still bubble from the earth at Big Bone Lick State Park. The salty marsh that attracted these prehistoric visitors sometimes proved to be a fatal attraction. Animals became trapped and perished in what the early pioneers called "jelly ground," leaving skeletons and interesting clues about life in prehistoric Kentucky. The fossilized remains of these prehistoric animals were discovered in 1739 and displayed extensively at museums throughout the world. Notable Americans such as Thomas Jefferson and Benjamin Franklin personally examined the fossils, many of which are on display today at Big Bone Lick Museum. The scientific community recognizes the site as the "Birthplace of American Vertebrate Paleontology."
- 8 ½” x 11” piece of cloud paper (or you can use a piece of light blue construction paper and add your own clouds to it) - Brown and white construction paper - Images of furniture such as a bed, chair, dresser, or wardrobe, and objects that represent things you like. Or, images of things that are important to you, such as animals, books, sports equipment, favorite toys, as well as things that remind you of your favorite places or people. You can use photos or realistic images you find in magazines and catalogues. - A pencil or black pen - What objects do you see in this painting? - In your mind’s eye, change the size and scale of the objects in this painting. How does that affect your response to the work? - If you could enter this space, would you be standing inside or outside? What do you see that makes you say that? - Make an argument for why this painting is true to life. Make an argument for why it is not. - What might the objects in this work tell us about the artist or about the time and place in which he lived? - Cut your pieces of brown and white construction paper so that they each measure 2” x 11”. - Place them on top of each other so they are lined up. Cut the corners off both ends at an angle to make an isosceles trapezoid (the bottom edge should still measure 11” long and the diagonal edges should be equal in length). - Glue the brown trapezoid to the bottom of your cloud paper and the white trapezoid to the top. These will become the floor and ceiling of your room. - Using a pencil or a black pen, draw a vertical line to connect the inside corners of your floor and ceiling to create three walls in your room. - Cut out images from magazines, catalogues, or photos that represent things, people, places, and ideas that are important to you. Be sure to include at least one piece of furniture. - Select five or six objects to include in your room. - Experiment with the arrangement of your objects until you are happy with the composition. - Glue your objects in place. René Magritte originally called this painting The Clear Field but he changed the title when a friend suggested Personal Values. The word “values” can refer to the importance, worth, or usefulness of something, and it can also refer to someone’s judgement about what is important in life. Magritte wrote to his art dealer that this painting took him two months to paint. He considered every detail carefully and revised it until it had reached what he called “a state of grace.” - How do the different objects in your work reflect what is important to you? - Is the size and scale of your objects important? If so, why? - What inspired you to arrange the objects the way you did? - What title did you give to your artwork? Why did you select that title?
This week we celebrate Get Smart About Antibiotics Week. Antibiotics, or antimicrobials as they are also called, cure bacterial infections by killing bacteria or reducing their ability to reproduce so your own body’s immune system can overcome an infection. Penicillin was the first antibiotic, and was discovered in 1928 by Alexander Fleming. Since its widespread use, beginning in the 1940s, countless lives have been saved from devastating bacterial infections. Talk about a wonder drug! Improper use of antibiotics can have dangerous consequences. Since then, different types of antibiotics have been developed to combat many different types of infectious bacteria. Classes of antibiotics include penicillins, cephalosporins, macrolides, fluoroquinolones, aminoglycosides, and others. In each of these classes there are lots of different individual medications. (For example, cephalosporins include the drugs cephalexin, ceftriaxone, cefaclor, and others.) Some antibiotics are broad spectrum, which means they work on many different bacteria. Some are more narrow spectrum, used for specific bacteria. Antibiotics only work for bacterial infections … not viral infections. They are ineffective at killing viruses. Viral infections include colds, flu, runny noses, most coughs and bronchitis, and sore throats unless they are caused by strep. Sexually transmitted viruses include human papillomavirus (HPV), herpes simplex virus, and HIV.
On the Mountain, it is cool enough to bake in the oven during the Summer months. This is also a great time to make something together as a family, like pizza. In our family, we have many different preferences when it comes to toppings. Some of us like a plain cheese, some of us like pepperoni, and some of us like to put everything on it to jazz it up. This gives our family an opportunity to talk about the different combinations of pizza we can create, if you have a finite number of ingredients. Let’s say you have cheese, pepperoni, olives, chicken, and tomatoes to choose from for toppings. You can easily make a cheese pizza, or a cheese and pepperoni pizza, or a cheese and pepperoni and olive pizza, or even a cheese and pepperoni and olive and tomato pizza. How many different varieties can you make? Thinking about the different varieties, or combinations, you can make is actually preparing the child to think and do the Mathematics. At this point, you can just ask them to think, estimate, and calculate the number of combinations you can make. They can draw pictures, or write each combination out. For this age, I would suggest talking it out and creating some of those combinations with the kids. What you are giving them is experience in working this out. Be mindful of how many ingredients you work with though. Start simple and then add more ingredients to choose from at another time. Most importantly, have fun! Something to Think About: When making pizza and talking about the number of pizzas you can make, just take about five minutes with your child. You can take more time if you want, but make it enjoyable, not like a quiz. There will be a question on whether or not double toppings count as a different pizza. The question is, by adding double toppings does this create a different tasting pizza? This is a great question to discuss with the kids! Keep in mind that you may have a child who can think this all in her/his head. You may have a child who enjoys drawing out the pictures, or listing the combinations. You may have the child who wants you to buy lots of pizza dough so that she/he can actually create all those pizzas to find out. For this child, keep the number of ingredients to three! We all learn differently and we all must honor the child on her/his learning. Think about sequencing. What do you do first when making a pizza? What do you do next? When do you add the cheese, or other toppings? This is another great way to have the little ones work on their sequencing and critical thinking. Make this experience fun. The end result is to create a great pizza, memories, and learn about Math!
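If you are curious how quickly the possibilities grow, a few lines of code can list them for you. This is just an optional illustration: with cheese always on and four optional toppings, each topping is either on or off, so there are 2 × 2 × 2 × 2 = 16 different pizzas.

```python
# Optional illustration: list every pizza from four optional toppings (cheese always on).
from itertools import combinations

toppings = ["pepperoni", "olives", "chicken", "tomatoes"]

count = 0
for k in range(len(toppings) + 1):            # choose 0, 1, 2, 3, or 4 toppings
    for combo in combinations(toppings, k):
        count += 1
        print("cheese" + ("" if not combo else " + " + " + ".join(combo)))

print(count, "different pizzas")              # 2**4 = 16
```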
Lesson: Two-Way Frequency Tables Mathematics • 8th Grade In this lesson, we will learn how to construct and use two-way tables. Which of the following is not true about two-way tables? Daniel wrote 20 letters and had no mistakes. Mason wrote 28 letters and had no mistakes. If a letter is selected at random, what is the probability that the letter has a mistake and was written by Daniel? The following table represents the data collected from 200 conference attendees of different nationalities: Find the probability that a randomly selected participant is an Arabic-speaking woman.
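One way to see how such questions are answered is to build a small two-way table and read probabilities directly from its cells. The numbers and names below are made up for illustration; they are not the data from the questions above.

```python
# A made-up two-way frequency table: rows are writers, columns are outcomes.
table = {
    "Writer A": {"mistake": 4, "no mistake": 16},
    "Writer B": {"mistake": 7, "no mistake": 21},
}

total = sum(sum(row.values()) for row in table.values())            # 48 letters in all
p_a_and_mistake = table["Writer A"]["mistake"] / total              # joint probability
p_mistake = sum(row["mistake"] for row in table.values()) / total   # marginal probability

print(total)                 # 48
print(p_a_and_mistake)       # 4/48, about 0.083
print(p_mistake)             # 11/48, about 0.229
```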
An Introduction to Archaeological Science Also Known As Archaeometry Archaeology, in itself, encompasses almost everything historic and is a contributing discipline of anthropology that studies the growth of our species. In the past century, the need to combine the more traditional ideas of science with the methodologies of archaeology has never been more omnipresent. There are seven key scientific techniques that make up the application of Archaeological Science. The science techniques involved include, but are not limited to, physical, chemical, biological and geological methods. Radiocarbon dating is the most popular and the first science technique that was used in relation to archaeology. Radiocarbon dating is the dating of organic matter. Archaeology sites yield a tremendous amount of plant and animal matter, and science plays a significant role in their study. The consideration of the physiology of plants and animals based on their microscopy and the biochemistry of the preserved tissues is a key element in the application of Archaeological Science. Materials Science and Artefacts The modern examination of the form and adornment of pottery, jewellery and stone tools includes the study of the composition of the artefacts. A chemical analysis of an artefact can yield the precise source of where the artefact originated. This application explores the actual structure of the archaeological site, preserving the site for future study and for possible museum exhibits. The science of studying the properties of materials located at the site and how they interrelate to their natural surroundings is crucial in the process of Archaeological Science. Stemming mainly from the oil and mineral industry, prospecting methods utilizing Archaeological Science are increasingly in demand. This discipline of science applies a variety of geophysical and geochemical techniques combined with aerial photography. The study of soil, decay, erosion and sediments is the backbone of Geoarchaeology and is a vital part of any archaeological site. The application of geology and soil science can reveal the history of the site. Statistics and Computing The first utilization of techniques from statistics and computing was for the measurement of Egyptian skulls. The title of this scientific technique literally speaks for itself. It is the application of compiling data, statistics and any pertinent information into a database. Excellent resources that I have used include:
The Apollo 15 Learning Hub project was conceived in the Fall of 2016 and launched in late July, in commemoration of the 50th anniversary of the Apollo mission. The Emory Center for Digital Scholarship (ECDS) and the project’s partners developed the learning hub to assemble, preserve, and make available primary source records of Apollo 15 for research, education, and preservation, and as an example of a uniquely human endeavor. The Apollo 15 mission relied on a Lunar Module (LM), known as the “Falcon,” which detached from the Command and Service Module (CSM) and descended to the lunar surface. In a previous blog post, ECDS Visual Information Specialist Arya Basu gave a brief explanation of his work on the Hub. In this follow-up post, Dr. Basu provides more technical insight into the 3D lunar module simulation featured on the website (linked below). In recreating the Apollo 15 LM, Falcon, we had to capture the moment right after the Apollo 15 crew landed on the surface of the moon. To make the simulation as immersive and accurate as possible, we placed LM Cue Cards on the main control panel of the LM interior and executed detailed 3D modeling of the interior of the LM based on detailed anatomical research of the LM. Users can interact with the 3D model as events happened in 1971, with original video footage of the LM’s lunar descent preceding the simulation. One of the technical challenges of rendering a complex 3D environment like the Apollo 15 LM interior comes down to the graphical horsepower. In the early days of computer graphics (~1987), researchers argued that the real world would translate into 80 million polygons per picture in raw computation. Based on this number, if we want to have a photorealistic experience, numerically it would translate to a virtual environment with 800 million polygons per sec with an update rate of ten frames per second. In our current 3D model of the Falcon, the first-person camera renders approximately 850K triangles and roughly 800K vertices at any given moment during the interactive experience. Streaming this much information through the web—with the limited graphical resources available to a typical modern web browser—could stretch any computer thin. In order to preserve photorealism while balancing the real-time needs of our Apollo 15 LM interior, we implemented a few computer graphics optimization techniques in layers. First, we baked all scene lighting information, including shadow information and global illumination, using Shadowmask lighting mode. This technique eliminated our need to re-render all lighting information every update frame. Secondly, we used a level-of-detail (LOD) optimization technique to render out 3D geometry with varying levels of details. This approach helped eliminate the need to render the highest possible resolution of all scene geometry at all times, even when they are out of bounds of the interactive camera. In addition to the recent Apollo 15 Learning Hub website launch, our team has collaborated with the Emory Libraries Exhibitions Team to curate and display an Apollo 15 Learning Hub physical exhibit on the 3rd floor of the Woodruff Library on campus. Our exhibition includes several posters and displays, as well as a hemispherical dome where media is projected overhead. This set-up is a unique experiment into creating an immersive installation in an open space, offering the same immersive experience, at scale, for multiple audiences simultaneously in one location. 
The dome-shaped mediascape showcases an animated panorama of the lunar terrain with Col. David Scott (Apollo 15 Commander) examining a lunar boulder (see figure below). The panorama was collected by James B. Irwin (Apollo 15 Lunar Module Pilot) at Station 2, near St. George (a lunar crater in the Hadley-Apennine region). The large hill to the left of the rover is the summit of Mons Hadley Delta, in the northern portion of the Moon’s Montes Apenninus. In the panorama photo above, we stitched together the shot and added the Earth based on Commander David Scott’s personal description of the Earth’s position in the sky. The photograph of the Earth was taken by Command Module Pilot Alfred Worden. You can see the Sun rising on the left, as the photos were taken during a lunar morning. Since that eventful lunar landing in 1971, the general field of computing has come a long way. In contrast to the Apollo Guidance Computer (AGC) of the day, which had limited RAM (random access memory) and ROM (read-only memory) capacities, modern computers allow us to more easily render the entire lunar terrain in 3D. The screenshot below shows a size comparison between the working memory footprint of the AGC and that of a modern smartphone. The Apollo 15 Learning Hub highlights the Apollo-era technologies and scientific developments that enabled the mission’s success. We hope that the Hub allows users (scholars and enthusiasts alike) to understand more about the unique human endeavor that was the Apollo 15 lunar landing, 50 years later. “Baking” means saving information related to a 3D mesh into a texture file, freezing and recording the result of a computer process to save extra CPU cycles.
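As a rough illustration of the level-of-detail idea mentioned above (and not the project's actual engine code), the sketch below picks a cheaper mesh the farther an object sits from the camera. The distance thresholds and triangle budgets are invented numbers.

```python
# Illustrative level-of-detail (LOD) selection; thresholds and budgets are made up.
LOD_LEVELS = [
    (5.0, 850_000),            # close up: full-resolution geometry
    (15.0, 200_000),
    (50.0, 40_000),
    (float("inf"), 5_000),     # far away: a very coarse stand-in mesh
]

def pick_lod(distance_to_camera):
    """Return the index of the first LOD level whose distance bound covers the object."""
    for level, (max_distance, _) in enumerate(LOD_LEVELS):
        if distance_to_camera <= max_distance:
            return level
    return len(LOD_LEVELS) - 1

for d in (2.0, 12.0, 30.0, 120.0):
    level = pick_lod(d)
    print(f"{d:6.1f} m -> LOD{level} ({LOD_LEVELS[level][1]:,} triangles)")
```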
The abilities to learn, remember, evaluate, and decide are central to who we are and how we live. Damage to or dysfunction of the brain circuitry that supports these functions can be devastating, leading to Alzheimer’s, schizophrenia, PTSD, or many other disorders. Current treatments, which are drug-based or behavioral, have limited efficacy in treating these problems. There is a pressing need for something more effective. One promising approach is to build an interactive device to help the brain learn, remember, evaluate, and decide. One might, for example, construct a system that would identify patterns of brain activity tied to particular experiences and then, when called upon, impose those patterns on the brain. Ted Berger, Sam Deadwyler, Robert Hampson, and colleagues have used this approach (see “Memory Implants”). They are able to identify and then impose, via electrical stimulation, specific patterns of brain activity that improve a rat’s performance in a memory task. They have also shown that in monkeys, stimulation can help the animal perform a task where it must remember a particular item. Their ability to improve performance is impressive. However, there are fundamental limitations to an approach where the desired neural pattern must be known and then imposed. The animals used in their studies were trained to do a single task for weeks or months, and the stimulation was customized to produce the right outcome for that task. This is only feasible for a few well-learned experiences in a predictable and constrained environment. The answer may be in an alternative approach based on enhancing flows of information through the brain. The importance of information flow can be appreciated when we consider how the brain makes and uses memories. During learning, information from the outside world drives brain activity and changes in the connections between neurons. This occurs most prominently in the hippocampus, a brain structure critical for laying down memories for the events of daily life. Thus, during learning, external information must flow to the hippocampus if memories are to be stored. Once information has been stored in the hippocampus, a different flow of information is required to create a long-lasting memory. During periods of rest and sleep, the hippocampus “reactivates” stored memories, driving activity throughout the rest of the brain. Current theories suggest that the hippocampus acts like a teacher, repeatedly sending out what it has learned to the rest of the brain to help engrain memories in more stable and distributed brain networks. This “consolidation” process depends on the flow of internal information from the hippocampus to the rest of the brain. Finally, when a memory is retrieved, a similar pattern of internally driven flow is required. For many memories, the hippocampus is required for memory retrieval, and once again hippocampal activity drives the reinstatement of the memory pattern throughout the brain. This process depends on the same hippocampal reactivation events that contribute to memory consolidation. Different flows of information can be engaged at different intensities as well. Some memories stay with us and guide our choices for a lifetime, while others fade with time. We and others have shown that new and rewarded experiences drive both profound changes in brain activity and strong memory reactivation. Familiar and unrewarded experiences drive smaller changes and weaker reactivation.
Further, we have recently shown that the intensity of memory reactivation in the hippocampus, measured as the number of neurons active together during each reactivation event, can predict whether the next decision an animal makes is going to be right or wrong. Our findings suggest that when the animal reactivates effectively, it does a better job of considering possible future options (based on past experiences) and then makes better choices. These results point to an alternative approach to helping the brain learn, remember and decide more effectively. Instead of imposing a specific pattern for each experience, we could enhance the flow of information to the hippocampus during learning and the intensity of memory reactivation from the hippocampus during memory consolidation and retrieval. We are able to detect signatures of different flows of information associated with learning and remembering. We are also beginning to understand the circuits that control this flow, which include neuromodulatory regions that are often damaged in disease states. Importantly, these modulatory circuits are more localized and easier to manipulate than the distributed populations of neurons in the hippocampus and elsewhere that are activated for each specific experience. Thus, an effective cognitive neuroprosthetic would detect what the brain is trying to do (learn, consolidate or retrieve) and then amplify activity in the relevant control circuits to enhance the essential flows of information. A device that makes the brain’s control circuits work more effectively offers a powerful approach to treating disease and maximizing mental capacity.
The student will learn about managing their income. - Identify the components of a budget. - Evaluate the relationship between budgets and goals. Standard 1.1 The student will describe the importance of earning an income and explain how to manage personal income using a budget. Suggested Grade Level 6th – 12th Grade Most of us work hard for our money. However, few of us can really explain what happens to it each month. When the paycheck arrives, it seems like a lot of money — but by the end of the month, we are digging in our closet to find an old coat to see if there is extra change in the pocket! What happens? The reality is this: it is not how much money we make; it is how much money we spend. If we do not learn how to control our spending, we will never have enough money to be happy or to pay our bills. Wealthy people who are careless with their spending will quickly become poor. On the other hand, moderate to low income people will become wealthy if they practice good money management skills.
Photo by Gautam Pandey/Photoshare Fogarty-funded researchers revealed the dengue virus is more prevalent than previously thought, with nearly half of the world’s population at risk for contracting this potentially lethal infection. The threat to human health from the mosquito-transmitted dengue virus is much worse than previously thought and improved surveillance is an essential part of tackling its spread and impact, according to a new study by an international team of researchers partly funded by Fogarty. Dengue fever, a sometimes disabling or deadly disease, occurs at least three times more frequently than previously estimated, researchers said in their Nature paper, recently published online. They put the infection number at 390 million cases annually, versus the WHO's estimate of 50-100 million. Having a clear handle on the risk is important for policymakers and scientists tackling the disease, which the WHO describes as fast emerging and pandemic-prone. The researchers incorporated more data than previous investigations. They analyzed dengue diagnoses records, constructed a model to map the global distribution of risk and included longitudinal information from dengue cohort studies and adjusted population census data. Although the bulk of dengue infections go unnoticed, hosts still serve as reservoirs and can help spread the disease, so the researchers included all cases of dengue that disrupted daily routine, rather than only treatment-seeking cases. The study yielded a global infection estimate of 96 million in 2010. Asia accounted for 70 percent - with about 34 percent of those in India - and Africa and the Americas each had about 15 percent. "Our approach provides new evidence to help maximize the value and cost-effectiveness of surveillance efforts," the researchers wrote in their report. "Knowledge of the geographical distribution and burden of dengue is essential for understanding its contribution to global morbidity and mortality burdens, in determining how to allocate optimally the limited resources available for dengue control and in evaluating the impact of such activities internationally." Dengue's main vector is the striped Aedes mosquito, particularly Aedes aegypti. The insects are difficult to control - feeding during the daytime, breeding in very small quantities of water and thriving with today's climate disturbance, rising urbanization and growing human populations. "We predict dengue to be ubiquitous throughout the tropics, with local spatial variations in risk influenced strongly by rainfall, temperature and the degree of urbanization," the authors noted in their report. Along with more frequent infection comes a higher threat of serious illness - dengue hemorrhagic fever or dengue shock syndrome. This is because the virus has an unusual impact on the human immune system. Four versions exist and although humans develop lifelong immunity to the initial virus, subsequent infection by another type can disrupt the immune system reaction and make serious disease more likely. Scientists don't understand exactly why. Currently, no therapeutics or vaccines exist and mosquito control measures have failed to rein in dengue's spread. Researchers are stepping up efforts to find interventions, including vaccines, treatments and mosquito control, and in the meantime are determining effective ways to handle any outbreaks. The study was supported by the International Research Consortium on Dengue Risk Assessment, Management and Surveillance, the Wellcome Trust, U.S. 
Department of Homeland Security, Li Ka Shing Foundation and Fogarty.
Introduction to Windows Operators

5 9 43 1 true false. These random numbers and text don't make any sense, do they? No, they don't. That's because they lack operators. Any meaningful expression is a combination of variables and operators. An operator determines how the variables are connected to each other and how they contribute to the end result. 5 + 9 - 43 < 1 ? true : false. Now it makes some sense. So let's dive into the world of operators in Windows.

Classification of Windows Operators

These Windows operators are broadly classified into three types. This classification is based on the number of variables or operands an operator requires. The three types are:
- Unary Operators
- Binary Operators
- Ternary Operators

1. Unary Operators: They require a single operand. E.g. prefix and postfix operators, shorthand operators, the negation operator, etc.
2. Binary Operators: They require two operands to calculate the result. E.g. arithmetic operators, logical operators, etc.
3. Ternary Operators: They require three operands. E.g. the ternary conditional operator.

Types of Windows Operators

The various types of Windows operators, based on their functionality, are:

1. Basic Arithmetic Operators

These operators perform mathematical calculations.

Plus operator (+): Adds or concatenates the two operands.
- Sum of two integers: 1+3 results in 4
- Sum of two floating point numbers: 9.8+0.4 results in 10.2
- Concatenation of two strings: "Hello "+"World" results in "Hello World"

Minus operator (-): Subtracts the second operand from the first. Doesn't work on strings.
- Subtraction of two integers: 5-4 results in 1
- Subtraction of two floating point numbers: 4.1-4.6 results in -0.5

Multiplication operator (*): Multiplies the two operands.
- Multiplication of two integers: 9*5 results in 45
- Multiplication of two floating point numbers: 1.1*2.3 results in 2.53

Division operator (/): Divides the first operand by the second and returns the quotient as the result. In integer division the remainder is discarded. Floating point division, by contrast, keeps dividing until a pre-set number of precision digits is reached.
- Integer division: 45/11 results in 4
- Floating point division: 45.0/11 results in 4.090909...

Modulus operator (%): Divides the first operand by the second and returns the remainder as the result. The quotient is discarded. Doesn't work on floating point numbers.
- Modulus of two integers: 45%11 results in 1

2. Assignment Operator (=)

Assigns the result calculated on the right-hand side of the operator (RHS) to the left-hand variable (LHS). The left side of the operator should always be a variable, never a constant or an expression.
- x = 5 assigns the value 5 to x.
- 5 = x is invalid because the left-hand side is a constant.
- y = x*4 calculates x*4 and assigns it to y. Thus, y now holds the value 20.
- x*4 = y is invalid because the left-hand side is an expression.

3. Comparison Operators

They compare the value of the first operand with that of the second operand and return either true or false. These are less than (<), greater than (>), less than or equal (<=), greater than or equal (>=), equal (==), and not equal (!=).
- 61>45 returns true.
- 3==3 returns true.

A short sketch of these basic operators is shown below.
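The article's own examples are language-neutral; as a concrete illustration, here is a minimal Java sketch (the class and variable names are invented for this example, not taken from the original) showing the arithmetic, assignment, and comparison behaviour described above:

    // Minimal illustrative sketch; class name "BasicOperators" is hypothetical
    public class BasicOperators {
        public static void main(String[] args) {
            // Plus: addition for numbers, concatenation for strings
            System.out.println(1 + 3);                 // 4
            System.out.println(9.8 + 0.4);             // roughly 10.2 (floating point rounding applies)
            System.out.println("Hello " + "World");    // Hello World

            // Integer division discards the remainder; modulus keeps only the remainder
            System.out.println(45 / 11);               // 4
            System.out.println(45 % 11);               // 1
            System.out.println(45.0 / 11);             // 4.090909090909091 (floating point division)

            // Assignment: the left-hand side must be a variable
            int x = 5;
            int y = x * 4;                             // y now holds 20

            // Comparison operators evaluate to a boolean
            System.out.println(61 > 45);               // true
            System.out.println(y == 20);               // true
            System.out.println(x != 5);                // false
        }
    }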
4. Prefix and Postfix Operators

These operators increment or decrement the value of an operand by 1. They work only on integers. If x = 5, then x++ makes x 6, and --x makes x 5 again. Seems simple, right? There is, however, a very significant difference in the functioning of the two operators. Prefix operators change the value of the operand before the expression is evaluated, whereas postfix operators change the value after the expression has been evaluated.
- x = 5
- print(x++) will print 5 and then change the value of x to 6
- print(++x) will increment the value of x from 6 to 7 and then print 7

5. Shorthand Operators

These operators are a combination of two operators. The result is calculated using the existing value of the operand and assigned back to it. They help minimize the lines of code written. The most common shorthand operators are:
- +=: equivalent to addition and assignment.
- -=: equivalent to subtraction and assignment.
- *=: equivalent to multiplication and assignment.
- /=: equivalent to division and assignment.
E.g. x+=5 is equivalent to x=x+5.

6. Logical Operators

Logical operators are mainly used to control the program flow. Usually, they help the compiler decide which path to follow based on the outcome of a decision. They always result in boolean values.

Logical AND (&&): Returns true if the conditions on both the left and right side of the operator are true, otherwise returns false.
- (2>3) && (4<5) returns false. Reason: 2 is not greater than 3.
- Boolean b1 = true, Boolean b2 = true: b1 && b2 returns true.

Logical OR (||): Returns true if at least one of the operands is true, otherwise returns false.
- (2>3) || (4<5) returns true
- Boolean b1 = false, Boolean b2 = false: b1 || b2 returns false.

Logical NOT / Negation (!): Inverts the result of the operand, i.e. true becomes false and false becomes true.
- !(2>3) returns true
- !(2>3) && (4<5) returns true. Reason: !(2>3) results in true.

7. Bitwise Operators

Bitwise operators are a special category of operators as they do not operate in a conventional way. While the other operators work on whole values, bitwise operators work on individual bits. Don't panic. They may sound tough but are easy to understand through examples. Let us assume we have two numbers, 2 and 4. Their respective binary conversions are 0010 and 0100. Since 1 byte contains 8 bits, we write them as 0000 0010 and 0000 0100.
- Bitwise AND (&): 2 & 4 results in 0000 0000, which is simply 0
- Bitwise OR (|): 2 | 4 results in 0000 0110, which is 6
- Bitwise NOT (~): ~2 results in 1111 1101, which is -3 in two's complement representation (the most significant bit is the sign bit)
Note: Bitwise operators are a vast topic in themselves and they play a key role in the communication industry. It is recommended to dive deeper into bitwise operators for a better understanding.

8. Ternary Operator

The ternary operator is a shorthand operator for a logical if-else program flow. It evaluates the expression on the left of the question mark (?) and, based on the result (true/false), performs either the operation on the left or the one on the right of the colon (:).
E.g. (condition) ? (operation if true) : (operation if false)
- (5>9) ? (print true) : (print false) results in false being printed.

9. Operator Precedence

The precedence of operators is as follows (highest to lowest priority):
- Prefix and postfix operators
- Multiplication, division, modulus
- Addition, subtraction
- Bitwise operators
- Logical operators (some logical operators take higher precedence than bitwise operators; learn more in the bitwise operators section.)
- Ternary operator
- Assignment and shorthand operators

This has been a guide to Windows operators. Here we have discussed the different types of Windows operators with examples. You can also go through our other suggested articles to learn more.
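As a final illustration for the guide above, here is a second minimal Java sketch (again purely illustrative; the class name "OperatorDemo" is invented) covering the prefix/postfix, shorthand, logical, bitwise, and ternary operators:

    // Illustrative sketch only; the printed values follow the rules described above
    public class OperatorDemo {
        public static void main(String[] args) {
            int x = 5;
            System.out.println(x++);                    // prints 5, then x becomes 6
            System.out.println(++x);                    // x becomes 7 first, then prints 7

            x += 5;                                     // shorthand for x = x + 5; x is now 12

            // Logical operators always produce a boolean result
            System.out.println((2 > 3) && (4 < 5));     // false
            System.out.println((2 > 3) || (4 < 5));     // true
            System.out.println(!(2 > 3));               // true

            // Bitwise operators work on individual bits (2 = 0000 0010, 4 = 0000 0100)
            System.out.println(2 & 4);                  // 0
            System.out.println(2 | 4);                  // 6
            System.out.println(~2);                     // -3 (all bits inverted, two's complement)

            // Ternary operator: condition ? value-if-true : value-if-false
            System.out.println((5 > 9) ? "true" : "false"); // false
        }
    }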
Language is regarded, at least in most intellectual traditions, as the quintessential human attribute, at once evidence and source of most that is considered transcendent in us, distinguishing ours from the merely mechanical nature of the beast. Even if language did not have the sacrosanct status it does in our conception of human nature, however, the question of its presence in other species would still provoke argument, for we lack any universally accepted, defining features of language, ones that would allow us to identify it unequivocally as our own, and contention over the crucial attributes of language is responsible for the stridency of the debate over whether nonhuman animals can learn language. Aping Language is a critical assessment of each of the recent experiments designed to impart a language, either natural or invented, to an ape. The performance of the animals in these experiments is compared with the course of semantic and syntactic development in children, both speaking and signing. The book goes on to examine what is known about the neurological, cognitive, and specifically linguistic attributes of our species that subserve language, and it discusses how they might have come into existence. Finally, the communication of nonhuman primates in nature is assayed to consider whether or not it was reasonable to assume, as the experimenters in these projects did, that apes possess an ability to acquire language.

Contents include: the history of the ape-language projects; the artificial-language projects (the Lana project, the Sarah project); apes and language ontogeny; and apes and language phylogeny.
The cold is a common infection of the upper respiratory tract. Although many people think you can catch a cold by not dressing warmly enough in the winter and being exposed to chilly weather, it's a myth. The real culprit is one of more than 200 viruses.

The common cold is spread when you inhale virus particles from an infected person's sneeze, cough, or speech, or from loose particles released when they wipe their nose. You can also pick up the virus by touching a contaminated surface that an infected individual has touched. Common areas include doorknobs, telephones, children's toys, and towels. Rhinoviruses (which cause the most colds) can live for up to three hours on hard surfaces and hands.

Most cold viruses can be classified into one of several groups. These groups include:
- human rhinoviruses
- parainfluenza viruses
Some other common cold culprits have been singled out, such as the respiratory syncytial virus. Still others have yet to be identified by modern science.

In the United States, colds are more common in the fall and winter. This is mostly due to factors such as the start of the school year and the tendency for people to remain indoors. Inside, air tends to be drier. Dry air dries up the nasal passages, which can lead to infection. Humidity levels also tend to be lower in colder weather. Cold viruses are better able to survive in low humidity conditions.

The rhinovirus group, of which there are more than 100 types, is by far the most common identified cause of colds. The viruses grow best at the temperature inside the human nose. Human rhinoviruses (HRVs) are highly contagious. However, they rarely lead to serious health consequences. Recent research has found that HRVs manipulate genes, and it is this manipulation that brings about an overblown immune response. The response causes some of the most troublesome cold symptoms. This information could lead scientists to important breakthroughs in the treatment of the common cold.

There are many varieties of coronavirus that affect animals, and up to six can affect humans. This type of virus typically causes mild to moderate upper respiratory illness, although certain strains can cause more severe disease, such as SARS (severe acute respiratory syndrome).

Other viruses that may cause a cold include:
- human parainfluenza virus (HPIV)
- respiratory syncytial virus (RSV)
These groups of viruses typically lead to mild infections in adults, but may cause severe lower respiratory tract infections in children, older adults, and those with weakened immune systems. Premature babies, children with asthma, and those with lung or heart conditions are at greater risk for developing complications such as bronchitis and pneumonia. One strain of HPIV called HPIV-1 causes croup in children. Croup is characterized by the loud, startling sound that is produced when the infected individual coughs. Crowded living conditions and stress increase the risk of respiratory disease. For instance, research has found that military recruits are at greater risk for contracting adenoviruses that develop into respiratory illnesses.

The common cold will usually run its course without complication. In some instances it may spread to your chest, sinuses, or ears. The infection can then lead to other conditions such as:

Ear infection: The main symptoms are earaches or a yellow or green discharge from the nose. This is more common in children.

Sinusitis: It occurs when a cold does not go away and lingers for a long period of time. Symptoms include inflamed and infected sinuses.

Asthma: Breathing difficulty and/or wheezing that can be triggered by a simple cold.
Chest infection: Infections can lead to pneumonia and bronchitis. Symptoms include lingering cough, shortness of breath, and coughing up mucus.

Strep throat: Strep is an infection of the throat. Symptoms include a severe sore throat and sometimes a cough.

For colds that do not go away, seeing a doctor is necessary. It's important to seek medical attention if you have a fever higher than 101.3°F, a returning fever, trouble breathing, persistent sore throat, sinus pain, or headaches. Children should be taken to the doctor for fevers of 100.4°F or higher, if they have cold symptoms for more than three weeks, or if any of their symptoms become severe.

There is no set cure for the cold, but combining remedies may alleviate symptoms. Over-the-counter cold medicines usually combine painkillers with decongestants. Some are available individually. These include:
- Pain relievers such as aspirin and ibuprofen, which are good for headaches, joint pain, and fever reduction.
- Decongestant nasal sprays such as Afrin, Sinex, and Nasacort, which can help clear the nasal cavity.
- Cough syrups, which help with persistent coughs and sore throats. Some examples are Robitussin, Mucinex, and Dimetapp.

Alternative medicine is not proven to be as effective at treating colds as the above methods, although some people do find relief in trying it. Zinc is used most effectively if taken within 24 hours of the first symptoms appearing. Vitamin C, or foods rich in it (like citrus fruits), is said to boost the immune system. And echinacea is often thought to provide the same immune system boost.

During a cold, it is suggested that you get extra rest and try to eat a low-fat, high-fiber diet. You should also drink a lot of liquids. Other tips for home care:
- The warmth and liquid of chicken soup can help soothe symptoms and congestion.
- Gargling with salt water may relieve a sore throat.
- Cough drops or menthol candies can help with sore throats and cough. The candies provide a coating over the throat that soothes inflammation.
- Controlling your home's temperature and humidity can prevent the growth of bacteria.
Flame photometry, also referred to as 'flame atomic emission spectrometry', is a quick, economical and simple way of detecting traces of metal ions, primarily sodium, potassium, lithium, calcium, and barium, in a concentrated solution. The process is an extension of the principles used in a flame test, with the main differences being greater precision in the results and the use of more advanced technology. This report focuses on the theory, applications, limitations and analysis of flame photometry.

2.0 EXPLANATION OF THEORY

2.1 A flame photometer is an instrument used for measuring the spectral intensity of lines produced by metals present in ionic compounds. "Measuring the flame emission of solutions containing the metal salts is done to perform a quantitative analysis of these substances" (Internet 1).

2.2 The electrons of an atom exist in different energy levels. When energy is added to an atom in the form of light, heat or electrical energy, the atom becomes excited and electrons begin to 'jump' to higher energy levels (Figure 1). There are two states in which an atom can exist: an excited state and a ground (natural) state. The ground state of an atom is when the electrons are in their lowest energy levels. The use of a flame photometer "relies on the principle that a metal salt drawn into a non-luminous flame will ionise, absorb energy from the flame and then emit light of a characteristic wavelength as the excited atoms decay to the unexcited ground state" (Internet 10).

2.3 Flame photometers work by vaporizing metallic salts in a very hot flame: when a solution of a salt, such as a sodium salt, is sprayed into the flame, the elements in the compound are partially converted into their atomic state. "Due to the heat energy of the flame a very small proportion of these atoms...
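To make the idea of a 'characteristic wavelength' in section 2.2 concrete, here is a rough worked example (my own illustration, not part of the original report), using the familiar yellow sodium emission line at about 589 nm. The energy of each emitted photon is:

E = hc/λ ≈ (6.63 × 10⁻³⁴ J·s × 3.00 × 10⁸ m/s) / (589 × 10⁻⁹ m) ≈ 3.4 × 10⁻¹⁹ J ≈ 2.1 eV

This is the energy gap the excited sodium electron crosses as it falls back to the ground state; because the gap is fixed for each element, the emitted wavelength, and hence the flame colour, identifies the metal.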
The Crown in Canada A Constitutional Monarchy Canada is a monarchy; its head of state is the Sovereign. Under Canada's Constitution, the country is governed by democratically elected federal, provincial, and territorial governments, who carry out their duties under the authority of the Crown. The roots of this system of government run through Canada's written history. They began with the establishment of the Crown in Canada with the first permanent French settlements in northeastern North America in the early 17th century. Known as New France, the colony and its inhabitants existed under the sovereignty of the French kings through the rule of governors. In 1763, many of France's North American possessions were ceded to the British king, George III, in the Treaty of Paris. New France then became part of British North America. Like the French Crown had been, the British Crown was central to the governance of these colonies. It differed, however, from the absolute monarchy of the French ancien régime. It had evolved into a system of parliamentary supremacy and responsible government, a model that would eventually guide Canada's development. In the early years of British rule, however, the colonies were entrusted to governors who wielded considerable power and authority. The model of responsible government was established in British North America in the middle of the 19th century. Nova Scotia was the first to adopt it, in 1848, followed soon after by other colonies in what would become Canada at Confederation in 1867. The framework of responsible government in a federal system was the achievement of the Fathers of Confederation in their negotiation of a united Dominion of Canada within the British Empire in the 1860s. Remaining central to this government, however, was the Crown, and the new country stayed true to its history as a constitutional monarchy. Indeed it was Queen Victoria who, in the 30th year of her reign, assented to the British North America Act (now known as the Constitution Act, 1867). The Crown served to reinforce Canada's identity as the country continued to evolve over some tumultuous decades. After coming of age on the battlefields of France and Flanders in the First World War, Canada saw its full legal autonomy formalized in the Statute of Westminster in 1931. Thereafter, Canada emerged as a distinct entity within the Commonwealth with a unique relationship to the Crown. A Queen of Canada Upon the death in 1952 of King George VI, his daughter Elizabeth succeeded to the throne. She was proclaimed Queen of Canada, the first monarch recognized by this specific title. It was another step in asserting Canada's autonomy under its enduring framework of constitutional monarchy. Her Majesty's relationship to Canada began with her first Royal Tour of the country in 1951 as Princess Elizabeth. On that tour, as well as the 22 that have followed, the Queen has established ever-deeper ties with the Canadian people. She has visited every province and territory and has often been heard to refer to Canada as "home." Her Majesty has taken part in Canada's growth as a nation. In 1957 (and again in 1977), she opened Parliament by personally delivering the Speech from the Throne, the only sovereign of Canada to do so. She inaugurated the new St. Lawrence Seaway in 1959 with President Dwight D. Eisenhower and visited him in the United States, making the first foreign visit ever made by a Queen of Canada. 
In 1967 she celebrated the country's centennial on Parliament Hill and attended Expo '67 in Montreal. Accompanied by her family, she opened the Montreal Olympic Games in 1976. Among other tours, she returned in 1977 and in 2002 to celebrate her Silver and Golden Jubilees. She visited the newly created territory of Nunavut in 2002, and she celebrated the centennial of Alberta and Saskatchewan with their people in 2005. Events during her recent tours such as a visit to the First Nations University of Canada in 2005, the laying of the cornerstone of the Canadian Museum for Human Rights in 2010, and the celebration of the Centennial of the Royal Canadian Navy, also in 2010, show Her Majesty's involvement in issues important to Canadians. In 1982, the Queen took part in the most significant event in Canada's constitutional history since her great-great-grandmother assented to the British North America Act. On April 17, Queen Elizabeth signed the Proclamation of the Constitution Act, 1982, completing a process that brought Canada's Constitution, formerly a statute of the British Parliament, under complete Canadian control. The Constitution could now be amended in Canada without reference to the British Parliament. The stroke of the Queen's pen also gave Canada the Canadian Charter of Rights and Freedoms. (It is worth noting that the table upon which the Proclamation was signed is kept in the offices of the Speaker of the Senate as a treasured element of Canada's history.) Preceded by the Canadian Bill of Rights of 1960, an early attempt to codify human rights in federal law, the Charter of Rights and Freedoms is an important statement of Canada's values and the guiding principle behind all Acts of Parliament. Interestingly, this constitutional milestone was marked in the 30th year of Queen Elizabeth's reign, the same year of Queen Victoria's reign in which she assented to the country's creation. The members of the Royal Family also play an active role in the life of Canadians as representatives of the Crown. They periodically tour Canada, connecting with Canadians and their current affairs. Many members of the Queen's family serve as Colonels-in-Chief of regiments of the Canadian Forces. They are also very much involved in promoting the values of duty and service that are an intrinsic part of the Canadian identity. The Queen, for example, is the Sovereign of the Order of Canada, part of her awards system that recognizes the good achieved by Canadians. Members of the Royal Family serve as patrons of other awards, such as the Duke of Edinburgh's Award Program for young people, and of charity work, such as that carried out by The Prince's Charities, a network of charities of which the Prince of Wales is patron or president. In these ways, the Crown and its representatives are truly woven into the fabric of Canadian society.
To query the records of a database, you can use Boolean algebra combined with some operators. Boolean algebra works on logical statements. A statement is a sentence that acknowledges a fact or a possibility. That fact is eventually evaluated as being true or false. There are three main types of logical statements. These are the types of evaluations you make when analyzing the records of your database.

In Lesson 2, we saw how to create a table and how to populate it with a few records. In Lesson 3, we saw how to present the data of a table to a user through a form. In that lesson, we presented all the records of a table to a user. A query is a technique for presenting either all the data or only selected records to the user. Data used in a query can originate from a table, another query, or a combination of tables and/or queries.

The universal, and most popular, language used to query a database is the Structured Query Language, abbreviated SQL. Like most other database environments, Microsoft Access supports SQL. Like every computer language, SQL comes with its own syntax, vocabulary, and rules. SQL is equipped with keywords that tell it what to do and how to do it. The most fundamental word used in SQL is SELECT. As its name indicates, when using SELECT, you must specify what to select.

There are various ways to create a query in Microsoft Access. The Query Wizard offers the simplest approach to creating a query, in which you specify, step by step, the data that the query will make available. The wizard presents the tables that are part of the database and you select which fields you need. Such a query is called a Select Query. To use the Query Wizard, on the Ribbon, you can click the Create tab and, in the Queries section, click Query Wizard. This would display the New Query dialog box: On the New Query dialog box, you can click Simple Query Wizard and click OK. The first page of the Simple Query Wizard expects you to choose the origin of the query as a table or an already created query.

When creating a query, in reality you create a SQL expression, but Microsoft Access takes care of creating the SQL statement behind the scenes for you. As mentioned already, when creating a query, you must select a table. In SQL, this is equivalent to the following formula: SELECT What FROM WhatObject; The FROM keyword is required. The WhatObject of our formula is the name of the table or query you would select from the wizard. An example would be: SELECT What FROM Employees; SQL is not case-sensitive. This means that SELECT, Select, and select represent the same word. To differentiate SQL keywords from "normal" language or from the database objects, it is a good idea to write SQL keywords in uppercase. A SQL statement must end with a semi-colon. The What factor of our formula represents the field(s) you select from a table or query.

Query design consists of selecting the fields that would be part of a query. We previously learned that fields could be added to a query by using the Query Wizard. Fields can also be added by designing a query. To proceed with this approach, the query should be displayed in Design View. You can also write a SQL statement to select the fields for a query. When you start a query in Design View, Microsoft Access displays the Show Table dialog box, which allows you to specify the table or query that holds the fields you want to use in the intended query.
When a query is displaying in Design View, the Design tab of the Ribbon displays the buttons used for a query: When a query is displaying in Design View, to access its code: When starting a new query, you must specify where data would come from. If you are manually writing your SQL statement, from our previously seen syntax, you must replace the WhatObject factor by the name of a table or query. If you are visually creating the query, the Design View displays a list of already existing tables in the Tables tab, and a list of already created queries in the Queries property page: A simple query can have its data originate from a single table. In the Show Table dialog box, to choose the table that holds the information needed for this query, you can click that table and click Add. You can also double-click it. After selecting a table, some tables, a query, or some queries, you can click the Close button of the Show Table dialog box. If the Show Tables dialog box is closed or for any reason you want to display it: The Query window is presented like a regular window. If the database is set to show overlapped windows, its title bar displays its system button on the left section. This can be used to minimize, maximize, restore, move, resize, or close the window. Like all Microsoft Access window objects, the title bar displays a special menu when right-clicked: The right section of the title bar displays the classic system buttons of a regular window. To create the fields for a query, you use the table(s) or query(queries) displayed in the upper section of the window. Once you have decided on the originating object(s), you can select which fields are relevant for your query: To make a field participate in a query, you have various options: In the SQL, to add one column to a statement, replace the What factor of our formula with the name of the column. An example would be: SELECT FirstName FROM Employees; If you want to include more than one field from the same table, separate them with a comma. For example, to select the first and last names of a table named Employees, you would write the statement as follows: SELECT FirstName, LastName FROM Employees; To include everything from the originating table or query, use the asterisk * as the What factor of our formula. Here is a statement that results in including all fields from the Employees table: SELECT * FROM Employees; The name of a field can be delimited by square brackets to reduce confusion in case the name is made of more than one word. The square brackets provide a safeguard even if the name is in one word. Based on this, to create a statement that includes the first and last names of a table named Employees, you can write it as follows: SELECT [FirstName], [LastName] FROM [Employees]; To identify a field as belonging to a specific table or query, you can associate its name to the parent object. This association is referred to as qualification. To qualify a field, type the name of the object that is holding the field, then add a period followed by the name of the field. The basic syntax of a SELECT statement would be: SELECT WhatObject.WhatField FROM WhatObject; Imagine you want to get a list of people by their last names from data stored in the Employees table. Using this syntax, you can write the statement as follows: SELECT Employees.LastName FROM Employees; SELECT [Employees].[LastName] FROM [Employees]; In the same way, if you want to include many fields from the same table, qualify each and separate them with a comma. 
To list the first and last names of the records from the Employees table, you can use the following statement: SELECT Employees.FirstName, Employees.LastName FROM Employees; SELECT [Employees].[FirstName], [Employees].[LastName] FROM [Employees]; If you want to include everything from a table or another query, you can qualify the * field as you would any other field. Here is an example: SELECT Employees.* FROM Employees; SELECT [Employees].* FROM [Employees]; You can also use a combination of fields that use square brackets and those that do not: SELECT FirstName, [LastName] FROM Employees; The most important rule is that any column whose name is in more than one word must be included in square brackets. You can also use a combination of fields that are qualified and those that are not SELECT [Employees].[FirstName], LastName FROM [Employees]; In the Navigation Pane, a query is represented by an icon and a name. Executing a query consists of viewing its results but the action or outcome may depend on the type of query. To view the result of a query: If you manually write a SQL statement and want to execute it, change the view to Datasheet View. Sometimes, the idea of using a query is to test data or verify a condition. Therefore, a query can provide just a temporary means of studying information on your database. When performing the assignment or when testing values before isolating an appropriate list, you can add, insert, delete, replace or move fields at will. We have already covered different techniques of adding or inserting fields, from the Query Wizard or from a list of fields in the top section of the query window. Some other operations require that you select a column from the bottom section of the query window: Since selecting a column in the Query window is a visual operation, there is no equivalent in SQL. As seen above, a query is built by selecting columns from the originating list and adding them. If you do not need a column anymore on a query, which happens regularly during data analysis, you can either delete it or replace it with another column: To remove a column from a SQL statement, simply delete it. An example would be: SELECT EmployeeName, DateHired, Title FROM Employees; SELECT EmployeeName, Title FROM Employees; To replace a column, click the arrow on the combo box that displays its name and select a different field from the list: To replace a column from a SQL statement, simply change its name to the name of another existing column of the same table or query. An example would be: SELECT EmployeeName, DateHired, Title, Salary FROM Employees; SELECT EmployeeName, DateHired, EmailAddress, Salary FROM Employees; Columns on a query are positioned incrementally as they are added to it. If you do not like the arrangement, you can move them and apply any sequence of your choice. Before moving a column or a group of columns, you must first select it. Then: Since moving a column in the query window is a visual operation, there is no equivalent in SQL. Otherwise, in the SQL statement, you can either edit the statement or delete the field in one section to put it in another section. An example would be: SELECT EmployeeName, DateHired, EmployeeNumber, Salary FROM Employees; SELECT EmployeeNumber, EmployeeName, DateHired, Salary FROM Employees; A query uses the same approach of a table to present its data: it is made of columns and rows whose intersections are cells. 
Although the main purpose of a query is to either prepare data for analysis or to isolate some fields to make them available to other database objects, as done on a table, data can be entered in a query. Data entry on a query is done the same as on a table: data is entered into cells. The Enter, Tab and arrow keys are used with the same functionality. Like a table, a query provides navigation buttons on its lower section, allowing you to move to the first, the previous, the next, the last or any record in the range of those available. Like tables, queries provide you with a fast means of printing data. Once again, this should be done when you need a printed sheet but not a professionally printed document. Data printing on a query is done with the exact same approaches and techniques as for a table. When creating a form or a report that would be used to present data to the user, we selected a table as the source of data. In the same way, if you have created a query that holds some records, you can use it as the base of data for a form or report. If you delete the form or report, the query would still exist because it is a separate object, but it would lose its data holder. Instead of first formally creating a query before using it for a form or report, you can select the data from a table or a query and use it on the form or report. Behind the scenes, Microsoft Access would create the SQL expression that is directly applied to the form or report. Such a query is not saved as an object. This technique is used when you do not need a formal query as the base of data for a form or report. There are various techniques you can use to create a query specifically made for a form or a report: After creating the form or report, if you delete it (the form or the report), the expression would be lost also.
Wairarapa Moana literally means “sea of glistening water” and was among the first areas settled in New Zealand with sites dating back some 800 years. Fish and waterfowl were plentiful, but the major draw card was tuna – the native freshwater eel. Tuna could be caught in vast quantities during their seasonal migration to the sea, and the catch could be dried for storage or trading. Seasonal eeling settlements dotted the edge of Wairarapa Moana with several permanent settlements on the surrounding higher ground. In the 1840s sheep farmers started arriving in Wairarapa and began leasing land from Maori landowners. Leasing was soon made illegal by the Crown, which was only interested in purchasing land from Maori and selling it to settlers for a profit. Land was sold, but Maori retained the flood-prone areas crucial for eel fishing, and the lakes themselves. When the outlet to the sea was blocked, the lakes and wetlands filled up. Between February and April this process was called the hinurangi, which was important for tuna preparing to migrate over two thousand kilometres into the South Pacific to breed. There were several decades of disagreement between Maori fishers and Pakeha farmers over opening the mouth of Lake Onoke. One wanted high water for fishing and the other dry pasture for farming. This was resolved in 1896 when the title of the lakes moved into Crown ownership. The transaction is now subject to a Treaty of Waitangi claim. Farming prospered on the fertile land around the lakes, although seasonal flooding still hampered production. This problem was tackled throughout the 20th century with drainage and stop banks, although large floods could still wreak havoc. Several generations of government engineers had pondered the flooding problem. In the 1960s a project got underway to divert the Ruamahanga from flowing into Lake Wairarapa and connect it directly with Lake Onoke, enabling flood waters to escape quickly. This was finished in the 1970s allowing 40,000 hectares to be farmed more intensively. Since then many sheep and beef farms around Wairarapa Moana have been converted into more profitable dairy farms. These farms are the economic powerhouse for South Wairarapa District. Wairarapa Moana today remains a richly diverse and wild place as well as being severely compromised by many threats to its ecology and water quality. The third largest lake in the North Island, it is home to more than a hundred native and exotic bird species, rare plants and native fish species and is still revered by Maori as a source of wellbeing for the region.
Earlier we learned that the atom is composed of a very small, positively charged core of protons and neutrons surrounded by a much larger "cloud" of orbiting electrons. While chemistry doesn't usually involve changes in the number of protons or neutrons in an atom, changes in the number of electrons in an atom are central to the science of chemistry. If electrons are removed from or added to a neutral atom, a charged particle called an ion is formed. There are two types of ions:
- Cation - a positively charged ion
- Anion - a negatively charged ion

For example, a neutral sodium atom has a nuclear charge of +11 and contains 11 electrons. If we strip off one electron we form a sodium cation: Na → Na+ + e-. This process can also be represented in short-hand notation. A neutral chlorine atom has a nuclear charge of +17 and contains 17 electrons. If we add one electron we form a chlorine anion: Cl + e- → Cl-. Another example: Zn likes to lose 2 electrons to make a divalent cation: Zn → Zn2+ + 2e-.

You can use the periodic table to predict how many electrons an element will lose or gain when it becomes an ion. For example, among the most stable ionic charges on monoatomic ions, aluminum generally likes to form the cation Al3+ and zinc likes to form the cation Zn2+. Here is a "rough rule" you can use to figure out how many electrons an element will gain or lose: elements tend to gain or lose electrons to achieve the same number of electrons as the nearest noble gas. For groups in the middle of the periodic table, it is not as simple. After you learn about quantum mechanics, however, you will have a better idea of how to predict the stable ion charges for these groups.

A final note: you will often hear chemists use the term proton interchangeably with hydrogen cation, H+. Can you explain why?
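As a quick worked check of the "rough rule" above (my own example, not from the original text): calcium (atomic number 20) sits two places past the noble gas argon (18), so it tends to lose two electrons, Ca → Ca2+ + 2e-, leaving it with argon's 18 electrons; sulfur (16) sits two places before argon, so it tends to gain two electrons, S + 2e- → S2-, also ending up with 18 electrons. Both predictions match the charges these elements actually show in ionic compounds such as CaS.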
88.06.01 Mankind’s Fascination with Flight This unit covers the science of aerodynamics. Although the in depth study of aerodynamics may not apply to the middle or lower grades, this unit offers a wealth of background information for the classroom teacher covering air travel and flight. Grades 2 through 5 will find this unit helpful. 88.06.02 Paper Airplanes The science of paper airplanes is the main topic of this unit. This unit is very adaptable to Kindergarten through grade 5 lessons on flight and air travel. Lesson plans and bibliographical resources are complete and informative. 88.06.03 Come Fly with Me---An Invitation to Flight: Its History, Science, Careers and Safety Careers in aviation and the air safety industry are highlighted in this unit. Air travel safety issues are strongly approached. Teachers interested in the study of aviation are encouraged to use this unit with grades K through 5. 88.06.04 The Continuity Equation, the Reynolds Number, the Froude Number Information on aerodynamics and air travel is the main thrust of this unit. This unit provides background information for teachers interested in the study of aerodynamics. The lessons may be too difficult for younger students to complete independently, however the unit is recommended for use by grades 3, 4, and 5. 88.06.08 Highways in the Sky: Flight Control Aviation and its relationship to science is the main focus of this unit. Aviation careers are highlighted and defined for strong background information. Recommended for teachers interested in expanding their knowledge of flight and air travel. This unit is appropriate for grades 2 through 5. 90.07.02 Problem Solving through Aviation Parts of this unit could be adaptable for upper elementary students, grades 4-5. For example, the unit contains a detailed description of various types of airplanes and their components. Hands-on experiments are also included in the lesson plans. 90.07.04 What Makes Airplanes fly…Why me This unit written for upper elementary students includes problem solving for mathematics, journal entries for writing and hands-on activities for science. The main emphasis is to introduce elementary students to the wonders of flight. Lesson plans include many hands-on activities and work sheets. 90.07.07 Up, Up, and Away This interesting unit written for middle and upper elementary students, 3-5, is an introduction to what makes airplanes fly. A discussion of flight is preceded by lessons on gravity, pressure and gases. Reproducible activity sheets are included. 90.07.09 The Science of Flight in Relationship to Birds and Gliders This unit can be adapted for upper elementary students. The first section of the unit takes an in-depth look at birds – how their feathers are formed and the role they play in flight in relationship to the wings. The second section deals with man’s first form of flight – the glider. 80.05.06 Observing City Animals Involves students in the study and observation of city animals: sparrow, starling, pigeon, and gray squirrel. Contains interesting information. Interdisciplinary approach. Adaptable for grades K-5. 95.05.06 Lions and Tigers and Bears...Oh My! A variety of investigations allow young children to understand and become more familiar with the animals of Connecticut and the world. Interdisciplinary approach. Suitable for grades K through 4. 95.05.08 The Animal Kingdom A hands-on approach in this unit helps students better understand the animal kingdom. It is suitable for grade 5 and possibly grade 4. 
96.06.03 Asteroids, comets, and Meteorites: Their Intimate Relation with Life on Earth Though designed for grades 9-12, the unit's author believes unit material is adaptable for grades K-12. 96.06.05 Astro-Cosmos The Last Frontier Using an integrated approach with hands-on activities, this unit attempts to guide students toward an understanding of the Universe in general and the Solar System in particular. It is suitable for grades 3-5. 96.06.06 Scaling Down the Universe Through the creation of scale models, students develop an understanding of the Universe's size. Could be used in grades 4-5. 96.06.07 Other Worlds Other Life: Our Solar System and Beyond Moving from the Earth outward, students research information to increase their understanding of the Solar System. A final project and the creation of an "alien" are encouraged. This unit is suitable for grades 2-5. 96.06.08 Our Planet...Our Solar System Designed for grades K-2, This unit attempts to develop an elementary understanding of the solar system in young children. Definitely suitable for K-2, but could be used in grade 3, also. 96.06.09 Time, Distance, and Modern Technology in the Measurement of the Heavens Emphasizing a team approach involving parents and staff, this unit integrates math, social studies, language, and science to develop a clearer understanding of the Solar System. Hands-on activities. Appropriate for grades 3-5 and higher. 96.06.10 The Solar System and Space Technology Though designed for older students, the unit's author believes material is adaptable to elementary grades. There is a strong emphasis on math. 96.06.12 Space: That Vast Frontier Although this unit was designed for middle and high school students, it contains considerable material that could be adapted to upper elementary grades. 98.06.03 The Sun Written primarily for second grade children, the unit can easily be adapted to include grades 1-5. The unit deals with the sun and its effects on our everyday lives. Background information is included on the sun such as distance from earth, temperature, and size. Students engage in activities and experiments to prove the sun’s importance. 98.06.05 Fly Me to the Moon This unit targets grades 3 and 4. It can easily be adapted to include grade 5. The curriculum unit is a study on the moon and intended to be used in conjunction with a study on the solar system. The unit contains a variety of student activities such as developing research skills, reading, writing, observations, and hands-on experiments. 98.06.06 Exploring the Moon – A Curriculum Adapted for Use With the Visually-Impaired This unit written for grades 4-6 uses a multi-sensory approach to help students understand earth’s only natural satellite. The data information provided will teach students that understanding the moon helps us to understand the earth. Lesson plans provide students with activities that will actively involve them in studying our closest neighbor in the solar system. 98.06.08 Beyond Planet Earth This unit was written for high school special needs children. The unit contains a lot of background information on the solar system that could be beneficial to any elementary grade teacher. The unit examines the solar system, which includes the sun, planets, moons, stars and earth. In addition, the unit examines the properties of the sun and the moon; the differences between day and night; and the lunar phases. 80.05.02 The Circulatory System Though aimed at seventh graders, this unit could be adapted to students in grades 4-5. 
Provides informative, interesting activities on the circulatory system. 80.05.09 A Family Life Science Unit for Early Adolescents: Ages Eleven Through Thirteen This unit presents an overview of early adolescence. Contains a useful bibliography of books and films. Designed for integration with science or social studies curriculum. Some material from this unit is adaptable to grade 5 students in the areas of science and/or social development 81.04.10 Perception and Sense Organs – A Writing Unit for Biology Designed to motivate students to study their own sense organs and perceptions. Capitalizing on this interest, daily writing assignments have been developed. Included is a short research report on animal senses. Parts could be adapted for elementary grade level students, 1-5. 82.07.01 The Cell This unit provides the necessary background information for a discussion of the chemistry of cells and the basics of genetics. Adaptable to grades 4 and 5, this unit concentrates on human biology. 82.07.02 Cell Structure and DNA Discussing the structure of cells and basic genetics as they pertain to human biology and DNA, this unit is a good resource for the fourth and fifth grade science teacher. Although some of the information and lesson plans may not be appropriate to the middle-grades, this unit offers simple investigations that can be completed by the teacher and the students in grades 4 and 5. 82.07.03 Genetics Biology, genetics and the scientific history of humans make this unit a valuable resource to the grade 4 and 5 science teacher. The details of human genetics can be utilized in middle-grade science lessons. Simple experiments can be extracted from those listed in the unit. 82.07.04 Genes…The Nature of Human Development A discussion of human genetics and biology lend this unit as a resource to the science teacher. Sufficient background information and adaptable activities and lessons offer the teachers of grades 4 and 5 a starting point in their research for genetics lessons. 85.07.02 The Oyster The biology of the oyster is covered in this unit. Information pertinent to teachers completing a scientific study of the oyster is available. This is a good resource for teachers covering the oyster industry in New Haven, Connecticut. Can be adapted to grades K-5 either as a resource or as a teaching unit. 85.07.03 The Crusty Fossils A scientific study of crabs is the focus of this unit. The biology of crabs, their habits and habitats are discussed throughout this unit. This unit is a resource for teachers needing background information on specific types of crabs. Recommended for grades K-2 as a teacher resource and for grades 3, 4 and 5 as a teaching unit. 85.07.05 Mathematics and You Using mathematics as a base, the study of bone growth and anatomy are the main issues of this unit. Teachers needing information regarding the skeletal system will find this unit to be an adequate resource. This unit is recommended for grades 3, 4, and 5. 85.07.06 Anatomy and Physiology of the Human Knee Joint The anatomy of the human knee is the specific topic covered in this unit. Although very specific, this unit provides a great deal of information relating to joints, cartilage and the like. The structure of the knee and actions related to the knee are also included. Recommended as a resource to teachers in grades 2 through 5. 
87.05.06 The Teaching of Biology and difference to a Special Education Seventh Grade Class Main topics in this unit are: 1) The Cell and its Function; 2) Heredity – Inheriting Traits; 3) Animals; 4) Human Development; and 5) Heredity and Environment. Students will be introduced to the scientific method and the use of the microscope. Parts of this unit could be adapted for elementary classrooms, 1-5. 90.06.02 You are a Unique and Special Person This unit is easily adaptable for middle and upper elementary students, grades 3-5. Lesson plans contain many hands-on activities of interest to elementary students. The unit covers atoms and molecules; the concepts of living vs. non-living; basic cell structure; cell division, etc. 98.07.01 Pandemic Pet Population: The Reproductive Responsibility of Pet Owners The unit discusses mammal, bird, fish, and reptile reproduction and how the lack of reproductive control can lead to poor pet health, overcrowding and eventually death. The unit implements various strategies, utilizing hands-on experiences, field trips, guest speakers, and technology. Lesson plans are detailed with exciting activities for grades K-5. Included are reproductive sheets for the classroom teacher. 98.07.02 The Population Explosion: Causes and Consequences The main emphasis of this unit is to examine factors about overpopulation. This unit addresses: (1) the definition of overpopulation, (2) the causes of rapid population growth, (3) the consequences of rapid population growth, and (4) actions and strategies that can be developed to solve problems caused by overpopulation. The unit contains excellent background information as well as lesson plans, teacher resources, student reading list, and a list of speakers and a bibliography. This unit is recommended for grades 5-8. 81.05.08 Energy and the City Person The unit is designed for urban students and follows three themes: 1) You and Energy; 2) Energy and the City; and 3) Energy and the Future. Along with a historical background, lesson plans and activities could be adapted for middle and upper elementary students, 3-5. 86.06.01 Coal as a Source of Energy This unit contains considerable material that could be adapted to teach elementary students about coal as a source of energy. Contains considerable background material. It is adaptable to grades 3-5. 87.06.07 The Science and Technology of Water The main objective of the unit is to relate science and math to students' daily lives. Although advanced in nature, the unit provides excellent background materials on the physical properties of water, the water cycle, and purification systems in New Haven. Parts of the unit could be adapted in a unit on water for middle/upper elementary students, 3-5, utilizing work sheets and experiments. 87.06.08 Let There Be Light The main emphasis of this unit is to provide many hands-on activities and experiences with light. Some of the themes covered in this unit relate to prisms and lenses, the nature of wave motion, the nature of light, the construction and functioning of the eye, and principles of light manipulating devices such as eyeglasses and telescopes. The unit could be adapted for upper elementary students, grade 5. 89.06.03 Children Actively Investigating Rocks and Minerals Although this unit has been written for 3-5 grade level students, it could easily be adapted for all elementary grade level students. This unit contains many interesting hands-on experiments. Experimenting with crystals, water, minerals, etc.
would be of interest to all elementary grade students. 89.06.04 Crystals: What Are They and What Holds Them Together Students participate with hands-on activities in making crystal creations. Background information is available for the teacher. Designed for grades 5-8, the unit is easily adaptable for elementary grade levels 1-4. 89.06.06 Crystals in the World Around Us In this unit, students experiment with crystals to find out about their particular properties. In the hands-on section, students take part in growing their own crystals from a solution. The unit could easily be adapted to include middle and upper elementary students, grades 3-5. 89.06.07 Crystals: More than Meets the Eye Parts of this unit could be adapted for upper elementary students, grade 5, and used as a supplement and guide to the study of crystals and minerals. The unit deals with the structure of crystals, constructing three dimensional paper models, and the actual growing of crystals. Minerals are also studied and discussed. 91.06.05 The Great Continental Drift Mystery This unit contains an abundance of information around the topic of changes in global climate. Parts of the unit could be adapted for upper elementary students, grade 5, especially where the emphasis is on both living and fossil species of plants and animals. 95.05.03 Freshwater Wetlands Designed for high school students, portions of this unit could be adapted to elementary science classes. It includes a definition of wetlands and brief reviews of wetland hydrology, biogeochemistry, adaptations of organisms to wetland environments, and the value of wetlands. Uses Connecticut wetlands to develop understanding. 95.05.04 Geology and Connecticut Though aimed specifically at grade 6 students, this unit contains material on Connecticut's geology that could be modified for fifth grade use. Material applies well to a study of Connecticut. 95.05.07 A Special Relationship: Connecticut and Its Settlers This unit examines the settlement of Connecticut in relationship to each region's physical environment. An interdisciplinary approach with focus on Connecticut history is found. It is suitable for upper elementary grades. 95.05.11 A Comparison Study of Water Habitats for Primary Age Children This unit compares freshwater habitats to saltwater coastal environments, through the study of the wetlands of East Rock Park and the coast of Lighthouse Point Park, both in New Haven. Designed for K-2, but could be adapted for higher elementary grades. 97.06.01 Global Environmental Coastal Changes: Cause and Effect, A "Hands-On " Approach, Primary Style Uses a hands-on approach to allow primary students to understand how certain weather conditions, pollution, and other physical factors affect our earth. Integrated approach. 97.06.02 The Connecticut Watershed and Its Impact on Water Quality in Long Island Sound Various activities allow children to examine the geological formation of Long Island Sound and its watershed systems along with topics related to pollution. Hands-on activities. Integrated approach. Designed for visually impaired. Could be adapted for regular grade 5 students. 97.06.03 Water, Weather, and the World Though designed for low functioning students, this unit that closely examines water and its properties, pollution, and conservation is appropriate for grades K-2. This unit has an integrated approach with hands-on activities. 97.06.05 Students' Response to Global Changes This unit examines natural phenomena occurring on Earth. 
Though it is aimed at middle school, upper elementary students could use much of the material. Interdisciplinary approach. It is adaptable to grades 3-5. 97.06.06 Our Ocean: How It Works Dealing with oceans in general and some specifics about Long Island Sound, this unit provides students with an understanding of how oceans work. It is a hands-on and interdisciplinary approach. It is suitable for grades 3-5. 97.06.07 The Ocean: A Watery World This unit provides young students with a basic understanding and appreciation of the world of water. Interdisciplinary approach. Activities for younger students in grades K-1. 81.05.01 An Electrical Consumer’s Survival Plan Primary objectives for this unit are: 1) Understanding of a kilowatt-hour; 2) Learning how electric consumption is measured; 3) Learning how to read and record meter readings; 4) To monitor daily electric consumption in the home; 5) Identify electric energy users, and; 6) Understand and calculate an electric bill. Students need a background in the theory of an atom in relation to the three states of matter. Could be adapted for upper elementary students, grade 5. 89.07.01 Teaching Electricity to Middle School Students This unit can be taught to upper elementary students, grade 5, as an introduction to electricity. It develops out of the vocabulary and experiences that children have had and uses analogies, demonstrations and experiments to teach concepts and the design of simple series and parallel circuits. 89.07.02 Changes in Lifestyle Due to Electricity Designed for middle school students, this unit could be adapted to include upper elementary students, grades 4-5. The unit includes personalities responsible for the discovery of electricity, a source of energy, conservation, and safety rules. 89.07.03 Electricity This unit is designed to give the students some background information on electricity through history and the application of electrical technologies. The lesson plans are geared to provide students with hands-on experiences through experiments, speakers, and field trips. It is easily adaptable for middle and upper elementary students, grades 4-5. 89.07.04 Operating Kitchen Equipment Although designed for the eighth grade student, all elementary grade level students would find this unit interesting. In addition, students learn what electricity is and how it works. The unit provides experiments, recipes, and games. 99.07.02 Introduction to Magnetism and Basic Electronics This unit is designed for K-1 students but can easily be adapted to include grades 2-5. The initial focus of the unit will be on magnetic properties and static electricity. Students will participate in a wide variety of interesting experiments to learn about magnetism and the properties of static electricity. Once the class has a firm handle on those concepts, the focus will then move to electronics. To culminate this study, students work with a simple tape recorder. First sounds are recorded, played, erased and recorded on a tape in a functioning tape player. Then using real tools, students have the opportunity to disassemble a nonfunctioning tape recorder. Once disassembled, children will continue exploration, experimentation and assessment of the tape player’s parts and mechanisms. 99.07.03 Modern Electronic Inventions: Changing the Way People Live Like real scientists, students keep a journal of the experiments and/or demonstrations they do. Students work with magnets, static electricity, build an electromagnet, make a battery, and construct a dynamo. 
Students are required to prepare a final project, which can be a demonstration, experiment, poster, diorama, etc. The unit is recommended for students in grades 2-5. 99.07.04 Technology at Home: An Increase in the Quality of Living Due to Electronic Inventions The unit is designed for middle grade students but can be adapted for grades 4-5. The unit explores electronic inventions that are directly related to the evolution of technology at home throughout the 20th century. The study looks at household gadgets that use electricity that students feel they can’t live without, such as the telephone, refrigerator, television, stove, microwave, stereo, tape recorder, radio, vacuum cleaner, washer and dryer. The unit also examines life before electricity. 80.05.05 The Energy Crisis This unit examines the causes and effects of the energy crisis. Some attention is given to possible solutions. It is suitable for grade 5. 80.05.10 Pollination Ecology in the Classroom Though written for older students, this unit, which introduces pupils to flowering plants and their pollinators, could easily be adapted to upper elementary grades 3-5. 81.05.03 Water, Promises and Problems The bulk of this curriculum unit discusses two central issues: the quantity and quality of our water supply. Within these two themes the unit touches on geographic problems in the United States, toxic waste, acid rain, and thermal pollution. The unit contains great background information for classroom teachers. The unit could be applied to science instruction at all grade levels, K-5. 81.05.04 A Tree is More Than a Street Name The main theme of this unit is to help students be aware of trees as a great American natural resource. It provides a mix of history, science and value concerns as related to the forest. The unit contains great resource material for the classroom teacher. Could be adapted for any grade level, K-5. 81.05.05 The Hazardous Waste Dilemma This unit presents an overall picture describing the characteristics of a hazardous waste site, its health problems, methods of dealing with wastes, and the role of environmental groups. There is a clear and concise use of classroom activities, as well as background information for the teacher. Could be adapted for grade levels 1-5. 81.05.07 Solar Energy The unit seeks to define solar energy, explain the advantages and disadvantages of solar energy, and explore the potential for careers in the solar energy field. The unit includes activities on solar energy, names of organizations and speakers for further information, and experiments to test the effectiveness of solar energy. A great unit, adaptable for upper elementary students, grades 4-5. 81.05.09 Energy Alternatives The purpose of this unit is to introduce all aspects of energy use in the United States with the objective of making students educated consumers and intelligent voters on energy issues. Along with a background history of energy use, the unit provides classroom activities for understanding energy and its use. A great unit that is adaptable for middle and upper elementary students, 3-5. 81.05.10 Creating Our Energy Future There are three main themes integrated into this unit. They are: 1) The natural environment and ecosystems concept; 2) Five important areas needed for understanding the energy issue, and 3) An exploration of different possible energy futures. The unit has great background information for the classroom teacher. Could be adapted for elementary grade levels, 1-5.
84.06.01 Geology of Connecticut Soil, Rocks and Minerals This unit presents an interesting history of Connecticut landforms, rocks, and minerals, and shows how these were formed and how they came about. Many hands-on experiences with geological materials are included with the unit. Parts could be adapted and used in the lower, middle, and upper elementary grade levels, 1-5. 84.06.02 The Geologic History of the Earth and Connecticut: Effects it had on our State The unit provides a discussion of geologic history while using this information to discuss the geologic history of Connecticut. Examples of topics are: volcanoes, glaciers, rocks, the Farmington Canal, etc. Although designed for middle and high school students, parts could be adapted for upper elementary grades 4-5. 84.06.04 The Marsh Land as a Changing Environment This unit exposes students to the marshlands surrounding New Haven. Areas such as Lighthouse Point Park and the mouth of New Haven Harbor, with Morris Cove forming the northern boundary and Long Island Sound forming the southern boundary, will be used to study salt water plants and animal life that inhabit these areas. Parts can be adapted for middle and upper elementary students, 3-5. Contains beautiful resource material about the marshlands. 84.06.06 The Geology of West River The main focus of this unit is on water in general by studying the hydrologic cycle, maps, and waterpower. The main study focuses on Connecticut’s Mill River Basin. The unit presents an interesting study of this area with hands-on experiences that could be adapted for upper elementary students, grades 4-5. 84.06.07 An Introduction to the Marine Environment and Geology of City Point This unit uses a hands-on approach to scientific research utilizing the City Point area as a classroom. Field trips and specimen collecting are important along with related lesson plans. Many parts would be of high interest to middle and upper elementary students, grades 3-5. 84.06.08 The Ground We Walk On This unit and its study are based on four sites surrounding New Haven’s harbor. They are: 1) Lighthouse Point – Erosion; 2) Forbes Bluff – Bedrock and the Rock Cycle; 3) Morris Cove – Coastal Processes, and 4) Return to Lighthouse – Glaciation. Activities in the unit that could be adapted for middle and upper elementary students, 4-5, include photography, collecting samples and on-site experiments. 84.06.11 Know Your Watershed This unit teaches students the importance of knowing their watershed area. It contains many suggestions for hands-on activities that can easily be adapted for middle and upper elementary students, grades 3-5. 85.07.09 Inland Wetlands The inland wetlands of Connecticut are the main focus of this teaching unit. Although the majority of the information presented in this unit is concentrated on specific Connecticut wetlands, the general information on wetlands is helpful to those not interested in such a narrow topic. This unit is a great resource for teachers in grades 2 through 5. 87.05.03 The Effects of Institutions on Human Behavior The purpose of the unit is to investigate the effects of institutions on human behavior. It explores various niches that are encountered as man exists in the ecosystem and looks at the effects of heredity and the environment on human behavior. There are absolutely beautiful activities for the classroom and suggested field trips adaptable for any grade level.
91.06.01 Weather, Climate and Environmental Change Recommended for grade 8, but could be adapted for upper elementary grades, 4-5. The unit introduces students to the seven basic weather elements. In addition, students study the three major climate regions. Lesson plans include hands-on activities for student participation. 91.06.04 Earth’s Changing Atmosphere The purpose of this unit is to investigate the Earth’s changing atmosphere. The unit contains lesson plans with laboratory exercises, resources for both teachers and students, and suggested field trips. This unit is adaptable of upper elementary students, grades 4-5. 95.05.04 Geology and Connecticut Though aimed specifically at grade 6, this unit contains some material that could be modified for fifth grade use. Applies to a study of Connecticut. 95.05.05 Saving Energy Makes Cents Unit focuses on energy conservation as it relates to a particular school. This unit has a hands-on and interdisciplinary approach. Can be modified for other schools and any elementary grade level. 95.05.07 A Special Relationship: Connecticut and Its Settlers This unit examines the settlement of Connecticut in relationship to each regions physical environment. It is interdisciplinary with a focus on Connecticut History. It is suitable for grades 4-5. 95.05.11 A Comparison Study of Water Habitats for Primary Age Children This unit compares freshwater habitats to saltwater coastal environments through the study of the wetlands of East Rock Park and the coast of Lighthouse Point Park. Interdisciplinary approach. Suitable for grades K-2. 96.02.01 Environmental Racism and the Urban School Child This unit explores how many minority families live in environmental areas hazardous to their health. Contains a variety of activities for early elementary grades. Health related. Suitable for grades K-5. 96.02.02 The Earth and Me: Forever Friends Through a study of the interdependence within an ecosystem, students will be motivated to play a small, yet important part, in solving environmental problems. Contains a variety of interesting activities. Designed for kindergarten students, unit could easily be adapted for other elementary grades. 96.02.05 Environmental Health Hazards and Children This unit introduces upper elementary students to the area of ecology and its relationship to them. Includes writing, researching, and hands-on experiments. Could be used in health class. Grades 3-5. 96.02.06 Food Pesticides and Their Risks to Children Designed for fourth and fifth grade students, this unit explores the use of chemical pesticides and their related risks. Could easily fit into curriculum of health class. 97.06.02 The Connecticut Watershed and Its Impact on Water Various activities allow children to examine the geological formation of Long Island Sound and its watershed systems, along with topics related to pollution. Hands-on activities. Integrated approach. Designed for visually impaired. It is suitable for Grade 5. 97.07.03 You Can Change the World After studying the physical environment around their school, kindergarten students alter the existing environment through cleanup efforts and organic community gardening. Interdisciplinary approach. Suitable for grades K-5. 97.07.04 The Greening of Mars: The Changes Necessary to Sustain Life on Mars After studying and becoming aware of environmental problems existing on Earth and their possible solution, students will develop plausible mechanisms to change Mars to a planet humans could inhabit. Interdisciplinary approach. 
Close relationship to astronomy. It is suitable for grades 4-5. 97.07.05 Lead Contamination in Our Environment This unit attempts to provide information, awareness, and activities relative to the seriousness of problems related to lead toxins in the water. It is adaptable to grade 5. 97.07.07 From the Farm to Your Table: Where Does Our Food Come From This unit helps students understand our dependence on the land for survival. Studies food supply, pesticides, and additives. Attempts to make students wise consumers. Interdisciplinary approach. It is suitable for grades 3-5. 97.07.09 Nutrition: It's in Your Hands This unit covers land, air, and water pollution and the movement of contaminants through the food chain. Attempts to foster healthy nutrition. Health related. It is suitable for grades 3-5. 97.07.10 The Environment Around Me This unit is an interdisciplinary approach to learning about the environment. Written with Connecticut Mastery Test in mind. It is adaptable for grades 4-5. 97.07.11 New Haven: Your Coastal Community Explores the New Haven ecosystem including Long Island Sound and its estuaries. Hands-on and cooperative research projects are included. Relates to a study of New Haven. Interdisciplinary approach. It is adaptable to grade 5. 99.06.03 Making Wise Environmental Decisions Written to familiarize middle grade students with environmental concerns with the modern world, this unit could be adapted to include fifth grade students. The unit is a survey course on solving ecological problems. It introduces students to the basic ecological terminology and explores the difficulties encountered when making decisions about the environment. The unit carefully examines the three main areas of pollution: land, air and water. The unit’s interdisciplinary approach presents case studies for students to make decisions concerning what to do about a specific problem and decide where the monies will come from and how much will be spent to correct the problem. 99.06.07 Are you Balanced with your Environment? This unit is designed to assist teachers in grades K-4 to develop an awareness of ecological principles and basic concepts of environmental science. The use of critical thinking skills is encouraged as students are guided to analyze problems and suggest solutions. The unit also provides suggestions for discussion questions to initiate class participation in the study of environmental issues with simple activities divided according to student’s grade level. The unit presents a great idea for making bark rubbings of different trees for first graders. 84.05.04 Teenage Diet Although planned for the teenage adolescent, students in all grade levels, K-5 can profit by the sound nutritional habits looked at in this unit. Fast and convenient food consumption, diets high in sugar, salt and fat, obesity, the practice of not eating breakfast are just a few of the issues addressed in the unit. 84.05.05 Eating Disorders and Adolescents Although the stories given are about older teenagers, the unit contains helpful resource material for teachers of upper elementary students, grades 4-5, in identifying students with eating disorders and probable causes. 85.08.08 Statistical Drug Abuse and Adolescents in the U.S.A. This unit looks at drug abuse in American adolescents. Various drugs used by adolescents and their effects are discussed. This unit is a good resource for health and social development issues in the fourth and fifth grade classroom. 
87.06.04 The Science and Technology of Food: The Food Unit This unit’s primary purpose is to help students become better food consumers by developing an understanding of the world food situation. Parts of the unit could be adapted for all elementary grade levels, K-5. The unit encompasses food production, as well as good nutrition. 91.05.03 A Sense of Wonder This unit focuses on helping students improve their social behavior by improving their self-control. It also stresses issues around good nutrition and hygiene. The unit is adaptable for middle/upper elementary grades, 3-5. 91.05.05 Adolescent Obesity This unit is adaptable for middle and upper elementary grades, 3-5. The first section deals with psychological factors influencing obesity. The second section addresses social and health factors. 96.02.03 Nicotine Addiction to Disease: Growing Up with the Tobacco Industry Unit attempts to empower young children with knowledge about tobacco and smoking-related illnesses. Integrated approach. It is suitable for grades 4-5. This unit is excellent for health classes. 87.05.05 Genetics and Heredity The unit is designed to provide science students with a basic knowledge of genetics and to encourage the investigation of genes and the transmission of characteristics to succeeding generations in humans. Parts of the unit could be extracted for middle/upper elementary students, particularly the section on the cell. Children would enjoy making slides showing cells from various materials from our environment. 90.06.04 Heredity and Environment Upper elementary grade students, grades 4-5, would profit by this unit with hands-on activities included in the lesson plans. The main purpose of the unit is to help students learn more about themselves. The unit includes a discussion of chromosomes and genes, DNA, the genetic code, heredity and environment, a study of identical twins, and some genetic disorders. 90.06.07 Heredity: Your Connection to the Past This seven-week unit can be adapted for upper elementary students, grade 5, in a science and mathematics course. The main discussion centers on purebred dominant traits, recessive traits, inheritance, generations, and any other traits which would lend themselves to problem solving using scientific methods in a mathematical process. Lesson plans contain an abundance of activities that can be adapted to other disciplines of the curriculum. 96.05.02 Basic Genetics in First and Second Grade This unit helps young students become aware of the role genetics may play in their lives. Uses an integrated approach. Could be adapted to most elementary grades. 96.05.04 I Wear My Genes Inside Out: The Genetic Characteristics of Animals This unit uses the study of animals to introduce the basic concepts of genetics to young elementary students. Integrated with math and science. Suitable for grades K-5. 96.05.07 Who Am I and Why? This unit integrates science-based activities with social development in order to develop an understanding of genetics. It is designed for third grade but is adaptable to most elementary classrooms. 96.05.09 My Family and Me: Our Similarities and Differences This unit, designed for special education students with severe delays, contains many activities that are appropriate for teaching elementary students about heredity. The unit features a hands-on approach. 96.05.10 Where Did That Curly Hair Come From? This unit uses an integrated approach involving language arts, reading, and social studies to teach elementary students about heredity.
Suitable for grades K-5. 80.05.04 A Creative Classroom: Model for a Sixth Grade Science Class Author presents a plan for developing a model science classroom. Some upper elementary classroom teachers might want to adapt some of the author’s suggestions. 82.04.04 Seascapes---Beginning Exploration The scientific and social relationship between man and the ocean is the focus of this unit. Oceanography and literature pertaining to the sea are integrated to provide the students with an educated background that could lead to a follow-up unit in ecology/environment or even health. Recommended as a resource for teachers in grades 3, 4, and 5. 87.06.02 Transportation Parts of this unit could be adapted for upper elementary students, grades 4-5. The unit ties in nicely with School to Career, introducing students to various job opportunities within the automotive industry. Parts of the unit require background knowledge of chemistry. 87.06.03 Space Shuttle Science Math skills are too advanced for elementary students. Interesting material related to space life aboard a space shuttle that would be of value to all elementary students, K-5. Contains an interesting "Shuttle Food and Beverage List." 87.06.06 How To Dye Cloth This unit gives an overall picture of the history, types and application of dyes. Parts could be adapted for elementary students, especially stenciling and tie-dyeing.
Sidewalk Math Carpets engage young children ages 3 to 7 in learning mathematical patterns by walking, hopping, jumping, and skipping through colorful designs on carpets that are similar to hopscotch patterns drawn on sidewalks. However, Sidewalk Math patterns are designed with special attention to the skills that build the mathematical numbersense that students will need to succeed in school.
- Counting Things = object counting
- Counting By = multiplication facts
- Counting to Order = ordering skills
- Counting On = addition skills
Check out Sidewalk Math videos on Lesley University’s website! Numbersense, the ability to count and to see numbers and shapes as patterns of numbers, is at the very foundation of mathematics. Sidewalk Math patterns were designed in the Creativity Commons of Lesley University by faculty and educators in Early Childhood Education, Mathematics, Art & Design and Creative Arts in Learning. Each Sidewalk Math Carpet provides a unique mathematical pattern with counting activities that are developmentally appropriate, beautifully designed and mathematically meaningful. Research demonstrates the impact of Sidewalk Math Carpets on growth in numbersense in young children ages 3 to 7 years. To guide teachers and parents in the use of Sidewalk Math patterns, Lesley University faculty created The Footbook: Steps to Developing Numbersense in Young Children. The Footbook provides information on the importance of math literacy, ways to engage young children in counting activities, and additional resources (books, songs, objects, and chalk activities) to help teachers and parents count with children. The Creativity Commons faculty also provide professional development in schools and districts to support early childhood educators in developing guided activities that align with their curriculum and meet the needs of all learners, particularly children with learning differences or those who are English language learners. For more information, contact Creativity Commons Director Martha McKenna at email@example.com. Young children learn through play. Research shows that learning is more likely to stick when children are intrigued and fully engaged — body, mind, and imagination. Sidewalk Math Carpets meet these criteria. Sidewalk Math Carpets support learning:
- Patterns encourage children to practice counting in different ways: forward and backward, counting by, and counting on, building the foundations of number sense and operations.
- Some patterns encourage exploration of color and shape, encouraging not only counting of shapes, sides, and corners, but experimentation with different possibilities for making shapes and patterns, engaging children in thinking flexibly and mastering ideas such as equivalence and regrouping.
- Some patterns help children link size, quantity, and number.
- The carpets encourage children to practice—just because it’s fun—until key sequences and patterns become automatic.
- As play mats, the carpets invite games and conversations, building language and supporting social development and creativity.
Dr.
Betty Bardige, co-author of Building Literacy with Love
Uses of Sidewalk Math Carpets in the early childhood classroom include:
- Creating an indoor space for active play in the classroom
- Engaging children in demonstrating their counting skills and inviting others to join their play
- Supporting children in learning each other’s languages as they count in different ways
- Building gross motor skills as children jump, tiptoe, walk backwards, and hop on patterns
- Building fine motor skills as children draw, color, and cut out shapes to make their own patterns
- Sparking creativity with songs, games, creating art, pretending, and developing new mathematical patterns
- Encouraging students to take initiative for their own learning
To what extent was the League of Nations a success?
The League of Nations (La Société des Nations in French) was founded in 1920 as a result of the Paris Peace Conference that ended the First World War. It was the brainchild of Thomas Woodrow Wilson, president of the United States during World War I; the idea was conceived during the Great War itself, and the new organization aimed to stop war by getting nations to work together. It was the first attempt at something like the United Nations that we have now, and it was meant to keep the peace in the world. At its greatest extent, from 28 September 1934 to 23 February 1935, it had 58 members.
The League, however, had no means of enforcing its decisions other than the effect of world opinion; if a power chose to be defiant, there was nothing effective that the League could do. For example, the League was not able to impose economic sanctions on Japan, because the USA, which was not in the League, would continue trading with Japan, and none of the League's members wanted to stop trading because it would worsen their own economies.
The League's spectacular failure to keep the peace after the First World War has been attributed to many different factors over the years, but was it the League itself and its basis of collective security that was flawed? To determine whether the League of Nations was a success, we need to know what it aimed to achieve and to what extent those aims were achieved. Typical questions to consider include: Did the League of Nations fail, or was it successful? Why did the League of Nations fail in the 1920s? To what extent was the failure of the League of Nations in promoting collective security mainly due to the absence of the USA as a member? To what extent is it true to say that the League of Nations failed (a) because of its idealistic origins, or (b) in spite of its idealistic origins?
The League of Nations failed for several reasons. America refused to join it, and America not joining was a big blow to the League, since America was the world's most powerful nation. Germany was not allowed to join the League in 1919.
While the League of Nations could celebrate its successes, it had every reason to examine its failures and where it went wrong. In 1914 war broke out in Europe; the war ended in 1918, Germany was solely blamed, and the end of the war was sealed with the Treaty of Versailles. Among the League's aims was the peaceful settlement of disputes: a Permanent Court of International Justice was created to settle disputes between countries. How successful was the League in the 1920s? On the one hand, it is clear that the League succeeded in encouraging cooperation, as can be seen in the work of its agencies. On the other hand, the failure of the League is often linked to the outbreak of the Second World War in 1939, which had many causes. Useful focus points for studying the League include: How successful was the League in the 1920s? How far did weaknesses in the League's organization make failure inevitable? How far did the Depression make the work of the League more difficult?
Cucumber (Cucumis sativus) is a tender, warm-season vegetable that thrives when temperatures are between 65 and 75 degrees Fahrenheit. Cucumbers require substantial space but can be grown in small gardens because they are vining plants and you can train them to grow vertically. These versatile, easy-to-grow vegetables can sometimes acquire disease through aphid infestation, which can cause your cucumbers to become mottled and stunted. Aphids are not difficult to control if you spot them early enough, so monitor your cucumber plants for signs of these tiny insects. Aphids are tiny insects with long, slender mouth parts used to pierce stems and leaves of plants to suck out the plant’s fluids. Aphids range in size from 1/16 to 1/8 inch long. These insects can be almost any color, including green, black, brown, red or pink. Aphids have two tubes near the end of the abdomen, and slender antennae that protrude from the head. They can be winged or wingless. Aphids tend to collect along the side of your garden that is most exposed to wind, so check these areas carefully. Also check the undersides of leaves. Indirect evidence of aphids includes the presence of natural enemies such as ladybugs and lacewing flies. Ants feed on the excreted sap generated by aphids, so if you see ants near your cucumber plants it could indicate an aphid infestation. While small populations of aphids will not directly damage your cucumbers, aphids transmit several forms of mosaic virus that can destroy the plants. If mosaic virus is present, your cucumbers will be mottled with yellow or light green spots, the leaves will curl, the vines will weaken, and the plants will be stunted. Cucumbers will also be small, misshapen and develop knobs and warts. They will not be edible. Additionally, transmission does not require a large aphid population, so identifying the disease quickly is key to controlling it. A forceful spray of water will knock off any aphids that are present, but be careful of further damage to vines that may be weakened by mosaic virus. You can also apply neem oil or insecticidal soap to kill the aphids. Spray directly on the aphids, making sure to spray to the undersides of leaves. These methods only remove the aphids present and you may have to repeat them over the course of several days. Not all insecticides are safe to use on cucumbers or other food plants. Diazinon or carbaryl are safe for cucumber plants, but use care on young plants, which are tender. Remove and destroy diseased plants as soon as mosaic virus appears. After handling diseased plants, wash your hands with detergent and water. To prevent aphids and the viruses they transmit, make sure you purchase high-quality seeds. Do not plant cucumbers near woods or weedy areas. Practice diligent weed control, as aphids like to overwinter in weeds. If you have a large enough garden, plant a row of corn on the windward side of the cucumbers. Introduce natural enemies of aphids in your garden. Certain species of wasps, ladybugs and lacewings feed on aphids. Never use more nitrogen fertilizer than necessary; high levels of nitrogen fertilizer promote aphid reproduction. - Ohio State University Extension Fact Sheet: Growing Cucumbers in the Home Garden - United States Department of Agriculture: Plants Profile: Cucumis sativus L. 
- University of California IPM Online: Pests in Gardens and Landscapes: Aphids
- IPM North Carolina State University: Know and Control Cucumber Pests
- North Dakota State University Ag Department: Disease Management in Homegrown Cucumbers, Melons and Squash
This paper contributes longitudinal research evidence on the impact of structural inequalities on children’s development within households and communities, the ways access to health, education and other key services may reduce or amplify inequalities, and the ways that children’s developmental trajectories diverge from early in life through to early adulthood. Our starting point is a series of key questions about how inequalities develop through the life-course:
- What are the main features of children’s physical, cognitive, and psychosocial developmental trajectories, and how do these domains interact in shaping children’s outcomes and well-being?
- What are the most significant factors that shape these trajectories? By extension, what might support better child development, promote resilience or help children who have fallen behind?
- What role does the timing of events, influences and institutions play in shaping children’s outcomes?
A few initial examples highlight the ways Young Lives is contributing evidence on developing inequalities:
- By the age of 8, almost all Young Lives children in Ethiopia from the poorest third of households had some level of difficulty in reading in their mother tongue (94%), compared with just under half of those children from the least poor third of households (45%).
- By the age of 12, the stunting rate of the poorest third of children in the Peru sample was four times greater than the stunting rate of the least poor children (37% compared with 9%).
- By the age of 15, the school enrolment rate of the least poor third of Young Lives children in Vietnam was 40% higher than that of the poorest third of children (89% compared with 62%).
- By the age of 19, young women in the rural sample in Andhra Pradesh and Telangana were more than twice as likely to have had a child as young women in the urban sample (24% compared with 11%).
These examples draw attention to emerging differences between groups of children related to their ethnicity, poverty, gender, living conditions, schooling and other circumstances. Importantly, they also draw attention to overall levels of development that are lower than expected norms for whole populations. Research typically identifies differences at specific age points, notably via cross-sectional designs. The advantage of a longitudinal study is in revealing a complex, dynamic, multi-dimensional story spanning the life-course. Life-course perspectives contribute to understanding more about the history and timing of influences on children’s experiences, opportunities and outcomes, including which events and interventions have the greatest impact on children’s development and well-being, all of which is relevant to designing policies and planning interventions. The paper presents findings in three areas which are core to Young Lives research:
- Tracing children’s developmental trajectories, examining physical, cognitive and psychosocial development, as well as the links between these domains.
- Examining the changing household contexts in which children are growing up, which shape and filter children’s developmental trajectories.
- Tracing how children transition through school and their engagement with wider social processes as they move through later childhood.
Scientists have discovered a bacterium that could reduce the use of fertiliser in sugarcane production and improve yield. Sugar is an important commodity around the world and sugarcane accounts for about 80% of production. The price of sugar has increased at a rate considerably above inflation over the last 30 years. This is not least due to the rising cost of fertilisers, which is partly driven by increased global demand, and linked to the degradation of soil quality over decades of agricultural use. Of course, with increasing pressure on water, energy and other resources, there are multiple other reasons to reduce the use of synthetic chemicals in agriculture wherever possible. This research, published today (19 December) in SfAM's journal, Microbial Biotechnology, describes how scientists searched the roots of sugar cane and found a new bacterium, Burkholderia australis, that promotes plant growth through a process called nitrogen fixation. Bacteria are widely used in sugar cane production, as well as with other crops, where they help to break down organic matter in the soil to make vital nutrients available to the growing plants or turn nitrogen from the air into nitrogen compounds that are essential for growth (so-called biological nitrogen fixation). The results can be very variable, which is unsurprising given the complexity of biological processes in and around the plant root. This variability means that the success of bacterial fertilisers might depend on developing tailor-made versions for different crop cultivars and environments. Lead researcher Dr Chanyarat Paungfoo-Lonhienne, from The University of Queensland, Australia, said: "We took a new approach and went looking for bacteria that were present in large numbers around the roots of thriving sugar cane plants. While two of the most abundant bacteria did not have noticeable effects on plant growth, Burkholderia australis was doing quite well in competition with other soil bacteria in the environment, and turned out to be particularly good for the plants." The team tested the bacteria, checking that they were happy living amongst the roots of growing sugarcane seedlings, and sequencing the genome to confirm that they had the genetic ability to turn nitrogen into plant food. Paungfoo-Lonhienne and colleagues are also looking for bacteria that break down waste products from sugar cane processing, or livestock manures, to provide better natural fertiliser for next-generation crop production. They hope to conduct field tests with a view to assisting the development of commercial products that will be used to improve the health and productivity of sugarcane crops, whilst reducing the need for synthetic fertilisers.
Definition: There are several Theories of Motivation that have been developed to explain the concept of “Motivation”. Motivation is a drive that forces an individual to work in a certain way. It is the energy that pushes us to work hard to accomplish our goals, even if the conditions are not going our way. With the establishment of human organizations, people tried to find out what motivates an employee in the organization the most. This gave birth to several content theories and process theories of motivation. The content theories deal with “what” motivates people, whereas the process theories deal with “how” motivation occurs. Thus, theories of motivation can be broadly classified as:
Content Theories: The content theories find the answer to what motivates an individual and are concerned with individual needs and wants. The following theorists have given their theories of motivation from the content perspective:
- Maslow’s Need Hierarchy
- Herzberg’s Motivation-Hygiene Theory
- McClelland’s Needs Theory
- Alderfer’s ERG Theory
Process Theories: The process theories deal with “how” motivation occurs, i.e. the process of motivation, and the following theories were given in this context:
- Vroom’s Expectancy Theory
- Adam’s Equity Theory
- Reinforcement Theory
- Carrot and Stick Approach to Motivation
Thus, these theories explain how an individual gets motivated to perform a task and what factors contribute towards that motivation.
Hospitals can efficiently protect public health by lessening the quantity and toxicity of the wastes they generate, and also by employing a variety of ecologically sound waste management and disposal alternatives. As part of a healthcare program, they must focus not only on treating the patients inside the hospital, but also on protecting the people outside it from harmful waste materials. All over the world, health care waste management is underfunded and inadequately executed. The combination of infectious and other hazardous properties of medical waste represents a significant environmental and public health threat. It is indeed frightening to think that chemicals and other toxic substances may reach our neighborhoods. A recent literature review concluded that over half the world’s population is in danger from illness caused by healthcare waste. It also found that many inadequate waste treatment methods violate fundamental human rights. There is no international convention that directly addresses medical waste management, so classification systems vary from country to country. Nevertheless, waste is frequently categorized based on the risk it carries. The vast majority of medical waste (around 75-85%) is comparable to normal municipal waste and is considered low risk, unless it is burnt. The rest consists of more harmful forms of medical waste, including infectious and sharps waste, chemical and radioactive waste, and hospital wastewater. Burning medical waste products creates numerous hazardous fumes and compounds, such as hydrochloric acid, dioxins and furans, as well as the toxic metals lead, cadmium, and mercury. The disposal of biodegradable waste produces greenhouse gas pollutants, including methane, which can be twenty-one times stronger than carbon dioxide. The government, as well as international organizations, must take a strong stand in managing hospital waste materials, in order to improve the quality of healthcare and avoid the possible spread of disease in the community.
Birds Evolved from Maniraptoran Dinosaurs: Study Researchers from McGill University say that birds evolved from a group of small, meat-eating theropod dinosaurs called maniraptorans that existed about 150 million years ago. Many maniraptorans were bird-like, as they had small body sizes along with hollow bones, feathers, and high metabolic rates, according to the findings of the researchers. The research was led by Professor Hans Larsson and a former graduate student, Alexander Dececchi, from McGill University. The researchers examined fossil data from the period marking the origin of birds. In a study published in the September issue of Evolution, the researchers write that limb lengths showed a comparatively constant scaling relationship to body size throughout the history of carnivorous dinosaurs. "Our findings suggest that birds underwent an abrupt change in their developmental mechanisms, such that their forelimbs and hind limbs became subject to different length controls," Larsson, Canada Research Chair in Macroevolution at McGill's Redpath Museum, said in a press release. "This decoupling may be fundamental to the success of birds, the most diverse class of land vertebrates on Earth today," Larsson added. The researchers also found a 5,000-fold difference in mass between Tyrannosaurus rex and the smallest feathered theropods from China. This decoupling of limb scaling occurred at the origin of birds, when both the forelimbs and hind limbs underwent a dramatic separation from body size. This transformation may have been crucial in allowing early birds to evolve flight, and then to exploit the forest cover, the authors conclude. "The origin of birds and powered flight is a classic major evolutionary transition," said Dececchi, now a postdoctoral researcher at the University of South Dakota. "Our findings suggest that the limb lengths of birds had to be dissociated from general body size before they could radiate so successfully. It may be that this fact is what allowed them to become more than just another lineage of maniraptorans and led them to expand to the wide range of limb shapes and sizes present in today's birds." The elongation of the forelimbs made them long enough to serve as an airfoil, allowing for the evolution of powered flight. Flight control became more efficient in early birds after the hind limbs shrank. "This work, coupled with our previous findings that the ancestors of birds were not tree dwellers, does much to illuminate the ecology of bird antecedents," said Dr. Dececchi. "Knowing where birds came from, and how they got to where they are now, is crucial for understanding how the modern world came to look the way it is." A group of flying reptiles called pterosaurs existed alongside the early birds, dominating the skies and competing with them for food. The scientists believe that shorter legs would have helped the birds fly better by reducing drag during flight, and would also have played a crucial role in their survival. Modern birds tuck their legs in as they fly, yet still use them for moving about and perching.
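For readers unfamiliar with the term, the "scaling relationship" the study describes is the kind usually written as an allometric power law; the notation below is a generic illustration, not a formula or exponent reported in the paper:

L = a M^b, equivalently log L = log a + b log M

where L is limb length, M is body mass, and b is the scaling exponent. A "comparatively constant scaling relationship" means that b (and roughly a) stayed similar across non-avian carnivorous dinosaurs, while the decoupling at the origin of birds corresponds to forelimb and hind-limb lengths each shifting away from this shared relationship with body size.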
Liquid Water in the Martian North? Maybe. Perchlorate. Never heard of it? Join the club. But NASA’s Phoenix spacecraft has found it in the soil in the icy northern plains of Mars. And now that it’s been found, scientists are scrambling to explain how it got there, and what, if anything, its presence means about the habitability of the martian north. Phoenix didn’t go to Mars to find perchlorate. It went looking for evidence of liquid water. From orbit, NASA’s Mars Odyssey in 2002 discovered water ice in the martian north, lying just inches beneath the surface. Very cold, very hard ice. Far too cold to support life. But Mars’s polar regions aren’t always so cold. The angle at which Mars tilts changes over time, and every hundred thousand years or so the planet leans so far over that its north and south poles take turns facing the sun as the planet travels through its orbit. When this happens, the polar regions get increased sunlight, and some of the subsurface ice may melt, and leave behind telltale mineral signs in the martian soil. Those signs are what Phoenix is looking for. NASA’s MER rovers have both found evidence, at sites near the martian equator, of rocks that were altered by the action of liquid water. But most scientists agree that those alterations occurred quite early in Mars’s history, perhaps as long ago as 4 billion years. In the northern plains, where subsurface ice is prevalent, liquid water may have been around more recently. As recently as the last time Mars wobbled over onto its side. Has Phoenix found evidence of liquid water? The jury is still out. But it has found perchlorates. Perchlorate is a chemical compound, a negatively charged ion, that contains a single chlorine atom and four oxygen atoms. It combines with potassium, magnesium, or any of a number of other elements, to form perchlorate salts, or simply, perchlorates. Perchlorates are incredibly soluble. That’s why, on Earth, it’s rare to find large natural deposits of them. Such deposits can exist only in very arid environments, such as Chile’s Atacama Desert. Water no doubt played a role in concentrating those deposits initially, but even a little bit of rain will cause perchlorates to dissolve and wash away. That’s why they’re more commonly found in rivers and lakes. So where there is perchlorate, there is a water story. On Earth. On Mars, it turns out, perchlorates don’t necessarily imply water. In nature perchlorates form photochemically in the atmosphere, and then settle randomly on a planetary surface. No water is involved in their creation. So merely finding perchlorates on Mars doesn’t say anything one way or another about liquid water. Finding a concentration of perchlorates would argue that liquid water had been involved. "If we find a deposit of perchlorate, one can speculate that water had melted at some point and had collected it into an accumulation," says Richard Quinn, a Phoenix researcher with the SETI Institute and NASA Ames Research Center. But Phoenix hasn’t yet found a concentrated deposit of perchlorates. Alternatively, if Phoenix found some sort of perchlorate gradient – say it saw only a small trace of perchlorate in a sample from the surface, but it saw a larger quantity in a second sample from a few inches below the first, at the boundary between the soil and the ice – one could be fairly certain that liquid water was responsible. But Phoenix hasn’t found a gradient, either. Phoenix has two different instruments that can detect perchlorates, but they go about it in different ways. 
The spacecraft’s Wet Chemistry Lab (WCL) analyzes a soil sample by putting it in a small beaker of water, and then looking to see what dissolves. The beaker’s walls contain some two dozen electrochemical sensors, each of which is sensitive to the presence of a particular ion, perchlorate being one of them. Phoenix’s TEGA (Thermal and Evolved Gas Analyzer) uses a different approach. It heats the sample, in stages, and then "sniffs" at the fumes that burn off at different temperatures. TEGA can’t detect perchlorate directly; instead it records a release of oxygen when the perchlorate breaks down. The temperature at which the oxygen is detected gives a clue to which perchlorates were present in the original sample. Some (but not all) perchlorates also release chlorine gas when heated. WCL has detected perchlorate unambiguously in both of the samples it has tested. The TEGA results are a bit fuzzier. In its first sample, which came from a location several feet away from the WCL samples, TEGA saw a release of oxygen consistent with the presence of perchlorate. But at that point – it was early in the mission, WCL hadn’t found perchlorates yet, and no one was expecting to see them – TEGA didn’t look for a release of chlorine. When TEGA analyzed its second sample, which came from roughly the same location as WCL’s second sample, TEGA was programmed to look primarily for evidence of organic material. As a result, says William Boynton, the science lead for TEGA, "We didn’t see the oxygen." But, he adds, "We also heated the sample differently…. The oxygen-bearing compound, presumably perchlorate, might have been there, but we might have destroyed it when we were looking for organic compounds." TEGA is now examining a third sample, taken from the surface very near the source of the first WCL sample. This time around, Boynton says, "We’re not going to be looking for the organic compounds; we’re going to use the same heating plan that we did on the first sample." And "in addition to looking for oxygen, we’re going to look for chlorine." Even if TEGA confirms the presence of perchlorates, however, that will only be an early step in understanding the role that water has played in the martian arctic. Additional samples will need to be analyzed, both by WCL and by TEGA, to determine whether the perchlorates have simply been blown in by the martian wind, or whether they have been moved around and concentrated by water. Researchers are actively debating where those samples should come from. "We’ve really got to sit down and think about what are our last two WCL samples going to be, and can we find any gradient," Quinn says. "That’s going to be key to really saying something about water."
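To make the TEGA measurement concrete: perchlorate salts release oxygen when they decompose at high temperature. The article does not say which perchlorate salt Phoenix detected, so the salt below is only an illustrative, textbook example, not the Phoenix team's result:

KClO4 → KCl + 2 O2 (on heating to several hundred degrees Celsius)

The temperature at which this kind of oxygen release shows up in the evolved gas is what lets TEGA infer which perchlorate salts might have been present in the original sample.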
If you spend time in my classroom, you will see opportunities for students to express their creativity every day in a variety of ways. In my classroom, we utilize processing activities to complement the input of content. These activities incorporate students’ creativity in ways that allow them to color and draw as well as create and design their own experiments. These creative exercises provide the students with a different medium by which to learn the same material. By allowing students to add their creativeness to a lesson, the teacher allows them to form a direct connection with the material. When students become personally vested in an assignment or a topic, they become more successful in their ability to retain that information for the long term. In these same activities, you may see children challenged to think creatively as well as solve problems. For example, I posed a scientific question to students and asked them to design an experiment that would answer that question using the scientific method. Processing activities may be used not only as a means to allow students to be creative, but also to let them tap into their creativity in a manner that allows them to gain further insight into the content at hand. No activity should be performed in the classroom without ensuring that students will somehow be able to apply that knowledge toward something they’ve learned. Each activity should have a purpose that allows students to gain more insight into the material. Communication, discussion and collaboration most often occur within group activities or labs within the classroom. Students are often asked to collaborate in activities or labs designed to provide them the opportunity to learn the material in a new manner. Within these activities or labs, students may discuss with their classmates the methods of the experiment or the results themselves. They may also have the opportunity to make clarifications with their classmates on the content or the purpose of the exercise. As a teacher, I could probably do more to teach information literacy as well as media literacy to my students. Being able to think critically about the information we receive is a critical tool that will be continually utilized throughout adulthood. I could provide my students papers on a certain biological issue in society and ask them to analyze the varying points of view on that particular issue. They can do this through a variety of sources: newspaper articles, interviews online, etc. Regardless of how this is executed, it is imperative that students be able to think critically and develop the tools to come to conclusions on their own. I could also allow my students more room to utilize technology to research or communicate information. I haven’t implemented any classroom projects or research assignments where students would be required to use technology to complete the assignment. Any technology the students use to access information on a topic, they use at their own leisure or desire. I try to give my students autonomy whenever possible. I believe that when students feel they can direct themselves or choose how they would like to complete an assignment, they have a greater desire to complete that assignment to the best of their ability. On some level, I always strive to provide this opportunity to my students. Some ways I do this are to allow students to create their own analogies for how the cell works, to illustrate a given topic however they wish, etc.
In this manner and many others, students become leaders of their own learning. Students also take ownership of their learning in group settings where they can communicate and work with their classmates in order to achieve a specific result. Interacting with their classmates not only helps students develop a sense of community within the classroom, but also lets them experience what it is like to communicate effectively, work efficiently and interact with others as part of a team. These are all skills that students will continue to use throughout the rest of their lives regardless of which field they enter.
Homo floresiensis, an ancient species of human nicknamed ‘hobbits’, found on the island of Flores in Indonesia in 2003, has had previous theories about its origin ruled out by Australian scientists in a new study published in the Journal of Human Evolution. The species, which stood about 1 metre tall and weighed about 25 kilograms, was known to live in the area as recently as 54,000 years ago. A major theory about their origin held that they descended from the human ancestor Homo erectus, shrinking in stature over hundreds of generations. Another theory was that it wasn’t a new species at all but an early human ancestor with some kind of genetic disorder. But the new study from the Australian National University, which analysed characteristics of the hobbit skeletons, found they were more likely descended from hominids in Africa and could be far older than predicted. Professor Colin Groves, who worked on the study, said the species likely evolved from the older Homo habilis. “We think it very unlikely that it was descended from Homo erectus, and in fact its sister species, that is the one it is most closely related to, is Homo habilis, which lived in Africa about 1.5 to 2 million years ago.” The study found several points of difference between Homo floresiensis skeletons and those of Homo erectus. Study leader Dr Debbie Argue said that the structure of the Homo floresiensis jaw was inconsistent with that of Homo erectus. “We looked at whether Homo floresiensis could be descended from Homo erectus,” she said. “Logically, it would be hard to understand how you could have that regression; why would the jaw of Homo erectus evolve back to the primitive condition we see in Homo floresiensis?” Instead, Homo floresiensis would have emerged much earlier, over 1.75 million years ago. It is likely that Homo floresiensis was a sister species of Homo habilis, says Dr Argue. The two species would have had an ancestor in common. Homo floresiensis would have evolved in Africa and later migrated. Another possibility is that the common ancestor of the two is the one to have left Africa for somewhere else, where it then underwent its evolution. Dr Argue and her team studied 133 different characteristics of the Homo floresiensis skull, jaw, teeth, shoulders, legs and arms, and compared them to all other known hominid species. They found that it is a long-surviving cousin of Homo habilis, an early human ancestor with roots in Africa. None of their tests yielded evidence to support the theory that Homo floresiensis evolved from Homo erectus. Dr Argue says one of the most interesting things about the species is that it lived until about 54,000 years ago, which is very recent, evolutionarily speaking. “That anything so archaic looking could have been found to have lived so comparatively recently is one of the things that was so unexpected about this species,” said Dr Argue. One of those archaic-looking features was short legs, which made the arms appear long. “Not as long as, say, a chimpanzee, but way outside the range of modern humans,” said Dr Argue. They also had long feet in comparison to their legs, again much longer than the range seen in modern humans. Homo floresiensis also had shoulders and a face that shrugged forward, said Dr Argue, a mound of bone in the forehead area that extended around the outside of the eye area, and no chin.
“Instead, the jaw slopes backwards, and inside it has a shelf of bone that is below the incisors, compared to our jaw that is vertical inside behind the incisors,” said Dr Argue. Overall, Australian National University researchers said they were 99 per cent sure the species was not descended from Homo erectus, and could totally discount the theory that hobbits were malformed modern humans.
How does gravity work in space? Escape velocity is the speed that an object needs to be traveling to break free of a planet or moon's gravity well and leave it without further propulsion. For example, a spacecraft leaving the surface of Earth needs to be going 7 miles per second, or nearly 25,000 miles per hour, to leave without falling back to the surface or falling into orbit.
|A Delta II rocket blasting off. A large amount of energy is needed to achieve escape velocity. Photo from Jet Propulsion Laboratory's Planetary Missions & Instruments image gallery: http://www-b.jpl.nasa.gov/pictures/browse/pmi.html|
Since escape velocity depends on the mass of the planet or moon that a spacecraft is blasting off of, a spacecraft leaving the moon's surface could go slower than one blasting off of the Earth, because the moon has less gravity than the Earth. On the other hand, the escape velocity for Jupiter would be many times that of Earth's because Jupiter is so huge and has so much gravity.
|The original page included a table of escape velocities, in kilometers per second and miles per hour, for several bodies, including Ceres (the largest asteroid in the asteroid belt) and Sirius B (a white dwarf star); the numerical values did not survive extraction.|
One reason that manned missions to other planets are difficult to plan is that a ship would have to take enough fuel into space to blast off of the other planet when the astronauts wanted to go home. The weight of the fuel would make the spaceship so heavy it would be hard to blast it off of Earth!
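Since the article quotes escape speeds without showing where they come from, here is a minimal sketch of the underlying formula, v_esc = sqrt(2GM/r). The masses and radii below are approximate published values, and the function name is only illustrative.

```c
#include <math.h>
#include <stdio.h>

#define G 6.674e-11  /* gravitational constant, m^3 kg^-1 s^-2 */

/* Escape velocity in m/s for a body of mass (kg) and radius (m). */
static double escape_velocity(double mass_kg, double radius_m)
{
    return sqrt(2.0 * G * mass_kg / radius_m);
}

int main(void)
{
    /* Approximate published masses and radii. */
    printf("Earth:   %.1f km/s\n", escape_velocity(5.972e24, 6.371e6) / 1000.0);  /* ~11.2 */
    printf("Moon:    %.1f km/s\n", escape_velocity(7.342e22, 1.737e6) / 1000.0);  /* ~2.4  */
    printf("Jupiter: %.1f km/s\n", escape_velocity(1.898e27, 7.149e7) / 1000.0);  /* ~59.5 */
    return 0;
}
```

Plugging in the Moon's or Jupiter's numbers reproduces the comparison made above: a smaller, lighter body needs a much lower speed, and a giant planet a much higher one.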
During WWII, development of nuclear weapons was paramount for many of the world’s top physicists. Each one of these nuclear weapons required a core of plutonium that measured around 3.5 inches in diameter. Two cores were used in the nuclear bombing of Japan to stop WWII, but there was a third core ready to be used when needed. When the war ended, the core became the main testing subject for physicists as they continued to improve the United States’ nuclear arsenal. This third core was a 14-pound subcritical mass of plutonium that measured 3.5 inches in diameter. It was also responsible for the direct deaths of two physicists and for many more who died years later from cancer, which earned the mass its nickname, the “demon core.”
The demon core was designed with a small safety margin, only about 5 percent. This was to ensure that it went off in the event of its use. The 14-lb radioactive sphere consisted of two plutonium-gallium hemispheres and a center ring designed to keep neutron flux from jetting out of the core during the implosion. This design maximized the destructive power of the bomb core.
On August 21, 1945, the first incident occurred. Physicist Harry Daghlian was performing experiments on the neutron reflectors around the core. He was working alone, with only a security guard standing watch about 12 feet away. While moving protective tungsten carbide bricks around the core assembly, Daghlian accidentally dropped one onto the core and, due to the low safety factor, the core quickly slipped into supercriticality. Daghlian quickly moved the brick off of the assembly, but it was too late. He received a fatal dose of radiation and died 25 days later from radiation poisoning. The security guard survived but died 33 years later from what was likely radiation-induced leukemia.
After this incident, testing on the core persisted. On May 21, 1946, the lead physicist for the project, Louis Slotin, and seven other personnel were in the Los Alamos laboratory conducting experiments on the demon core to determine its critical point. Louis Slotin was slated to leave Los Alamos for other work, so he had begun training physicist Alvin C. Graves to take his post. Slotin was preparing Graves to use the core in the Operation Crossroads nuclear tests scheduled a month later at Bikini Atoll.
Graves needed to know how to place two half-spheres of beryllium, acting as neutron reflectors, around the core. This involved manually lowering the top section onto the core using a small thumb hole. If the two neutron reflectors fell into the wrong position, allowing them to close completely, instantaneous formation of a critical mass could occur, resulting in a rapid power excursion and the release of lethal doses of radiation. This procedure was tedious enough as it was, but making it worse, Slotin had developed an unapproved protocol for how the process should work. In his process, the physicist would hold the top hemisphere of the neutron reflectors with the thumb of one hand while the other hand held a small screwdriver between the halves to keep them from coming together. On the day of the accident, Slotin’s screwdriver slipped ever so slightly while lowering the reflector, and a sudden flash of blue light and heat swept across his entire body. The core became instantaneously supercritical and everyone in the room was hit with an intense burst of neutron radiation lasting just half a second.
Luckily, Slotin acted fast and flicked the top reflector onto the floor, keeping the reaction from continuing. His positioning also shielded others in the room from more lethal doses of radiation. He received a lethal dose of 1,000 rad of neutron radiation and 114 rad of gamma radiation in under one second. He died 9 days later from radiation poisoning. Graves was watching the process over Slotin’s shoulder and was luckily partially shielded by Slotin’s body. He was hospitalized for several weeks but survived the incident. He later died of a heart attack; it is unclear whether the heart attack that killed him was a result of radiation or simply genetics. Along with Graves and Slotin, 6 other people were in the room, all suffering minor injuries, with only 1 dying of what was likely radiation-caused leukemia 19 years later. After these 2 incidents, the demon core needed some time for its radioactivity to decline before it could be evaluated for use in testing. Two other cores were used in the Operation Crossroads tests. These tests resulted in unexpected levels of radiation in the testing area, which eventually led to the decision to scrap further testing altogether. The demon core was soon melted down and its materials recycled for use in other cores.
Acetanilide has the molecular formula C8H9NO. It is a stable, though combustible, solid that commonly appears as a gray or white powder. It is also known as N-phenylacetamide, acetanil and Antifebrin. In the 19th century, acetanilide was used to synthesize dyes, rubbers and penicillin. It was also used to produce 4-acetamidobenzenesulfonyl chloride, which is a key compound for manufacturing sulfa drugs. The experimental melting point of acetanilide ranges from 112 to 154 degrees Celsius, based on studies by Alfa Aesar, Jean-Claude Bradley, Oxford University and others. The experimental boiling point of acetanilide ranges from 304 to 305 degrees Celsius.
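As a quick check on the formula C8H9NO, here is a small sketch that sums standard atomic weights to get the molar mass. The atomic-weight constants are standard reference values; the calculation itself is only illustrative.

```c
#include <stdio.h>

int main(void)
{
    /* Standard atomic weights in g/mol. */
    const double C = 12.011, H = 1.008, N = 14.007, O = 15.999;

    /* Acetanilide, C8H9NO */
    double molar_mass = 8 * C + 9 * H + 1 * N + 1 * O;
    printf("Acetanilide molar mass: %.2f g/mol\n", molar_mass);  /* about 135.17 */
    return 0;
}
```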
African Americans have long migrated into, out of, and through Kentucky seeking better lives. Before emancipation, runaway slaves traveled the Underground Railroad through Kentucky. (See KET's Kentucky's Underground Railroad: Passage to Freedom for more historical background on this period.) Following the Civil War and freedom, blacks left rural areas to move to more urban environments, attempting to find employment beyond farming. Some of them left the state, heading to Kansas and other places that seemed to offer greater opportunities. But others saw Kentucky itself as that land of opportunity. In the late 1800s, many African Americans from the Deep South moved into the Eastern Kentucky coalfields, seeking jobs as miners. At one time, there were as many as seven primarily black communities in Eastern Kentucky. But the mechanization of coal mining eliminated many jobs, causing miners to look elsewhere for work to support themselves and their families. With Jim Crow laws and segregation providing additional incentives for leaving Kentucky, many black Kentucky families left for Cincinnati, Pittsburgh, Detroit, Dayton, Indianapolis, and cities in Canada. But even within the state, African Americans left rural areas for cities. In 1890, 28% of Kentucky's blacks lived in urban areas as defined by the U.S. Census Bureau. By 1900, that number was 35%; and by 1910, it had climbed to 41%. The shift of the black population from rural to urban areas continued throughout the 20th century. By 1960, 71% of Kentucky's black citizens lived in the cities.
Dichloromethane, also known as methylene chloride, is a colourless liquid that has a mild sweet odour, evaporates quickly and does not burn easily. It is widely used as an industrial solvent and as a paint stripper. It can be found in certain aerosol and pesticide products and is used to manufacture photographic film. The chemical may also be found in some spray paints, automotive cleaners, and other household products. Methylene chloride does not appear to occur naturally in the environment. It is made from methane gas or wood alcohol. American production grew steadily during the 1970s and early 1980s to reach a peak of 281,000 tonnes in 1984. Due to a decrease in demand, production dropped to 181,000 tonnes in 1994. European production decreased from an estimated 200,000 tonnes in 1984 to 138,000 tonnes in 1996. Environmental emissions in Europe during the 1990s were estimated to be around 44.6 tonnes a year. Most of the methylene chloride released to the environment results from its use as an end product by various industries and the domestic use of aerosol products and paint removers. Due to its high volatility it is mainly released to the air, and to a lesser extent to water and soil. In the atmosphere it is degraded by sunlight and by reactions with other chemicals, and has a half-life of between 53 and 127 days. It has a moderate water solubility of 20 g/l and a low tendency to adsorb to particles and sediments. Because of its high volatility, dichloromethane is expected to evaporate rapidly from water bodies. It is also degraded in water within 1 to 6 days, by reactions with other chemicals or by biodegradation by bacteria. Marine fish seem to start dying when exposed to dichloromethane concentrations above 97 mg/l, and marine invertebrates at concentrations above 109 mg/l. Marine algae can survive short exposure to dichloromethane concentrations up to 662 mg/l. It is suspected that dichloromethane in the North Sea might reach concentrations up to 1 µg/l in heavily polluted coastal areas, although concentrations in polluted estuaries typically range around 0.1 µg/l.
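The 53–127 day atmospheric half-life quoted above implies an exponential decline of any given release. The sketch below assumes ideal first-order decay with no ongoing emissions or transport, which is a simplification of real atmospheric chemistry, but it shows how the two ends of the half-life range play out over a year.

```c
#include <math.h>
#include <stdio.h>

/* Fraction of an initial release still present after t days,
   assuming simple first-order (exponential) decay. */
static double fraction_remaining(double days, double half_life_days)
{
    return pow(0.5, days / half_life_days);
}

int main(void)
{
    double t = 365.0;  /* one year */
    printf("Short half-life (53 d):  %.1f%% left\n", 100.0 * fraction_remaining(t, 53.0));   /* ~0.8%  */
    printf("Long half-life (127 d):  %.1f%% left\n", 100.0 * fraction_remaining(t, 127.0));  /* ~13.6% */
    return 0;
}
```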
Order of Operations and the TI-84 Plus Calculator
The order in which the TI-84 Plus calculator performs operations is the standard order that you are used to. Spelled out in detail, here is the order in which the calculator performs operations:
The calculator simplifies all expressions surrounded by parentheses.
The calculator evaluates all functions that are followed by the argument. These functions supply the first parenthesis in the pair of parentheses that must surround the argument. An example is sin x. When you press [SIN] to access this function, the calculator inserts sin( on-screen. You then enter the argument and press [)].
The calculator evaluates all functions entered after the argument. An example of such a function is the square function. You enter the argument and press [x²] to square it.
Evaluating –3² may not give you the expected answer. You think of –3 as being a single, negative number. So when you square it, you expect to get +9. But the calculator gets –9 (as indicated in the first screen). This happens because the normal way to enter –3 into the calculator is by pressing [(-)] — and pressing the [(-)] key is equivalent to multiplying by –1. Thus, in this context, –3² = –1 * 3² = –1 * 9 = –9. To avoid this potentially hazardous problem, always surround negative numbers with parentheses before raising them to a power.
The calculator evaluates powers entered using the [^] key and roots entered using the square root function. The square root function is found in the Math menu. You can also enter various roots by using fractional exponents — for example, the cube root of 8 can be entered as 8^(1/3).
The calculator evaluates all multiplication and division problems as it encounters them, proceeding from left to right.
The calculator evaluates all addition and subtraction problems as it encounters them, proceeding from left to right.
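The same precedence rule applies in most programming languages, so the calculator's behaviour can be reproduced away from the calculator as well. A small C illustration (C has no exponent operator, so pow() stands in for the [^] key):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The calculator reads -3^2 as -(3^2): square first, then negate. */
    double unparenthesized = -pow(3.0, 2.0);   /* -(3^2) = -9 */
    double parenthesized   = pow(-3.0, 2.0);   /* (-3)^2 = +9 */

    printf("-3^2   -> %g\n", unparenthesized);  /* prints -9 */
    printf("(-3)^2 -> %g\n", parenthesized);    /* prints 9  */
    return 0;
}
```

As on the TI-84 Plus, wrapping the negative number in parentheses is what changes the result.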
Did the Constitution and the Federal government provide more or less freedom than the Articles of Confederation?
Overall, there was not that much difference between the amount of freedom that was available to Americans after the Constitution was ratified as compared to before. The Constitution, of course, included the Bill of Rights. This codified the rights that were guaranteed and ensured that the federal government would not be allowed to infringe upon them. Therefore, there was no real loss of civil liberties when the Constitution took effect. It can be argued that the Constitution increased economic rights. With the Constitution ratified, contracts could not be abrogated by legislatures and interstate commerce could not be prevented by the states. These clauses of the Constitution increased protections for economic rights. The federal government under the Constitution, then, did not infringe on civil liberties significantly more than previously and there were more protections for economic rights.
Scientists have found the secrets of the old ship unearthed in 2010 under the ruins of the Twin Towers. First, the large vessel—buried under 22 feet (6.7 meters) of soil and wreckage—was built around the same time the Declaration of Independence was signed. There's more—but there's also one big mystery left unsolved. By comparing the wood's ring patterns with the historical record, researchers at Columbia's Tree Ring Lab, led by Dr Martin-Benito, found that the ship was built in a Philadelphia shipyard around 1773. Most importantly, the rings matched samples from Independence Hall—the building where the founding fathers signed both the Declaration of Independence and the Constitution of the United States. In July 2010, archaeologists monitoring excavation at the World Trade Center site (WTC) in Lower Manhattan found the remains of a portion of a ship's hull. Because the date of construction and origin of the timbers were unknown, samples from different parts of the ship were taken for dendrochronological dating and provenancing. After developing a 280-year-long floating chronology from 19 samples of the white oak group (Quercus section Leucobalanus), we used 21 oak chronologies from the eastern United States to evaluate absolute dating and provenance. Our results showed the highest agreement between the WTC ship chronology and two chronologies from Philadelphia (r = 0.36; t = 6.4; p < 0.001; n = 280) and eastern Pennsylvania (r = 0.35; t = 6.3; p < 0.001; n = 280). The last ring dates of the seven best-preserved samples suggest trees for the ship were felled in 1773 CE or soon after. Our analyses suggest that all the oak timbers used to build the ship most likely originated from the same location within the Philadelphia region, which supports the hypothesis, independently drawn from idiosyncratic aspects of the vessel's construction, that the ship was the product of a small shipyard. Few late-18th Century ships have been found and there is little historical documentation of how vessels of this period were constructed. Therefore, the ship's construction date of 1773 is important in confirming that the hull encountered at the World Trade Center represents a rare and valuable piece of American shipbuilding history. Past analysis of the wood also found burrowing holes produced by a worm plague, which researchers think the ship picked up on a trip to the Caribbean. Dr Martin-Benito believes that this was the reason why the ship suffered a premature death on the coast of Manhattan. One mystery remains, however: how the hell did the ship end up down here? Here's a possible answer: historians still aren't certain whether the ship sank accidentally or if it was purposely submerged to become part of a landfill used to bulk up Lower Manhattan's coastline. Oysters found fixed to the ship's hull suggest it at least languished in the water for some time before being buried by layers of trash and dirt.
This picture illustrates the idea of "atomic mass". The carbon atom (14C) nucleus on the top has 6 protons plus 8 neutrons, giving it an atomic mass of 14. Tritium (3H), an isotope of hydrogen, is shown on the bottom. It has 1 proton plus 2 neutrons in its nucleus, giving it an atomic mass of 3. Original artwork by Windows to the Universe staff (Randy Russell).
The atomic number of an atom tells us how many protons are in the nucleus of that atom. Why is that important? The chemical properties of an element are determined by the number of electrons in its atoms, and the number of electrons equals the number of protons in "normal", neutral atoms. Each element has a different number of protons in the nuclei of its atoms; so each element has a different atomic number. Hydrogen atoms have 1 proton, and thus an atomic number of 1. Carbon has 6 protons and an atomic number of 6; oxygen has 8 protons and thus an atomic number of 8. The atomic number of uranium is 92!
Scientists also use the concept of "atomic mass". Since the nucleus of an atom contains nearly all (more than 99%) of an atom's mass, "atomic mass" is more-or-less a description of the mass in the nucleus. The atomic mass of an atom is essentially a count of the number of neutrons plus the number of protons. Common carbon has 6 protons and 6 neutrons in each carbon atom, so its atomic mass is 12 ( = 6 + 6). Sometimes scientists use the letter "Z" to stand for atomic number and the letter "A" to stand for atomic mass.
Most elements have different "versions" with varying numbers of neutrons. The different versions are called isotopes. Carbon, for example, has isotopes with 7 neutrons and with 8, along with the standard 6-neutron variety. Scientists specify which isotope they are talking about by including the atomic mass in the name. Normal carbon is thus carbon-12, while the less common varieties are written as carbon-13 and carbon-14. Remember, however, that the different isotopes of carbon behave almost identically in most chemical reactions, for they share the same atomic number.
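The relationships described above (atomic number Z = number of protons, atomic mass A = protons + neutrons, so neutrons = A - Z) can be tabulated directly. Here is a small sketch using the isotopes mentioned in the article; the struct and field names are just for illustration.

```c
#include <stdio.h>

struct isotope {
    const char *name;
    int protons;   /* atomic number, Z */
    int mass;      /* atomic mass number, A = protons + neutrons */
};

int main(void)
{
    struct isotope list[] = {
        { "hydrogen-1",  1,  1 },
        { "tritium",     1,  3 },
        { "carbon-12",   6, 12 },
        { "carbon-13",   6, 13 },
        { "carbon-14",   6, 14 },
    };

    for (int i = 0; i < 5; i++) {
        int neutrons = list[i].mass - list[i].protons;  /* N = A - Z */
        printf("%-12s Z=%d  A=%2d  neutrons=%d\n",
               list[i].name, list[i].protons, list[i].mass, neutrons);
    }
    return 0;
}
```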
By taking a series of near-atomic resolution snapshots, Cornell University and Harvard Medical School scientists have observed step-by-step how bacteria defend against foreign invaders such as bacteriophage, a virus that infects bacteria. The process they observed uses CRISPR (clustered regularly interspaced short palindromic repeats) sites, where the cell’s DNA can be snipped to insert additional DNA. Biologists use CRISPR for genetic engineering experiments, but cells may have evolved the mechanism as part of a defense system. The cell uses these locations to store molecular memories of invaders so that they can be selectively eradicated at the next encounter. ”The bug’s immunity system works just as efficiently as ours, except our system functions at the protein recognition level, whereas CRISPR works at the nucleic acid recognition level,” explained Ailong Ke, professor of molecular biology and genetics. Upon first encounter, the bacteria insert a bit of an invader’s DNA into its own genome at the CRISPR location. When needed, an RNA transcript of the stored DNA, called guide RNA, can be assembled with other proteins into a complex called Cascade (CRISPR Associated Complex for Antiviral Defense). The system is so efficient and precise that researchers have thought of ways to re-tool it for genome editing applications, to introduce changes at precise locations of DNA. ”A CRISPR revolution is sweeping through biology as we speak,” said Ke. In previous research, Ke had defined the function of the protein-RNA complexes involved in the process and used the X-ray crystallography facilities of the Cornell High Energy Synchrotron Source (CHESS) to determine their structure. Yibei Xiao, a postdoctoral researcher in Ke’s lab worked out the entire immunity process, step by step. ”The next step is to capture structural snapshots of these steps, to produce a high-definition movie of what’s going on,” said Ke. Ke collaborated with Maofu Liao, assistant professor of cell biology at Harvard Medical School, who is an expert in using a cryo-electron microscope to determine high-resolution structures of macromolecules frozen in a thin layer of ice. Working with the bacteria Thermobifida fusca, used in fermentation, Ke’s lab prepared samples representing distinct stages of the immune response. Liao and his postdoctoral researcher Min Luo froze these samples and took high-resolution snapshots at each step. The study focused on a particular version of CRISPR-related defense known as Type I-E. ”We knew roughly how it works, but without the structures we didn’t have the details,” Ke said. ”A picture is worth a thousand words.” ”Scientists hypothesized that these states existed but they were lacking the visual proof of their existence,” said Luo. ”Now, seeing really is believing.” The findings, published June 29 in the journal Cell, provide structural data that can improve the efficiency and accuracy of biomedical CRISPR operations. Aspects of this defense mechanism – particularly how it searches for its DNA targets – were unclear and have raised concerns about unintended off-target effects and the safety of using the CRISPR-Cas mechanism for treating human diseases. ”To solve problems of specificity, we need to understand every step of CRISPR complex formation,” said Liao. ”To apply CRISPR in human medicine, we must be sure the system does not accidentally target the wrong genes,” said Ke. 
”Our argument is that the Type I system is potentially more accurate than CRISPR-Cas9, because it checks a longer stretch of sequence before action, and the system divides target searching and degradation into two steps, with built-in safety features in between.” Type I CRISPR so far offers limited utility for precision gene editing, but it may be used as a tool to combat antibiotic-resistant strains of bacteria. Ke and Xiao co-authored another paper in the same issue of Cell, with Ilya Finkelstein, assistant professor of molecular biosciences at the University of Texas at Austin, to characterize how Cascade searches for targets at the single-molecule level.
Analog-to-digital converters are among the most widely used devices for data acquisition. Digital computers use binary values, but in the physical world everything is analog. Therefore, we need an analog-to-digital converter to translate these analog signals into digital signals. An ADC has n-bit resolution, where n can be 8, 10, 12, 16, etc. ADC chips are either parallel or serial. A parallel ADC has 8 or more pins dedicated to bringing out the binary data. ADC0808 is such a parallel ADC with 8-bit resolution. ADC0808 has 8 input channels, i.e., it can take eight analog signals. To select among these input channels, three select pins have to be configured. In this circuit the microcontroller AT89C51 is used to send the control and enabling signals to the ADC. In the ADC, Vref(+) (pin 12) and Vref(-) (pin 16) are used to set the reference voltage. If Vref(-) is GND and Vref(+) = 5V, the step size is 5V/256 = 19.53mV. ADC0808 has 8 input pins IN0-IN7 (pins 1-5 & 26-28). To select an input pin, there are three selector pins A, B and C (pins 25, 24 & 23, respectively). ALE (address latch enable, pin 22) is given a low-to-high pulse to latch in the address. SC (start conversion, pin 6) instructs the ADC to start the conversion; when a low-to-high pulse is given to this pin, the ADC starts converting the data. EOC (end of conversion, pin 7) is an output pin and goes low when the conversion is complete and ready to be picked up, and OE (output enable, pin 9) is given a low-to-high pulse to bring the converted data from the internal register of the ADC to the output pins. Pin 11 is Vcc and pin 13 is GND. Here we are using an external clock for the clock input (pin 10). The connection of the ADC with the microcontroller can be seen in the circuit diagram. ALE (pin 22) of the ADC is connected to P1.0 of the controller AT89C51. Selector pins A, B, C (pins 25, 24 & 23) of the ADC are connected to pins P1.4, P1.5 & P1.6 of the microcontroller, respectively. SC (pin 6) of the ADC is connected to P1.1 of the controller. EOC (pin 7) of the ADC is connected to P1.2 of the microcontroller, and OE (pin 9) of the ADC is connected to P1.3 of the microcontroller. The output of the ADC goes to port P0 (pins 32-39) of the controller. The result is sent to port P2 (pins 21-28) of the controller, which is connected to eight LEDs. The program continuously scans the input of the ADC and displays the output on port P2. By varying the input of the ADC, the output of the ADC changes and the change is reflected in the glowing pattern of the LEDs connected to the port. To provide the clock input to the ADC, Timer0 is used in interrupt-enabled mode to generate a clock of frequency 500 kHz. To enable Timer0 in interrupt-enabled mode, the register IE is loaded with the value 0x82. (Refer to Timer programming with 8051.) Every time the Timer completes its count, pin P1.7 toggles its state.
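To make the control sequence concrete, here is a sketch of a polled read routine in Keil-style 8051 C (reg51.h and the sbit keyword are Keil C51 conventions), following the pin wiring described above. The delay routine, the helper names and the EOC polarity are illustrative assumptions; the Timer0 interrupt that generates the ADC clock on P1.7 is described in the article but omitted here.

```c
#include <reg51.h>

/* Control lines, wired as described above (Keil C51 sbit syntax). */
sbit ALE   = P1^0;  /* address latch enable        */
sbit SC    = P1^1;  /* start conversion            */
sbit EOC   = P1^2;  /* end of conversion (input)   */
sbit OE    = P1^3;  /* output enable               */
sbit SEL_A = P1^4;  /* channel select bits A, B, C */
sbit SEL_B = P1^5;
sbit SEL_C = P1^6;

static void short_delay(void)
{
    unsigned char i;
    for (i = 0; i < 50; i++)
        ;  /* crude settling delay; tune for the actual clock */
}

/* Read one 8-bit sample from ADC0808 channel 0..7. */
static unsigned char adc_read(unsigned char channel)
{
    unsigned char value;

    /* 1. Put the channel number on the select lines. */
    SEL_A = channel & 0x01;
    SEL_B = (channel >> 1) & 0x01;
    SEL_C = (channel >> 2) & 0x01;

    /* 2. Latch the address, then start the conversion. */
    ALE = 1; short_delay(); ALE = 0;
    SC  = 1; short_delay(); SC  = 0;

    /* 3. Wait for end of conversion. The text above describes EOC going
       low when the result is ready; check this polarity against the
       ADC0808 datasheet for your hardware.                              */
    while (EOC == 1)
        ;

    /* 4. Enable the output latch and read the data bus. */
    OE = 1; short_delay();
    value = P0;
    OE = 0;
    return value;
}

void main(void)
{
    P0 = 0xFF;             /* configure P0 as an input port          */
    while (1)
        P2 = adc_read(0);  /* echo the channel-0 result to the LEDs  */
}
```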
Topos (τόπος, Greek 'place', from tópos koinós, common place; pl. topoi), in Latin locus (from locus communis), referred in the context of classical Greek rhetoric to a standardised method of constructing or treating an argument. The technical term topos is variously translated as "topic", "line of argument" or "commonplace."
Ernst Robert Curtius expanded this concept in studying topoi as "commonplaces": reworkings of traditional material, particularly the descriptions of standardised settings, but extended to almost any literary meme. For example, Curtius notes the common observation in the ancient classical world that “all must die” as a topos in consolatory oratory; that is, one facing one’s own death often stops to reflect that greater men from the past died as well. A slightly different kind of topos noted by Curtius is the invocation of nature (sky, seas, animals, etc.) for various rhetorical purposes, such as witnessing to an oath, rejoicing or praising God, or sharing in the mourning of the speaker.
Critics have traced the use and re-use of such topoi from the literature of classical antiquity to the 18th century and beyond into postmodern literature. This is illustrated in the study of archetypal heroes and in the theory of The Hero With A Thousand Faces (1949), a book written by modern theorist Joseph Campbell. For example, oral histories passed down from pre-historic societies contain literary aspects, characters, or settings that appear again and again in stories from ancient civilizations, religious texts, and even more modern stories. The biblical creation myths and "the flood" are two examples, as they are repeated in other civilizations' earliest texts (such as the Epic of Gilgamesh or the deluge myth), and are seen again and again in historical texts and references.
- Ernst Robert Curtius, European Literature and the Latin Middle Ages, trans. from German by Willard R. Trask (New York, NY: Pantheon Books, 1953), 80.
- Curtius, European Literature and the Latin Middle Ages, 92–94.
- Branham, R. Bracht; Kinney, Daniel (1997). Introduction to Petronius Satyrica.
Science Safari: Energy Resources
Students discover how scientific methods are integral to the creation of energy. In this energy resources lesson, students follow the provided procedures to learn how science impacts energy production.
Discuss whether you see a way around exclusivism, pluralism, and inclusivism that might still keep the integrity of each particular religion in place. Discuss how religious language might or might not play a role in your conclusion.
Exclusivism (the doctrine that only one religion is “true”) is the foundation of many religions. If Scripture is correct, “Out of the abundance of the heart, the mouth speaks.” Whatever a person believes passionately will come out of his or her mouth. For people who believe that their belief system offers the only means to salvation, and that souls depend on the truth of that system, that belief will be shared with others. They can fully respect the dignity of other people, and understand the depth of the beliefs of others; they want to share the truth so that all will come to salvation. One can “witness” or “evangelize” by simply stating one’s belief, while allowing others to share their own beliefs in the same way.
Inclusivism may be compatible with exclusivism, in that (in Christianity, for example) inclusivism maintains that Jesus Christ is the only means of salvation, but that salvation (through Christ) can be obtained without a specific belief in Christ, through the “general revelation” of nature. People who embrace inclusivism understand that people who have never heard the gospel of Christ may, through general revelation, come to a saving faith without ever hearing of Christ.
Pluralism maintains that all religions are equally valid and that any religion may bring a person to salvation. This cannot be compatible with exclusivism (within a person) but may coincide with inclusivism. One cannot simultaneously believe that there is only one means of salvation and believe that there are many ways to salvation.
Within a group of people, discussions can take place that allow sharing and debates of beliefs. These discussions can get passionate and even heated at times, and they depend on the ability of the participants to present their convictions, listen to other people, and maintain respect and civility for the other people, even if they do not respect the other religion. If respect and civility are not present, the “doctrine of ‘just shut up’” might come into play.
When two groups of people of opposing faiths meet together, there may be certain “ecumenical” guidelines in place. For instance, immediately after September 11, 2001, a church invited a local Muslim congregation to a potluck on Sunday afternoon. The agreement was that the religious leaders would not “proselytize.” Before eating, the Christian pastor prayed to the “God of Abraham,” in respect of the beliefs of the Muslims present. The mullah began his prayer with “There is one God and Muhammad is his prophet.” Both maintained respect for the beliefs of the other group of believers, but only one presented an understanding of “inclusivism.”
At times, it appears that “let’s discuss our beliefs” turns into “just let me talk and eventually you’ll agree with me.” And “let’s compromise” becomes “just let go of your convictions and come to my understanding and we’ll be fine.”
“Religions for Peace” is an interfaith organization that brings a broad range of religions together to work toward social justice and other global issues. In this structure, the integrity of each religion can be maintained because the people do not work for evangelism; they work to prevent global climate change or some other such cause.
At the end of the day, people have worked toward their social goals, have not had their beliefs encroached upon, and can get up and attend their own church without having their beliefs challenged. If, however, within that organization, people begin to preach their “truth” to the people they are trying to help, tension becomes unavoidable, since if one is “right”, the others must be “wrong” (even if the “right truth” is that exclusivism is wrong). The integrity of the opposing religions cannot be maintained because the “agreement” of ecumenicalism is not being respected.
The “language” of religion may or may not play a part. At times it seems as though two people may use the same words but mean very different things. Even among Christians, one can ask “What is ‘the Gospel?’” and hear different answers: “the truth”, “the Good News”, “We sinned, Jesus died” or even “Matthew, Mark, Luke, and John.” This leads to frustration because the conversation may appear fruitful until one party or another realizes that “justification” has very different definitions for the people involved. Only through discussion and honest questioning (and listening) does understanding occur.
"Those who stand for nothing, fall for anything" - Alexander Hamilton. The ability of a person or group of people to maintain the integrity of their religion includes knowing the beliefs of the religion and having the conviction that the belief is correct. It means little to belong to a religion if one chooses to remain unaware of the doctrines and (even if aware) is willing to sacrifice those beliefs on the altar of “can’t we all just get along-ism.” One can remain firm in one’s beliefs without sacrificing civility. The integrity of a religion can be maintained…unless it is weakened by those within.
Dung beetle, (subfamily Scarabaeinae), also called dung chafer or tumblebug, any of a group of beetles in the family Scarabaeidae (insect order Coleoptera) that forms manure into a ball using its scooperlike head and paddle-shaped antennae. In some species the ball of manure can be as large as an apple. In the early part of the summer the dung beetle buries itself and the ball and feeds on it. Later in the season the female deposits eggs in balls of dung, on which the larvae will later feed. Dung beetles are usually round with short wing covers (elytra) that expose the end of the abdomen. They vary in size from 5 to 30 mm (0.2 to about 1.2 inches) and are usually dark in colour, although some have a metallic lustre. In many species, there is a long, curved horn on the top of the male’s head. Dung beetles can eat more than their own weight in 24 hours and are considered helpful to humans because they speed up the process of converting manure to substances usable by other organisms. The sacred scarab of ancient Egypt (Scarabaeus sacer), found in many paintings and jewelry, is a dung beetle. Egyptian cosmogony includes the scarab beetle rolling its ball of dung with the ball representing the Earth and the beetle the Sun. The six legs, each with five segments (total 30), represent the 30 days of each month (actually, this species has only four segments per leg, but closely related ones do have five). An interesting member of this subfamily is Aulacopris maximus, one of the largest dung beetle species found in Australia, reaching as many as 28 mm (1.1 inches) in length. The Indian scarabs Heliocopris and certain Catharsius species make very large manure balls and cover them with a layer of clay, which becomes so hard when dry that the balls were once thought to be old stone cannonballs. Members of other scarab subfamilies (Aphodiinae and Geotrupinae) are also called dung beetles. However, instead of forming balls, they excavate a chamber under a pile of dung that is used during feeding or for depositing eggs. The aphodian dung beetle is small (4 to 6 mm, or about 1/5 inch) and usually black with yellow wing covers. The earth-boring dung beetle (e.g., Geotrupes) is about 14 to 20 mm (about 1/2 to 3/4 inch) long and brown or black in colour. Geotrupes stercorarius, known as the dor beetle, is a common European dung beetle.
|Portugal Table of Contents In 1816 Maria I, after twenty-four years of insanity, died and the prince regent was proclaimed João VI (r.1816-26). The new king, who had acquired a court and government in Brazil and a following among the Brazilians, did not immediately return to Portugal, and liberals continued to agitate against the monarchy. In May 1817, General Gomes Freire Andrade was arrested on treason charges and hanged, as were eleven alleged accomplices. Beresford, who was still commander in chief of the Portuguese army, was popularly blamed for the harshness of the sentences, which aggravated unrest in the country. The most active center of Portuguese liberalism was Porto, where the Sinédrio was situated and quickly gaining adherents. In March 1820, Beresford went to Brazil to persuade the king to return to the throne. His departure allowed the influence of the liberals to grow within the army, which had emerged from the Peninsular Wars as Portugal's strongest institution. On August 24, 1820, regiments in Porto revolted and established a provisional junta that assumed the government of Portugal until a cortes could be convoked to write a constitution. The regency was bypassed because it was unable to cope with Portugal's financial crisis, and Beresford was not allowed to enter the country when he returned from Brazil. In December 1820, indirect elections were held for a constitutional cortes, which convened in January 1821. The deputies were mostly constitutional monarchists. They elected a regency to replace the provisional junta, abolished seigniorial rights and the Inquisition, and, on September 23, approved a constitution. At the same time, João VI decided to return to Portugal, leaving his son Pedro in Brazil. Upon his arrival in Lisbon, João swore an oath to uphold the new constitution. After his departure from Brazil, Brazilian liberals, inspired by the independence of the United States and the independence struggles in the neighboring Spanish colonies, began to agitate for freedom from Portugal. Brazilian independence was proclaimed on October 12, 1822, with Pedro as constitutional emperor. The constitution of 1822 installed a constitutional monarchy in Portugal. It declared that sovereignty rested with the nation and established three branches of government in classical liberal fashion. Legislative power was exercised by a directly elected, unicameral Chamber of Deputies; executive power was vested in the king and his secretaries of state; and judicial power was in the hands of the courts. The king and his secretaries of state had no representation in the chamber and no power to dissolve it. Two broad divisions emerged in Portuguese society over the issue of the constitution. On the one hand were the liberals who defended it, and on the other, the royalists who favored absolutism. The first reaction to the new liberal regime surfaced in February 1823 in Trás-os-Montes where the count of Amarante, a leading absolutist, led an insurrection. Later, in May, Amarante once again sounded the call to arms, and an infantry regiment rose at Vila Franca de Xira, just north of Lisbon. Some of the Lisbon garrison joined the absolutists, as did the king's younger brother, Miguel, who had refused to swear to uphold the constitution. After the Vilafrancada, as the uprising is known, Miguel was made generalíssimo of the army. In April 1824, Miguel led a new revolt--the Abrilada--which sought to restore absolutism. 
João, supported by Beresford, who had been allowed to return to Portugal, dismissed Miguel from his post as generalíssimo and exiled him to France. The constitution of 1822 was suspended, and Portugal was governed under João's moderate absolutism until he died in 1826. Source: U.S. Library of Congress
At standard gravity, the length of a seconds pendulum is 0.994 m (39.1 in). This length was determined (in toises) by Marin Mersenne in 1644. In 1660, the Royal Society proposed that it be the standard unit of length. In 1675 Tito Livio Burattini proposed that it be named the metre. In 1790, one year before the metre was ultimately based on a quadrant of the Earth, Talleyrand proposed that the metre be the length of the seconds pendulum at a latitude of 45°. This option, with one-fifth of this length defining the foot, was also considered by Thomas Jefferson and others for redefining the yard in the United States shortly after gaining independence from the British Crown.
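The 0.994 m figure follows from the small-angle pendulum period T = 2π·sqrt(L/g): solving for L with T = 2 s gives L = gT²/(4π²). A quick numerical check, assuming standard gravity:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 3.14159265358979;
    const double g  = 9.80665;   /* standard gravity, m/s^2          */
    const double T  = 2.0;       /* a seconds pendulum has a 2 s period */

    /* From T = 2*pi*sqrt(L/g), solve for the length L. */
    double L = g * T * T / (4.0 * pi * pi);
    printf("Seconds-pendulum length: %.3f m\n", L);   /* about 0.994 m */
    return 0;
}
```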
A video about the Cubic Unit Cells and also a brief description of Coordination Numbers. Video by René Van Wyk. Reviews the three types of unit cells in the cubic crystal system. A study guide on crystal systems and unit cells. Explains the concept of the unit cell, the various types of unit cells, and the properties of each of these types of unit cells.
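As a companion to those resources, the standard bookkeeping for the three cubic cells can be written down directly: corner atoms are shared by 8 cells, face atoms by 2, and a body-centred atom belongs to one cell, giving 1, 2 and 4 atoms per cell for simple cubic, BCC and FCC, with coordination numbers 6, 8 and 12. A small sketch (the struct layout is just illustrative):

```c
#include <stdio.h>

struct cubic_cell {
    const char *name;
    int corners, faces, body;  /* lattice points of each kind  */
    int coordination;          /* nearest neighbours per atom  */
};

int main(void)
{
    struct cubic_cell cells[] = {
        { "simple cubic (SC)",        8, 0, 0,  6 },
        { "body-centred cubic (BCC)", 8, 0, 1,  8 },
        { "face-centred cubic (FCC)", 8, 6, 0, 12 },
    };

    for (int i = 0; i < 3; i++) {
        /* Corner atoms count 1/8 each, face atoms 1/2, body atoms 1. */
        double atoms = cells[i].corners / 8.0
                     + cells[i].faces   / 2.0
                     + cells[i].body    / 1.0;
        printf("%-26s atoms per cell = %.0f, coordination number = %d\n",
               cells[i].name, atoms, cells[i].coordination);
    }
    return 0;
}
```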